Realtime Events using Tornado and RabbitMQ

At my day job, I needed a way to send real-time events to clients that would, in turn, trigger some action on their side. The clients could ask the server for some computation which may take time.

To tackle this, I ended up running Tornado as a WebSocket server, separate from our web app server (both sit behind nginx). There are a couple of other services which the client may ask for indirectly. Since those computations don't follow the normal request-response cycle, their results have to be pushed to the clients. The communication between client and server is two-way, so WebSocket seemed fitting. For routing messages internally, I decided to use RabbitMQ, with Celery for the actual execution of tasks.

The problem with this is that the RabbitMQ consumer and Tornado each run their own I/O loop. That confused me a little, because I had heard this combo worked for Zulip when I was randomly reading about their architecture. So I duckduckgoed (:D) and found this article: https://reminiscential.wordpress.com/2012/04/07/realtime-notification-delivery-using-rabbitmq-tornado-and-websocket/ . It turns out the author had a similar doubt and found a solution.

The pika library comes with a Tornado adapter named TornadoConnection, which makes it possible to run the RabbitMQ consumer loop inside the Tornado IOLoop itself. The code for the Tornado connection is fairly simple. As the code given in the blog wasn't fully functional, I had to consult the source code of pika a couple of times.

Each WebSocket connection in Tornado gets a unique WebSocketHandler object, and these are not directly accessible from the Tornado application object. But the reverse is true: each WebSocket handler has access to the application object. So, using TornadoConnection, we tie one pika consumer to the Tornado application object.

server.py


import tornado.ioloop
import tornado.web

from consumer import PikaConsumer

# `application` is your tornado.web.Application instance with your
# WebSocketHandler routes; it is assumed to be defined at module level.


def main():
    ''' The main method to run the tornado application '''

    io_loop = tornado.ioloop.IOLoop.instance()

    pc = PikaConsumer(io_loop)

    # tie the single pika consumer to the application object
    application.pc = pc
    application.pc.connect()
    application.listen(8080)
    io_loop.start()

consumer.py


import pika
from pika.adapters.tornado_connection import TornadoConnection


class PikaConsumer(object):
    ''' The pika client the tornado application will be part of '''

    def __init__(self, io_loop):
        print('PikaClient: __init__')
        self.io_loop = io_loop
        self.connected = False
        self.connecting = False
        self.connection = None
        self.channel = None
        self.event_listeners = {}

    def connect(self):
        ''' Connect to the broker '''
        if self.connecting:
            print('PikaClient: Already connecting to RabbitMQ')
            return

        print('PikaClient: Connecting to RabbitMQ')
        self.connecting = True

        cred = pika.PlainCredentials('someuser', 'somepass')
        param = pika.ConnectionParameters(
            host='localhost',
            port=5672,
            virtual_host='somevhost',
            credentials=cred)
        self.connection = TornadoConnection(
                    param,
                    on_open_callback=self.on_connected)
        self.connection.add_on_close_callback(self.on_closed)

    def on_connected(self, connection):
        print('PikaClient: connected to RabbitMQ')
        self.connected = True
        self.connection = connection
        self.connection.channel(self.on_channel_open)

    def on_channel_open(self, channel):
        print('PikaClient: Channel open, Declaring exchange')
        self.channel = channel
        # declare exchanges, which in turn, declare
        # queues, and bind exchange to queues
        self.channel.exchange_declare(
              exchange='someexchange',
              type='topic')
        self.channel.queue_declare(self.on_queue_declare, exclusive=True)

    def on_queue_declare(self, result):
        queue_name = result.method.queue
        self.channel.queue_bind(
            self.on_queue_bind,
            exchange='someexchange',
            queue=queue_name,
            routing_key='commands.*')
        # consume from the queue we just declared and bound
        self.channel.basic_consume(self.on_message, queue=queue_name)

    def on_queue_bind(self, is_ok):
        print('PikaClient: Exchanges and queue created/joined')

    def on_closed(self, connection):
        print('PikaClient: rabbit connection closed')
        self.io_loop.stop()

    def on_message(self, channel, method, header, body):
        print('PikaClient: message received: %s' % body)
        self.notify_listeners(body)
        # important, since rmq needs to know that this msg was received
        # by the consumer. Otherwise, it will be overwhelmed
        channel.basic_ack(delivery_tag=method.delivery_tag)

    def notify_listeners(self, event_obj):
        # do whatever you wish
        pass

    def add_event_listener(self, listener):
        # listener.id is the box id now
        self.event_listeners[listener.id] = {
                'id': listener.id, 'obj': listener}
        print('PikaClient: listener %s added' % repr(listener))

    def remove_event_listener(self, listener):
        try:
            del self.event_listeners[listener.id]
            print('PikaClient: listener %s removed' % repr(listener))
        except KeyError:
            pass

    def event_listener(self, some_id):
        ''' Gives the socket object with the given some_id '''

        tmp_obj = self.event_listeners.get(some_id)
        if tmp_obj is not None:
            return tmp_obj['obj']
        return None

That's it. In your WebSocketHandler objects, you can access the consumer via self.application.pc.
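For completeness, here is a minimal sketch of what such a handler could look like; the ClientSocket class, its id attribute and the way the id is generated are my own illustration, not part of the original code:

import tornado.websocket


class ClientSocket(tornado.websocket.WebSocketHandler):
    ''' Hypothetical handler that registers itself with the consumer '''

    def open(self):
        # any unique, hashable id works; the consumer
        # keys its listeners by this attribute
        self.id = id(self)
        self.application.pc.add_event_listener(self)

    def on_close(self):
        self.application.pc.remove_event_listener(self)

With this in place, notify_listeners in the consumer can loop over self.event_listeners and call write_message(event_obj) on each stored handler object.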

Although this is working fine for me right now, I am not fully satisfied with it: at present, each connection listens to a single queue, because in RabbitMQ one consumer cannot listen to multiple queues.

Running Firefox as a kiosk application on RPi3

At my day job, I had to run Firefox as a kiosk application on an RPi3. In this blog post, I will note down the steps I took so that I or my team members can refer to them when needed.

I have not used any display manager or desktop environment, but I had to use matchbox-window-manager to make Firefox run in full screen.

  1. sudo apt-get install vim (this is just for me, I can't help it)
  2. sudo apt-get install xorg xutils matchbox-window-manager 
  3. sudo apt-get install iceweasel (this is firefox :p)
  4. sudo raspi-config
    1. Go to boot options and setup auto login for user pi
    2. Change the keyboard layout if you wish to
  5. As a sudo user, do the following steps:
    1. cp -r /home/pi  /opt/
    2. cd /opt/pi
    3. chmod -R a+r .
    4. touch .xsessionrc
    5. chmod a+x .xsessionrc
  6. Open .xsessionrc as a sudo user and put the following lines there:
      xset s off # no screen saver
      xset -dpms  # disable some power consumption thingy
      xset s noblank # don't blank the rpi screen
      matchbox-window-manager &
      while true; do
        firefox --url http://127.0.0.1
      done
  7. Copy .xsessionrc file to /home/pi/ 
    1. cp .xsessionrc /home/pi
  8. Configure .bash_profile to start the X server when the user logs in:
      if [ -z "$DISPLAY" ] && [ -n "$XDG_VTNR" ] && [ "$XDG_VTNR" -eq 1 ]; then
        exec startx
      fi
  9. Install an extension in Firefox to apply kiosk mode. The first extension that comes up when you search for “kiosk” in add-ons works fine.

The blog post that helped most in coming up with this setup was: http://www.alandmoore.com/blog/2011/11/05/creating-a-kiosk-with-linux-and-x11-2011-edition/

Star a project on pagure

This feature was marked as “wishful” and it was supposed to be low priority, but I implemented it out of frustration. It should be in the next feature release.

The star feature is already there on GitHub and GitLab, and I use it on GitHub a lot. If I like an open source project, I star it. You also get a list of all the projects you have starred, which can be helpful if you came across a project some time ago, starred it, and now want to look at it again (given that you don't exactly remember the name; otherwise you could just search). Also, if the project author/maintainer is anything like me, they would love to see the star count rising.

For some time now, I had been asking people who use Pagure often (and hopefully like Pagure) to star it on GitHub. The star count of Pagure was 96 at the time I started my work on the star project feature; last year at this time, it was in the late 60s.

If you star a project on GitHub, your followers come to know that you have liked it; they can see that on their GitHub homepage. If they check out the project and like it, you have already helped Pagure reach more people with almost zero effort. I can't see one good reason why you would like a project and not star it.

Pagure doesn't have the follow feature, and I am not sure it will have it in the near future. This means the star project feature won't have its full effect. But you can star a project, there is a star count, there is a list of people who have starred a project, and there is a list of starred projects for each user.

Here is how you can use this feature:

  1. Log in to pagure and go to a project’s home page.
  2. There is a star button just beside the fork button, with a star count next to it.
  3. Star it if you like the project.


Here is where you will find your starred projects:

  1. Log in to pagure.
  2. In the drop-down at the top right corner, click “My Stars”.


Here is where you can see everyone who has starred a particular project:

  1. Right beside the star button on the repo page, there is a star count which links to a page listing all the users who have starred the project.



Using Celery with RabbitMQ

RabbitMQ is a message broker and Celery is a task queue. When you run a Celery app, by default it will spawn as many worker processes as there are CPU cores on the machine. When you have a task which needs to be done outside of the normal HTTP request-response cycle, you can use a task queue. RabbitMQ can be configured to decide which worker a task goes to (and to deliver it there), and Celery takes care of the actual execution of the tasks.

Celery supports a lot of message brokers, but RabbitMQ is the default one, so setting up Celery to use RabbitMQ doesn't require much effort. If you already have RabbitMQ installed, all you need to do is create a RabbitMQ user and a virtual host, and give the user access to that virtual host, as described in the Celery documentation.
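For reference, the standard rabbitmqctl commands for that setup look like this (the user, password and vhost names are placeholders):

rabbitmqctl add_user myuser mypassword
rabbitmqctl add_vhost myvhost
rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"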

Then you need to specify the broker URL, in this format, in the Celery app:

broker_url = 'amqp://myuser:mypassword@localhost:5672/myvhost'

The default exchange that Celery listens to is named 'celery', and the routing key is also 'celery'. The 'celery' exchange is a direct exchange. AMQP, the scheme in the URL, is the protocol that RabbitMQ implements. The username, password and virtual host here are the RabbitMQ ones that you want Celery to use. Based on the given broker URL, Celery figures out which message broker is being used.
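Putting it together, a minimal sketch of such a Celery app could look like this (the module and task names are mine, just for illustration):

# tasks.py
from celery import Celery

app = Celery(
    'tasks',
    broker='amqp://myuser:mypassword@localhost:5672/myvhost')


@app.task
def add(x, y):
    ''' A trivial task, just to have something to execute '''
    return x + y

Running celery -A tasks worker then starts the workers, and calling add.delay(2, 3) from any process that can reach the broker queues the task.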

Using Syntastic for Python development

I use the Syntastic plugin for syntax checking in Vim. Syntastic offers syntax checking for a LOT of languages. But there is a problem I had been facing with it: for a file with more than 4k lines, it takes a lot of time to check the syntax, and that used to happen every time I saved the file. Syntax checking on write is the default behavior.

So I made some changes in my .vimrc so that I could still use Syntastic for larger files. Do note that the syntastic check still takes a long time, but I have configured it to be called whenever I want, rather than on every write operation or on opening a file.

" show list of errors and warnings on the current file
nmap <leader>e :Errors<CR>
" Whether to perform syntastic checking on opening of file
" This made it very slow on open, so don't
let g:syntastic_check_on_open = 0
" Don't check every time I save the file
" I will call you when I need you
let g:syntastic_check_on_wq = 0
" By default, keep syntastic in passive mode
let g:syntastic_mode_map = { 'mode': 'passive' }
" Use :Sc to perform syntastic check
:command Sc :SyntasticCheck
" Check pylint for python
let g:syntastic_python_checkers = ['pylint']
" For jsx - React and React Native
let g:syntastic_javascript_checkers = ['eslint']

This change made opening a large Python file ~25s (yes, seconds) faster. It still takes a lot of time for syntax checking though; I will have to find out why and whether I can do anything about it. I don't want to drop this plugin because it offers so much. I could simply use python-mode for Python syntax checking, but what about the rest of the languages I am going to use?

Sending Emails using Django and SendGrid

Recently, I set up a Django app which uses SendGrid to send emails. I will go through the steps in this short blog post.

  1. Register at SendGrid.
  2. Choose SMTP for sending emails.
  3. Get an API key (the previous step will redirect you there). This key will also be your password. The username that they gave me was apikey, so I guess it stays the same for everyone.
  4. Configure your Django settings like this:
    EMAIL_HOST_USER = '<your username here>'
    EMAIL_HOST = 'smtp.sendgrid.net'
    EMAIL_HOST_PASSWORD = '<your password here>'
    EMAIL_PORT = 587
    EMAIL_USE_TLS = True
    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'  # this is the default value of EMAIL_BACKEND, btw
  5. Use django.core.mail.send_mail to send emails now (see the sketch below).
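Here is a minimal sketch of sending a mail with this setup; the addresses and texts are placeholders:

from django.core.mail import send_mail

send_mail(
    'Subject here',               # subject
    'Here is the message body.',  # message
    'you@yourdomain.com',         # from address
    ['someone@example.com'],      # list of recipients
    fail_silently=False,
)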

Django with uwsgi and nginx on Fedora

Today, I deployed a Django project using uwsgi and nginx on a Fedora 26 instance on AWS. I will talk about that here.

I had used gunicorn in the past but never uwsgi. Getting started with gunicorn was a little easier for me than with uwsgi, primarily because I didn't know I had to install the uwsgi-plugin-python package to use uwsgi with Django. This took me a while because there were no errors, just a “no app could be loaded” problem, and on the internet most results for that kind of error are about Flask. Flask exposes its application object as app, while uwsgi looks for one called application, which it fails to find.

The steps are:

  1. Install dependencies and the project itself:
    • sudo dnf install uwsgi uwsgi-plugin-python nginx
  2. Create a configuration file for uwsgi: uwsgi.ini
  3. Change the nginx config to pass incoming requests on /ethcld/ (the mount point that I used) to uwsgi


Here is my uwsgi file:

[uwsgi]
chdir = /home/fedora/ethcld
plugin = python
# Django's wsgi file
# module = ethereal.wsgi:application
mount = /ethcld=ethereal.wsgi:application
manage-script-name = True
master = True
# maximum number of worker processes
processes = 4
# Threads per process
threads = 2
socket = 127.0.0.1:8001
# clear environment on exit
vacuum = true
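With this file in place, uwsgi can be started with something like:

uwsgi --ini uwsgi.ini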

uwsgi asks for the directory of the project; in my case, that was /home/fedora/ethcld/. mount is optional: if you want to run the application under some namespace (a URL prefix), you have to use mount. Also, if mount is being used, you should not need module.

manage-script-name is important while mounting, otherwise you will get a bunch of 404s. A request usually comes in as something like GET /ethcld/username/, and without the manage-script-name option the /ethcld mount prefix is not stripped off before the path reaches Django, whose URLs only know about /username/.

For the socket, I could have used a Unix socket instead of a TCP port, but somehow I settled for the port.
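For the record, the Unix socket variant would be something like this in uwsgi.ini (the socket path here is arbitrary):

socket = /tmp/ethcld.sock
chmod-socket = 664

with the matching nginx line being uwsgi_pass unix:/tmp/ethcld.sock; instead of the TCP address.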

On the nginx side, I made the following changes:

http {

    include /etc/nginx/mime.types;

    server {

        location ^~ /static/ {
            autoindex on;
            alias /opt/ethcld/static/;
        }

        location ^~ /ethcld {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:8001;
        }

    }

}

nginx supports the uwsgi protocol out of the box, so the options are already there; we just need to use them. For serving static files with Django, it is recommended that we run:

  • python manage.py collectstatic

This will copy (by default) all the static files in the Django app to the location specified by STATIC_ROOT in the settings, and from there we can serve them. There are possible optimizations that can be done, like gzip, but I did this for testing only.
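For this to line up with the nginx alias above, the settings would need something like the following (these exact values are my assumption, chosen to match the nginx snippet):

# settings.py
STATIC_URL = '/static/'
STATIC_ROOT = '/opt/ethcld/static/'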

You need to include mime.types, otherwise browsers will keep rejecting files like stylesheets: without it, everything gets served with the 'text/plain' mime type.

Using IRC from Mobile

In this blog post, I will talk about how I use IRC from my mobile.

I have been using WeeChat as my IRC client for about a year now, and I am happy with it. Generally, I open WeeChat in one tmux session and my project (or projects) in other tmux sessions on my machine, and stay on IRC while working. This serves me well, except for the fact that when you are not connected to the internet, nobody on IRC can leave a message for you.

I didn't know about WeeChat's relay feature until very recently, when maxking was talking about it in #dgplug. With this feature, WeeChat listens for client connections on a specified port, allowing two-way communication between connected clients and WeeChat. The clients can be any device with internet access. The only thing left for this setup to work was an Android app. There were two options, Glowing Bear and weechat-android; the former didn't support SSL, so it was out of the picture.

A few weeks ago, I got a free-tier Fedora 26 instance on AWS. I had to use it for testing other applications that I was working on. Also, AWS doesn't allow (at least for my instance) HTTPS connections on ports other than 443. I wanted to use SSL, and for more than one application at that, so I decided to use nginx as a reverse proxy.

Here is the list of things that I did:

  1. Installed nginx, tmux and weechat on the AWS instance.
  2. Created a self-signed certificate and pointed nginx to it.
  3. Configured weechat to relay on port 9001, and configured nginx to accept websocket connections on 443 and proxy them to 9001 (see the sketch right after this list).
  4. Used weechat-android to connect to the relay, with websocket (ssl) as the connection type.
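Roughly, the weechat and nginx sides of step 3 look like this; the relay password and the /weechat location path are placeholders of mine:

/relay add weechat 9001
/set relay.network.password "somepassword"

And in the nginx server block listening on 443:

location /weechat {
    proxy_pass http://127.0.0.1:9001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}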

I used tmux so that I could ssh into the AWS instance and join the tmux session; this is where the client-server architecture of tmux helped. I couldn't use Let's Encrypt or ACM for the SSL certificate because I didn't have a domain name for that public IP. Creating a self-signed certificate is surprisingly easy, and this Digital Ocean blog helped.
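For reference, the usual openssl one-liner for a self-signed certificate (the file paths are just a common convention) is:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/nginx-selfsigned.key \
    -out /etc/ssl/certs/nginx-selfsigned.crt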

I also use the urlserver.py plugin for WeeChat, which shortens URLs for you. It runs a small server on the system and provides redirects to the original links. With nginx, I am able to run urlserver.py on one port and point nginx to it.

I am not fully happy with weechat-android though. Most of the time it works, but sometimes it disconnects right after connecting, without saying anything. Other people have found this too here. At least people can drop a message for me now.

PyCon 2016

I come from a place where everyone worships competitive coding and thus C++, so attending my first PyCon was a much-awaited experience for me.

This year's PyCon India happened in Delhi, and I, along with a couple of my friends, reached on 23rd September, the first day. We were a bit late, but it was all right because we didn't miss anything.

Day 1

We had workshops and devsprints on the first day. Farhaan and I were running a devsprint for the Pagure project. It was nice to see a couple of new contributors, whom I know from IRC, asking for help and trying to make a contribution. It went on all day long, but I did manage to roam around a little.

Sayan came to the spot where we were sitting for the devsprint with a camera in hand. I don't know if he was hired as an official photographer for PyCon, but if he wasn't, I am sure he still must have clicked more photographs than all the rest of the people combined.

I am not the sort of person who meets a lot of people. I generally feel awkward meeting new people, and it's not easy for me to get comfortable with anyone. With Farhaan, and especially Sayan, this wasn't the case: they made me comfortable in our first encounter. In fact, the most shocking thing about PyCon was the simplicity of the people. I expected them to be nice, but they were better. 🙂

I then attended the PyCharm workshop. Sayan was sitting in the first row, and Farhaan and I joined him there. This workshop turned out to be really funny, because we kept comparing how we did things in Vim to the corresponding method in PyCharm. PyCharm does make things a little easier, but moving the hand to the mouse/touchpad for every little thing is too much to ask. We came to know that Sayan uses arrow keys instead of h/j/k/l in Vim (and he told me not to judge him for this :-p).

At the end of day one, my friend Saptak (saptaks), Farhaan, Sayan, Subhendu (subho or Ghost-Script) and I decided to visit Ambience Mall, which was nearby, for some food. We ended up at KFC, where we spent some time and got to know each other better. Sayan told us about his academic history and HackerEarth. We also talked about his experience of Flock this year. After this, Sayan told me that I was a completely different person than he had thought I would be :-p .

Day 2

It was a busy day for everyone; there were a lot of interesting talks. I managed to get a PyCon ticket for one of my friends, who had sat in the hotel room all day the day before. At the end of the previous day, I had joined the volunteers group, since most of the dgplug members were in it and I didn't want to miss anything. So I was basically moving around during the talks, doing small things. I couldn't really concentrate on any particular talk except the one titled 'Helix and Salt: Case study in high volume and distributed python applications', given by a LinkedIn guy named Akhil Malik, and even there I didn't understand much. At the end of the talk, Saptak asked me if I wanted to have a cold drink, to which I replied: “I will need a harder drink than that.” I didn't realize Kushal was sitting just a row behind me and could easily have heard this conversation.

At the end of the second day, all the volunteers were supposed to have dinner together, and I was supposed to meet a close friend of mine who lives in Delhi. Thankfully, I managed to do both, but I missed the starters :/ . The four of us from the previous night were joined by Farhaan, and since both Farhaan and I were present, the talk naturally shifted to our GSoC experiences. Sayan told me about writing blog posts more often, and he did have some valid points.


Day 3

It was the last day, and somehow I was feeling a little low because… I didn't want it to end. I wasn't interested in the talks anymore, but I did attend the lightning talks and was roaming around the rest of the time. There was a dgplug “staircase” meeting, which I attended. Kushal was leading it, surrounded by about 30 people, most of whom hadn't started their FOSS journey yet. He talked mainly about how they should start with something small and how they will get better. I personally feel it is a really nice initiative.

I had met Kushal the night before at dinner, and he had said he had something to give to me and Farhaan. Just before the lightning talks, I was sitting in the second row, and he came over and stood by my side. He told me to stay seated and gave me a dgplug sticker and a set of Fedora stickers. This was a nice moment for me. Earlier that day, he had mentioned me and Farhaan in his talk for our contributions to Pagure. In the small period of time that I have seen or been with Kushal, he has managed to earn a lot of respect from me.

At the end of the day, everybody had to leave, so the four people from the first day plus one other friend of mine decided to visit the same mall once again, for food and roaming around. The cab driver we booked turned out to be very patriotic and got offended by my joke that we didn't study at JNU (where the event was held) and that we were “foreigners”. He kept talking about me being a foreigner the whole ride, no matter how many times I said I didn't mean it. Obviously, everyone else enjoyed it.

At the mall, there were no juniors or seniors, just five (single) computer science guys: being mean, pulling each other's legs and talking about stuff I shouldn't mention here. It turned out that Sayan and Subhendu got to know a few of my negative points as well. Sayan also managed to ask Saptak, me and Shubham (my other friend) to start contributing to Fedora Hubs.

Overall, it was a great experience meeting the people I talk to on IRC. I won't be able to mention everyone I met during the event, but it doesn't matter. The important thing is that I enjoyed it a lot and am now able to connect with them better.

GSoC Wrap Up

GSoC 2016 finished last week, and I am writing this post to list the work I did over the last three months for Fedora. My project was to adjust Pagure and write script(s) so that we can have pkgs.fedoraproject.org on a Pagure instance. We have it in staging currently: http://pkgs.stg.fedoraproject.org/pagure/

https://pagure.io/pagure/pull-request/1007

https://pagure.io/pagure/pull-request/1035

https://pagure.io/pagure/pull-request/1036

https://pagure.io/pagure/pull-request/1045

https://pagure.io/pagure/pull-request/1050

https://pagure.io/pagure/pull-request/1058

https://pagure.io/pagure/pull-request/1071

https://pagure.io/pagure/pull-request/1094

https://pagure.io/pagure/pull-request/1095

https://pagure.io/pagure/pull-request/1097

https://pagure.io/pagure/pull-request/1114

https://pagure.io/pagure/pull-request/1120

https://pagure.io/pagure/pull-request/1149

https://pagure.io/pagure/pull-request/1150

https://pagure.io/pagure/pull-request/1151

https://pagure.io/pagure/pull-request/1157

https://pagure.io/pagure/pull-request/1177

https://pagure.io/pagure/pull-request/1210

https://pagure.io/pagure/pull-request/1211

https://pagure.io/pagure/pull-request/1219

https://pagure.io/pagure/pull-request/1218

https://pagure.io/pagure/pull-request/1158

Besides these, there is a script for getting user ACLs from pkgdb:

https://infrastructure.fedoraproject.org/cgit/ansible.git/commit/?id=de67bcbea22bb4539e32d195a10448948bc6d765

For me, the experience has been perfect. I like the work environment at #fedora-apps. My mentor, Pierre-Yves Chibon, is nice to everyone, and I hope I haven't annoyed or disappointed him in the last three months. It's hard to find a person who can guide so patiently. I am saying this not because I see one of my friends working for FOSS Asia, but because he is genuinely good.

Even if GSoC hadn't been there, I would have spent my last three months the same way (just without my and my father's new mobile phones). I contribute here because I like the work environment they have created and I get to learn new things while working on real-life projects.

So, thanks to Google for the money and to Fedora for such an awesome experience.