Using C functions from Python

ctypes is a Python library that lets you use C data types and call functions from C shared libraries in a Python script. It is part of the standard library. To call C functions with ctypes, you first need to compile the C code into a shared library.

add.c


#include <stdio.h>

int add_two_numbers(int num1, int num2) {
    return num1 + num2;
}

In this case, I am using a very simple C function which adds two given numbers.

Now compile this file using:
gcc -fPIC -shared -o libadd2nums.so add.c

This will create a shared library named libadd2nums.so which, for now, contains only one function.

add.py


# coding=utf-8

import ctypes

_add = ctypes.CDLL('/home/vivek/ctypestuts/libadd2nums.so')
_add.add_two_numbers.argtypes = (ctypes.c_int, ctypes.c_int)

def add_two_numbers(num1, num2):
   ''' Adds two numbers '''

   return _add.add_two_numbers(ctypes.c_int(num1), ctypes.c_int(num2))

I am using Fedora 26. If you are using Windows, you will need to use ctypes.WinDLL.
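
If you went that route, a rough sketch might look like this, assuming the C file was compiled into a DLL; the DLL name and path below are just placeholders:

# hypothetical Windows equivalent of the CDLL call above;
# the .dll name and path are placeholders
_add = ctypes.WinDLL(r'C:\ctypestuts\add2nums.dll')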

Here, _add is the loaded shared library, and we can access the C function on it using dot (.) notation.
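
By default, ctypes assumes a function returns a C int, which happens to match add_two_numbers here, but you can also set the return type explicitly:

_add.add_two_numbers.restype = ctypes.c_int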

main.py


# coding=utf-8

import add

num1 = int(raw_input("Enter num1: "))
num2 = int(raw_input("Enter num2: "))
print add.add_two_numbers(num1, num2)
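
Running main.py and entering, say, 3 and 4 should print 7.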

That’s it.

Realtime Events using Tornado and RabbitMQ

At my day job, I needed a way to send real-time events to clients that would, in turn, trigger some action on their side. The clients could ask the server for some computation which may take time.

To tackle this situation, I ended up running Tornado as a websocket server, separate from our web app server (and both behind nginx). There are a couple of other services which the client may ask for indirectly. Since those computations won't follow the normal request-response cycle, the results will have to be pushed to the clients. The communication between the client and the server is two-way, so websockets seemed fitting. For routing messages internally, I decided to use RabbitMQ, and Celery for the actual execution of tasks.

The problem with this is that the RabbitMQ consumer and Tornado both run their own I/O loop. That confused me a little, because I had heard this combo worked for Zulip when I was randomly reading about their architecture. So, I duckduckgoed (:D) and found this article: https://reminiscential.wordpress.com/2012/04/07/realtime-notification-delivery-using-rabbitmq-tornado-and-websocket/ . It turns out the author had a similar doubt and found a solution.

The pika library comes with a Tornado adapter named TornadoConnection, which makes it possible to run the RabbitMQ consumer inside the Tornado IOLoop itself. The code for TornadoConnection is fairly simple. As the code given in that blog post wasn't fully functional, I had to consult the source code of pika a couple of times.

Each websocket connection in Tornado gets a unique WebSocketHandler object, and these handlers are not directly accessible from the Tornado application object. But the reverse is true: each websocket handler has access to the application object. So, using TornadoConnection, we tie one pika consumer to the Tornado application object.

server.py


import tornado.ioloop

from consumer import PikaConsumer

# `application` is the tornado.web.Application holding the websocket
# handlers; its route definitions are left out here.

def main():
    ''' The main method to run the tornado application '''

    io_loop = tornado.ioloop.IOLoop.instance()

    pc = PikaConsumer(io_loop)

    # attach the consumer so handlers can reach it via self.application.pc
    application.pc = pc
    application.pc.connect()
    application.listen(8080)
    io_loop.start()

consumer.py


import pika
from pika.adapters.tornado_connection import TornadoConnection


class PikaConsumer(object):
    ''' The pika client that will be part of the tornado IOLoop '''

    def __init__(self, io_loop):
        print 'PikaClient: __init__'
        self.io_loop = io_loop
        self.connected = False
        self.connecting = False
        self.connection = None
        self.channel = None
        self.event_listeners = {}

    def connect(self):
        ''' Connect to the broker '''
        if self.connecting:
            print 'PikaClient: Already connecting to RabbitMQ'
            return

        print 'PikaClient: Connecting to RabbitMQ'
        self.connecting = True

        cred = pika.PlainCredentials('someuser', 'somepass')
        param = pika.ConnectionParameters(
            host='localhost',
            port=5672,
            virtual_host='somevhost',
            credentials=cred)
        self.connection = TornadoConnection(
                    param,
                    on_open_callback=self.on_connected)
        self.connection.add_on_close_callback(self.on_closed)

    def on_connected(self, connection):
        print 'PikaClient: connected to RabbitMQ'
        self.connected = True
        self.connection = connection
        self.connection.channel(self.on_channel_open)

    def on_channel_open(self, channel):
        print 'PikaClient: Channel open, Declaring exchange'
        self.channel = channel
        # declare exchanges, which in turn, declare
        # queues, and bind exchange to queues
        self.channel.exchange_declare(
              exchange='someexchange',
              type='topic')
        self.channel.queue_declare(self.on_queue_declare, exclusive=True)

    def on_queue_declare(self, result):
        queue_name = result.method.queue
        self.channel.queue_bind(
            self.on_queue_bind,
            exchange='someexchange',
            queue=queue_name,
            routing_key='commands.*')
        self.channel.basic_consume(self.on_message)

    def on_queue_bind(self, is_ok):
        print 'PikaClient: Exchanges and queue created/joined'

    def on_closed(self, connection):
        print 'PikaClient: rabbit connection closed'
        self.io_loop.stop()

    def on_message(self, channel, method, header, body):
        print 'PikaClient: message received: %s' % body
        self.notify_listeners(body)
        # important, since rmq needs to know that this msg is received by the
        # consumer. Otherwise, it will be overwhelmed
        channel.basic_ack(delivery_tag=method.delivery_tag)

    def notify_listeners(self, event_obj):
        # do whatever you wish
        pass

    def add_event_listener(self, listener):
        # listener.id is the box id now
        self.event_listeners[listener.id] = {
                'id': listener.id, 'obj': listener}
        print 'PikaClient: listener %s added' % repr(listener)

    def remove_event_listener(self, listener):
        try:
            del self.event_listeners[listener.id]
            print 'PikaClient: listener %s removed' % repr(listener)
        except KeyError:
            pass

    def event_listener(self, some_id):
        ''' Gives the socket object with the given some_id '''

        tmp_obj = self.event_listeners.get(some_id)
        if tmp_obj is not None:
            return tmp_obj['obj']
        return None

That’s it. In your WebSocketHandler objects, you can access the consumer via: self.application.pc
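
For reference, here is a minimal sketch of what such a handler could look like; the class name and the idea of reading the listener id from a query argument are my assumptions, not part of the actual application:

import tornado.websocket


class EventsHandler(tornado.websocket.WebSocketHandler):
    ''' Hypothetical handler that registers itself with the pika consumer '''

    def open(self):
        # assumed: the client passes an id as a query argument,
        # which becomes listener.id for the consumer
        self.id = self.get_argument('id')
        self.application.pc.add_event_listener(self)

    def on_close(self):
        self.application.pc.remove_event_listener(self)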

Although this is working fine for me right now, I am not fully satisfied with it. At present, each connection is listening on a single queue, because in RabbitMQ one consumer cannot listen to multiple queues.

Running Firefox as a kiosk application on RPi3

At my day job, I had to run Firefox as a kiosk application on an RPi3. In this blog post, I will note down the steps I took so that I or my team members can refer to them when needed.

I have not used any display manager or desktop environment, but I had to use matchbox-window-manager to make Firefox run in full screen.

  1. sudo apt-get install vim (this is just for me, I can’t help it)
  2. sudo apt-get install xorg xutils matchbox-window-manager 
  3. sudo apt-get install iceweasel (this is firefox :p)
  4. sudo raspi-config
    1. Go to boot options and setup auto login for user pi
    2. Change the keyboard layout if you wish to
  5. As a sudo user, do the following steps:
    1. cp -r /home/pi  /opt/
    2. cd /opt/pi
    3. chmod -R a+r .
    4. touch .xsessionrc
    5. chmod a+x .xsessionrc
  6. Open .xsessionrc as a sudo user and put the following lines there:
    1. xset s off # no screen saver
      xset -dpms  # disable some power consumption thingy
      xset s noblank # don’t blank the rpi screen
      matchbox-window-manager &
      while true; do
        firefox --url http://127.0.0.1
      done
  7. Copy .xsessionrc file to /home/pi/ 
    1. cp .xsessionrc /home/pi
  8. Configure .bash_profile to start X server when user logs in:
    1. if [ -z "$DISPLAY" ] && [ -n "$XDG_VTNR" ] && [ "$XDG_VTNR" -eq 1 ]; then
      exec startx
      fi
  9. Install an extension in Firefox to apply kiosk mode. The first extension that comes up when you search for "kiosk" in add-ons works fine.

The blog post that helped most in coming up with this setup was: http://www.alandmoore.com/blog/2011/11/05/creating-a-kiosk-with-linux-and-x11-2011-edition/

Star a project on pagure

This feature was marked as “wishful” and was supposed to be low priority, but I implemented it out of frustration. It should be there in the next feature release.

The star feature already exists on GitHub and GitLab, and I use it on GitHub a lot: if I like an open source project, I star it. You also get a list of all the projects you have starred, which is helpful if you came across a project some time ago, starred it, and now want to know more about it (given that you don’t exactly remember the name, otherwise you could just search). Also, if the project author/maintainer is anything like me, they would love to see the star count rising.

For some time now, I had been asking people who use pagure often (and hopefully like pagure) to star it on GitHub. Pagure’s star count was 96 at the time I started my work on the star project feature. Last year, at this time, it was in the late 60s.

If you star a project on GitHub, your followers come to know that you have liked a project; they can see that on their GitHub homepage. If they check out the project and like it, you have already helped pagure reach more people with almost zero effort. I can’t see one good reason why you wouldn’t star a project you like.

Pagure doesn’t have a follow feature, and I am not sure it will have one in the near future. This means the star project feature won’t have its full effect. But you can star a project, there is a star count, there is a list of people who have starred a project, and there is a list of a user’s starred projects.

Here is how you can use this feature:

  1. Log in to pagure and go to a project’s home page.
  2. There is a star button just beside the fork button, with a star count next to it.
  3. Star it if you like the project.

[Screenshot: the star button and its count on a project page]

Here is where you will find your starred projects:

  1. Log in to pagure.
  2. The drop-down at the top right corner will have a “My Stars” entry.

[Screenshot: the “My Stars” entry in the user drop-down]

Here is where you can see everyone who has starred a particular project:

  1. Right beside the star button on the repo page, there is a star count which links to a page listing all the users who have starred the project.

[Screenshot: the list of users who have starred a project]