Microservices II

Although I'm working on an 'actual' application at the moment, the main thrust of the effort is aimed at developing a web development framework to make successive applications quicker and more consistent to build.

I attended a DevOps meetup this week hosted by AlertLogic, who it seems have developed their own framework from scratch to achieve goals that appear similar to my own. It's an interesting contrast: they have a combination of various tools like RabbitMQ, tied together with, of all things, Erlang.

Anyway, it's made me stop for a second and document how my microservices framework is looking, and this is where I am at the moment. Everything is based around Crossbar.io and Autobahn and runs over WebSockets.


Crossbar runs on its own, based on a JSON configuration file which defines all manner of things, from the port, address and SSL certificates down to authentication mechanisms and realms. I'll leave this for now as it's fairly heavily documented on the Crossbar website.
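
For reference, a pared-down config.json might look something like this - illustrative only, not my production file; the realm name and "service" role are just examples, and the authentication blocks are omitted entirely:

```json
{
  "version": 2,
  "workers": [
    {
      "type": "router",
      "realms": [
        {
          "name": "ionman",
          "roles": [
            {
              "name": "service",
              "permissions": [
                {"uri": "app.", "match": "prefix",
                 "allow": {"call": true, "register": true,
                           "publish": true, "subscribe": true}}
              ]
            }
          ]
        }
      ],
      "transports": [
        {
          "type": "websocket",
          "endpoint": {"type": "tcp", "port": 8443}
        }
      ]
    }
  ]
}
```

The real file would layer SSL and the authentication mechanisms on top of this skeleton.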

I then have microservices that connect to Crossbar. Each Microservice essentially represents a Python process / execution thread, and you can run as many of these as you wish. A Microservice is defined as follows:

"""generic Microservices shell."""
import ionman

class MicroService(ionman.microservice):
    """My Microservice."""
    Modules = ['rpc', 'get', 'msg', 'demo']

application = MicroService()

It can be started with "./filename.py", or you can add "background" if you want to run it via twistd for service-level daemon execution. For production management of twistd processes I use svc (from daemontools) - this seems to work fairly well so far, although I'm on the lookout for something better.
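
For what it's worth, under daemontools each service gets its own directory containing an executable run script that supervise keeps alive. Something along these lines - the paths here are purely illustrative, not from the real deployment:

```shell
#!/bin/sh
# daemontools 'run' script -- supervise restarts this if it dies.
# Start/stop with: svc -u /etc/service/demo  /  svc -d /etc/service/demo
cd /opt/ionman/demo || exit 1
exec twistd --nodaemon --python=filename.py
```

The --nodaemon flag matters: daemontools wants the process in the foreground so it can supervise it directly.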

In this instance the source for my Microservice is split into a number of source files (rpc.py / get.py / msg.py / demo.py), and the actual worker code looks like this:

"""Demonstration MicroService Class (demo.py)."""
from IonmanClass import SubscriptionsClass, RegistrationsClass

class Subscriptions(SubscriptionsClass):
    """Local Topic Subscriptions."""
    def __init__(self, conf=None, extra=None):
        """Call the superClass first."""
        SubscriptionsClass.__init__(self, conf, extra)

class Registrations(RegistrationsClass):
    """Local RPC Call Methods."""
    def __init__(self, conf=None, extra=None):
        """Call the superClass first."""
        RegistrationsClass.__init__(self, conf, extra)

    def app_demo_method1(self, argv, details):
        """Non descript demonstration method."""
        user = self.getSession(details)
        if not user:
            return self.fail('not authorized')
        msg = 'User {} called method1'.format(user.name)
        return self.ok({'result': msg})

Essentially this Microservice presents a single RPC method, app.demo.method1, that can be called from any connected client (or other microservice), subject to whatever authentication and authorization is in play for that method. Any parameters passed with the RPC call are available in argv, and I use JSON-based parameters, so you can pass pretty much any data structure back and forth.
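
The ok() and fail() helpers come from the framework and aren't shown here; going by the status and result fields the browser code later in the post checks for, I'd guess their shape is roughly this (the envelope keys are my assumption, not ionman's actual API):

```python
def ok(payload):
    """Sketch of a success envelope: merge the payload under status 'OK'."""
    envelope = {'status': 'OK'}
    envelope.update(payload)
    return envelope

def fail(message):
    """Sketch of a failure envelope carrying an error message."""
    return {'status': 'FAIL', 'error': message}

# So a handler returning self.ok({'result': msg}) would serialise to
# something like {'status': 'OK', 'result': 'User fred called method1'}
```

Keeping every response inside one JSON envelope is what lets the client treat success and failure uniformly with a single status check.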

There are a bunch of scheduling routines available too, so you can run the same Microservice a number of times (from the same or different hosts) and Crossbar will load-balance requests out to the different Microservice instances for the same Topic. i.e. if you run the Microservice three times, set up the Round Robin Scheduler, then call app.demo.method1 three times, the method will execute once on each of the three instances.
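
Under the hood this is Crossbar's shared-registration support (Autobahn exposes it as RegisterOptions(invoke='roundrobin')). The dispatch behaviour itself is simple enough to model in a few lines of plain Python - a toy model, not the real router:

```python
from itertools import cycle

class RoundRobinRouter:
    """Toy model of Crossbar's round-robin policy for shared registrations.

    Each handler registered for a topic is tried in turn, so three calls
    to the same topic land once on each of three instances.
    """

    def __init__(self):
        self._rings = {}

    def register(self, topic, handlers):
        # cycle() yields handlers in order, wrapping around forever
        self._rings[topic] = cycle(handlers)

    def call(self, topic, *args):
        handler = next(self._rings[topic])
        return handler(*args)

router = RoundRobinRouter()
router.register('app.demo.method1',
                [lambda p: 'instance-1',
                 lambda p: 'instance-2',
                 lambda p: 'instance-3'])
results = [router.call('app.demo.method1', {}) for _ in range(3)]
# results -> ['instance-1', 'instance-2', 'instance-3']
```

A fourth call would wrap back around to instance-1, which is exactly the behaviour you want for cheap horizontal scaling of stateless workers.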

Starting the first (.py) file takes care of connecting, registration, authorization and so on, and is driven by a config file. When you kill the process, it automatically deregisters all of its methods, subscriptions, registrations and so forth.

A simple config.ini file looks like this:

debug = False
realm = ionman
user = myserviceusername
password = myservicepassword
port = 8443
host = localhost
title = Demo Microservice
logfile = mylogfile
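
Note the file is flat key = value pairs with no [section] header, so Python's stock configparser won't swallow it as-is. I'd imagine the framework does something like this minimal loader (load_config is my name for it, not ionman's):

```python
def load_config(text):
    """Parse flat 'key = value' lines into a dict (values stay strings)."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        key, sep, value = line.partition('=')
        if sep:
            conf[key.strip()] = value.strip()
    return conf

conf = load_config("debug = False\nport = 8443\ntitle = Demo Microservice")
# conf['port'] == '8443' -- type coercion is left to the caller
```

Everything comes out as a string, so booleans and port numbers need coercing wherever they're consumed.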

I'm still making things cleaner and there is more I can do with regards to launch and process management, but it's certainly coming along, and the technology it's sitting on is very mature, so even in development it's pretty bulletproof (i.e. it's never crashed on me!).

All provisioning is via Ansible from a local source push. This needs a rewrite to package a .deb based on a GitHub release, but it's going to have to wait a little while.

More to come - currently I'm recoding all the front-end/GUI code around JSClass so that everything is re-entrant and the Navigator can eat its own tail .. it seems you never know when you'll want to interrupt the current input flow and re-open an already-live form, but in a different context, without losing your place :)

Ok, just to tack on a little context: if I wanted to access the method from some JavaScript code running in the browser, it would look like this:

function check_remote() {
  function success(data) {
    if (data.status != 'OK') return error(data);
    console.log('Success: ', data.result);
  }
  function error(data) {
    console.log('Error: ', data);
  }
  var params = {'dummy': 'none required'};
  var topic = 'app.demo.method1';
  // 'session' is an already-connected Autobahn|JS WAMP session
  session.call(topic, [params]).then(success, error);
}

And because it's WebSockets, this is a very small, low-latency transaction over an already-connected socket, not a 1k gob of headers plus a response .. I know I say between 5x and 20x faster, but in real life it's generally 20x ;)