This is a concept I first came across at the beginning of last year, and it has been gaining more and more traction in my thinking the more I work with Crossbar.io.

There are lots of ways to define what Microservices are, what they are for and what they are good at, but the easiest way I've found (to date) to describe the architecture is as a mechanism for partitioning what might otherwise be a huge monolithic application.

If you take the traditional web-based application, you essentially end up with some HTML and JavaScript running in your browser, talking to a FastCGI process (or similar) on the server. This is all well and good, but the structure doesn't lend itself to organisational or operational scaling, nor to more modern requirements such as unit testing and continuous integration.

So, just to try to visualise Microservices, this is what I have:
The browser talks to some sort of aggregator that splits requests out contextually between applications (or distinctly partitioned functions) which actually service requests. Typically the applications will share some sort of common database, whether a single unit or a federated / replicated cluster.
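The aggregator described above can be sketched very simply: something that maps each incoming request onto the application that owns it. This is a hypothetical illustration, with made-up service names and URL prefixes, not the routing mechanism of any real deployment:

```python
# Hypothetical sketch of the "aggregator" layer: each URL prefix is
# owned by one application, and the aggregator splits requests out
# contextually between them. Names and routes are illustrative.

SERVICE_ROUTES = {
    "/users": "user-service",
    "/orders": "order-service",
    "/billing": "billing-service",
}

def dispatch(path: str) -> str:
    """Pick the back-end application that should service a request path."""
    for prefix, service in SERVICE_ROUTES.items():
        if path.startswith(prefix):
            return service
    # Anything unclaimed falls through to a catch-all application.
    return "default-service"
```

In a real system the aggregator would also forward the request and relay the response, but the essential idea is just this contextual split.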

Why is this good?

First off, because each application is distinct, even though it provides part of a larger solution, development can be shared out easily; so long as there is a shared API specification, different groups can work on their own applications almost autonomously, without relying on other groups. Imagine working in an Agile team where there are no external blockers (!)

Second, if you are working with an autonomous application where all external dependencies are defined by a well-known API, writing unit-test code becomes relatively easy; in many cases you can almost fall back on the API itself and just check that all API calls defined within the application produce the correct results.
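To make that concrete, here's a minimal sketch of testing against the API surface rather than the internals. The `get_user` call and its behaviour are invented for illustration; the point is that the tests only touch what the API specification promises:

```python
import unittest

# Hypothetical stand-in for one API call exposed by a microservice.
# Internally it could do anything; the tests below only care that the
# published contract holds: known ids return a record, unknown ids fail.
def get_user(user_id):
    users = {1: {"id": 1, "name": "alice"}}
    if user_id not in users:
        raise KeyError(user_id)
    return users[user_id]

class TestUserAPI(unittest.TestCase):
    def test_known_user_returns_record(self):
        self.assertEqual(get_user(1)["name"], "alice")

    def test_unknown_user_fails(self):
        with self.assertRaises(KeyError):
            get_user(99)
```

Because the application is autonomous, nothing outside the API needs mocking.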

(Ok, so there are some unit-testing .. people .. out there who are going to say that I don't understand what unit tests are or what they're for .. however, in real life, most of the unit tests I see just check that distinct functions do what the programmer expects them to do, and don't fail in the way the programmer expects them to fail .. the whole point of testing, for me, is to check the stuff the programmer isn't expecting .. after all, if the code doesn't do what the programmer expects - he's not done his job! IMHO unit tests should be written by QA, not by developers.)

Third, development and upgrades. So long as you have a stand-alone application that passes unit tests based on the API calls it services, upgrading one part of the system in isolation should be somewhat easier than trying to replace everything in one go; it certainly reduces the number of places to look should something be broken following an upgrade. As Microservices tend to be socket based, an upgrade should be as easy as stopping the application, replacing it, then restarting it, all within a matter of milliseconds. All you need is a little delay/retry loop in the multiplexor for when an application is momentarily unavailable, and you can avoid downtime altogether.
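That delay/retry loop is about a dozen lines. This is a generic sketch (the attempt count and delay are arbitrary, and `ConnectionError` stands in for whatever your transport raises while the back-end is restarting):

```python
import time

def call_with_retry(call, attempts=5, delay=0.2):
    """Retry a back-end call while its application restarts.

    `call` is any zero-argument callable that raises ConnectionError
    while the back-end is down. Parameter values are illustrative.
    """
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # back-end never came back; give up
            time.sleep(delay)
```

With the restart taking milliseconds, the first or second retry normally succeeds and the caller never notices the upgrade.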


In this context, Crossbar.io is the multiplexor, and it provides a number of features above and beyond the requirements implied above.

  • The ability to start an application instance on a number of different nodes simultaneously, then have the multiplexor share requests out between available back-ends. Not only does this solve the restart issue, but it also addresses load-balancing and scalability.
  • Multiple high-level mechanisms for providing services to the front end, both RPC calls and Publish/Subscribe are supported.
  • WebSockets are leveraged for high-speed, low-latency, persistent connections, effectively making proper client-server applications viable.
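The first point, sharing requests between several registered back-ends, boils down to the router keeping a pool of live instances and cycling through them. This is a deliberately minimal round-robin sketch of that idea, not Crossbar's internal implementation:

```python
import itertools

# Minimal sketch of sharing calls between back-end instances.
# The instance names are made up; a real router would also drop
# instances from the pool when they disconnect.
class RoundRobinPool:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the next back-end instance in rotation."""
        return next(self._cycle)
```

Starting the same application on three nodes simply puts three entries in the pool, which is what gives you load-balancing and seamless restarts for free.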

One of the most anticipated features of Crossbar is the ability to interlink / chain Crossbar instances, which should make the potential for scaling Crossbar-based applications truly global, while at the same time providing built-in resilience / high-availability.

Crossbar is Open Source .. :)