I've been glancing sideways at GraphQL for a number of years after hearing it described and not fully understanding 'exactly' what it was or how to use it. All the while thinking I must take a look sometime. Now I have and I rather like what I see ...

If I had to describe it to myself, before having used it, I might say;

Imagine you're using an SQL database, and that you had a native SQL client running directly in your browser, so you could effectively do away with the "API" layer you're currently required to traverse in order to get data from your back-end.

Ok, so an over-simplification, but to an extent it gives you the idea. In terms of transports, endpoints etc., you effectively implement one endpoint that resolves Queries (the QL in "GraphQL") and your old REST API (or whatever you were using) is toast. So, as a very simplistic representation of REST vs GraphQL;

Implementations on Test

Just to clarify before going any further, there are a number of different ways to implement GraphQL depending on your choice of infrastructure, language etc. There might seem to be a number of recommended databases and transports (none of which I want to use) however with a little bit of additional work, you can pretty much use whatever you want. In context, here I'm using Python on the back-end, Autobahn/WAMP as the transport, and PyNNDB2 as the database. In particular I'm using the graphene GraphQL library for Python. I don't know if this is the best library, but it certainly seems fairly accomplished, competent and well maintained.

I'm assuming you have some ideas with regards to what an API looks like, so in comparison my "one" RPC endpoint is looking like this;

async def graphql_query(self, graphQLParams, details=None):
    # execute the incoming query against the graphene schema and hand
    # the result back over WAMP as a plain dict
    result = schema.execute(graphQLParams.get('query'))
    return {'data': result.data}

Before I go any further, you can find all my example code here;

So if you look at "", this is all you need from a Python / Autobahn perspective to implement the serving of GraphQL over an Autobahn / websocket connection. (I'm making the assumption that you have a running Crossbar server; if you look at my "config.json" file, that's all you need to get a local Crossbar instance running)

To make the connection on the client side I'm using the graphical UI "GraphiQL", which seems to be the reference UI implementation for GraphQL and is written in Javascript / React. Now I'm not a React guy, however it only took around half an hour to get this working against my transport; everything you need is in "App.tsx". If you grab a copy of GraphiQL and take a look in the examples/graphiql-create-react-app/src folder, just replace App.tsx with my version, run "npm install autobahn-browser", build with "yarn && yarn start", and you should have a GraphQL User Interface running on http://localhost:3000.

So, once you have all this running, you're left with the Query Language you need to use in the client to request data and the Schema / Resolvers on the server side used to provide the data. The intent here I think is to automate the extraction (and insertion) of data, while at the same time keeping the Schema definition partitioned from the Resolvers, so you can literally switch database implementation (or indeed transport) just by re-implementing a discrete layer without making any adjustments to the Schema.

Note on Schemas

The Schema implementation is typed, but unlike some other systems I think this would be described as a code-first approach rather than an api-first approach. There are a couple of key differences; in particular, the Schema is implemented in code rather than in some sort of abstract DDL, which means you don't need to deal with stubs, code generators or other such inconveniences. As the Schema is typed, it also means that not only can the server do implicit query validation without the need for each API call to have its parameters explicitly checked, but we're also able to do a reasonable amount of introspection, something the User Interface takes full advantage of.

Our test Schema

I've taken the test / example data as used in GraphiQL and based my examples on that, so if you've played with the reference examples, this should look pretty familiar. Just to clarify, what we're working with looks something like this;

Note the server-side separation between the transport receiver, the database Schema, and the actual database Interface. This feels like a very clean approach which is easy to read, implement and debug.

At a very basic level the Schema is implemented via Python classes which support some basic Scalar datatypes as provided by GraphQL, with the ability to utilise some custom data types as provided by Graphene, or indeed implement your own custom types. So far I'm just going with basic types as per the examples.

Conceptually we have two different actors, Humans and Droids, which have a number of attributes in common and one attribute that's unique to each. The basic data definition looks like this;

import graphene

# Episode is the enum from the standard GraphQL example data
class Episode(graphene.Enum):
    NEWHOPE = 4
    EMPIRE = 5
    JEDI = 6

class Character(graphene.Interface):
    id = graphene.ID()
    name = graphene.String()
    friends = graphene.List(lambda: Character)
    appears_in = graphene.List(Episode)

class Human(graphene.ObjectType, interfaces=[Character]):
    home_planet = graphene.String()

class Droid(graphene.ObjectType, interfaces=[Character]):
    primary_function = graphene.String()

When we define the queries we're going to service, we can create a query called droid_by_name, which implicitly requires a resolver function called resolve_droid_by_name. Assuming we've implemented the resolver for our database and imported the test data, we should be able to do something like this;

So looking at the query, "Test" is just a reference name for this specific query (handy if we want to use the query again), droidByName is the query name as defined on the server, name is the parameter used to filter the search, and id, name, primaryFunction and appearsIn are the fields we're asking the server to return.

Just to complete the circle, the query definition (which is a part of the schema) looks like this;

droid_by_name = graphene.Field(graphene.List(Droid), name=graphene.String())

def resolve_droid_by_name(root, info, name):
    return resolver.actors_by_name(name)

And to follow the white rabbit, resolver.actors_by_name simply takes the name parameter, uses it to look up the appropriate data in the database, and returns a list of Droid objects with a matching name. In our implementation we're actually doing a partial match, so it returns any droids with a name beginning with name.

In this instance, here's our resolver, for the full code take a look at, but hopefully this demonstrates a fairly good level of separation between end-point, query resolver and actual data.

    def actors_by_name(self, name):
        results = []
        for result in self._t_actors.filter('by_name', lower=Doc({'name': name})):
            if not result.doc._name.startswith(name):
                break  # the index is ordered by name, so stop at the first non-match
            results.append(self.materialise(result.doc))
        return results

One bit I deliberately left out above in the schema relates to friends. We're actually storing the primary key of each friend, so when we come to deliver query results involving friends, we need to resolve a list of primary keys to a list of actual friend objects. We do this by adding a local method to the Character class;

class Character(graphene.Interface):
    id = graphene.ID()
    name = graphene.String()
    friends = graphene.List(lambda: Character)
    appears_in = graphene.List(Episode)

    def resolve_friends(self, info):
        """Resolve a list of friend id's to a list of actor 'objects'"""
        return [resolver.actor_by_id(f) for f in self.friends]

And then we need a resolver that will accept a primary key and yield the associated object (materialise just converts a raw JSON object into a Python object of the required type);

    def actor_by_id(self, id):
        doc = self._t_actors.get(id)
        return self.materialise(doc)

So if we go back to the previous example and also ask for the "friends" field, although in the database the friends field is a list of primary keys, what we actually see is;

Ok, I could go on, a lot ... but in addition to the basic examples I implemented some add / remove record functionality, just to see how hard it is. It's pretty much the same: relatively straightforward / modular at both ends, with good separation between the layers. In terms of the query language itself, it's not SQL, but for what I would probably want from the perspective of browser access, the combination of being able to design both the query interface and the actual query language seems to provide a pretty good balance of complexity vs simplicity, and I'm not seeing anything that jumps out as "that's going to be hard".

So this is what adding and deleting data looks like, which also shows a couple of additional features of the UI. Firstly, you can add a number of queries on the left, and when you try to run, it asks you which one you want (like add or delete). Secondly, on the right you can see introspection in action. What you don't see in general is that when entering a query, it offers auto-complete selections based on the schema it downloads from the server when it first connects. It's a more interesting tool than first meets the eye ...

Anyway, don't be offended that I've not used your transport or your database in these examples, the whole point is that you can pretty much use whatever you want for each with relative ease, and because the code is layered, switching transport or database is a relatively low-cost exercise compared to converting what one might class as a monolithic API.

Pondering now whether I can use GraphQL as a base for a new Graphical Database Explorer by realising a Python Schema from a base data-set ...


Just a note; Subscriptions look like another interesting feature and I'm still thinking about the practicalities, but from a pure implementation perspective, they don't look to be any more complex than query and mutate.

GraphQL feels like it has a lot of potential, I might post again when I can pin down exactly why :)