Rather than this being just a theoretical discussion about API Gateways, I want to bring in some real world material. I thought the best way to do this would be to walk through a couple of implementations, but before I do that I need to lay out some background on the thinking behind how we developed our services approach. This will help you understand the examples, and it also covers topics and ideas I hope you find interesting.
When we started our journey we did not set out to build an SOA. At the time "API" wasn't really that cool, no one had really mentioned microservices, and Microsoft Azure wasn't even live yet. It was around 2009. I was working with a customer who had been through the first part of a fairly big IT transformation project and had achieved some really good things, but had also created a bit of a hidden beast in a few places. In the integration area we had done some pretty good work with BizTalk around business process automation, but a new phase of work was about to start with this company, and there was to be a drive toward building some new web applications which would use services across the organisation. At the same time the organisation was also undergoing a shift from waterfall to agile.
One of the challenges in this organisation was that, while there was some pretty cool integration in places, there wasn't really a good approach to using web services, and that is something we wanted to address as part of the next wave of work.
As I mentioned earlier, some of the things which are buzz topics these days weren't talked about at the time, or weren't viewed in the way they are today, but they are based on common programming principles such as inheritance, abstraction, polymorphism and encapsulation which have always been around. We also put a lot of thinking into dependency management, because we knew from experience that this is one of the most challenging areas of any integration project. With these things in mind we decided that an architecture with components based on the following values would work well for us:
- Small & discrete
- Easy to develop
- Easy to deploy
- Worked well with each other
- Functionality can be moved from one component to another without a huge amount of effort
- Developers should be able to move from one service component to another and find the same patterns and practices, with the business context being the only difference
We felt that if we could keep these values in mind, our component vision would give us an architecture to support the services landscape this organisation needed now, while also being beneficial in the long term.
With this vision in mind we evolved some principles which our services integration was based on:
Service Container
We decided that our services would be developed as .NET services where possible, and we created a service container template. This was a standard Visual Studio solution and code structure, so the code base for any component built from this template would look the same. These components were deployed in IIS, which made it easy to create a default scripted automated deployment covering all components. Any scrum team developing a service would be developing it within a service container identical to every other. The benefit was that developers could jump between services and know exactly what to expect and where to put things, which significantly reduced the cost of ownership of these services.
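The handler pattern at the heart of a container might be sketched like this. This is a minimal Python illustration of the idea only; the original containers were .NET/WCF solutions, and all names here (`MessageHandler`, `GetCustomerHandler`) are hypothetical:

```python
# Illustrative sketch (not the original .NET code): every service in a
# container implements the same handler contract, so a developer moving
# between services finds an identical structure everywhere.
from abc import ABC, abstractmethod


class MessageHandler(ABC):
    """Base contract every business handler in a container follows."""

    @abstractmethod
    def handles(self) -> str:
        """Return the message type this handler accepts."""

    @abstractmethod
    def handle(self, message: dict) -> dict:
        """Process the message and return a response message."""


class GetCustomerHandler(MessageHandler):
    def handles(self) -> str:
        return "GetCustomerRequest"

    def handle(self, message: dict) -> dict:
        # Business logic lives here; plumbing (logging, security,
        # serialization) is provided by the container framework.
        return {"type": "GetCustomerResponse", "customerId": message["customerId"]}
```

The point of the uniform contract is that only the business logic inside `handle` differs between services; everything else is template.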
Any message over any channel
This was a concept which I felt was very important at the time. In the past you might have a pipeline of services calling each other, where each component gets a message, simply transforms it to another format, and passes it on. This repeated set of steps means more code to maintain, regardless of whether you implemented it in an ESB or in custom code. With our vision of reducing the amount of code we needed to create, we wanted to be able to flow a message over any channel (HTTP, AMQP, BizTalk, RabbitMQ or anything else) without constantly manipulating the message in small ways at each step, unless we had to for functional reasons rather than simply because the technology approach required it.
I guess this is similar to the microservices idea of "smart endpoints and dumb pipes".
Separate Message Contract from Service Contract
This principle follows on from "Any message over any channel". One of the common things that happened in the previous project phases was that developers had built services in the standard WCF way, where you define a data contract and a service contract together in a code-first fashion. While this works OK, it was a contributing factor to the excessive, unnecessary code being written to map service calls. What we wanted to do was begin creating a central place where we defined messages separately from the components that would use them. We called this the schema repository. We didn't really have a great tool for doing this, so we simply used a Visual Studio solution containing a set of XSD schema definitions which defined our messages.
The schema repository would then build .NET assemblies from the schema definitions, so we would also have a .NET type for each message. We accepted that some messages would be easy to define in a canonical fashion and well understood, but also that some might be much more difficult, and some might even be application specific in some ways. Even with this in mind we still put all of these into the schema repository and built up a model of our messages.
We also knew that, although XSD and XML were popular at the time, JSON was gaining in popularity too, and we wanted to encourage that. With this in mind, and with the "any message over any channel" principle, we wanted to make sure the architecture could deal with messages in both JSON and XML. Using the XSD model also made it possible to use the namespace and root element to uniquely identify a type of message when required, and to implement a versioning approach.
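Identifying a message from its namespace and root element can be sketched as follows. This is a hedged Python illustration: the namespace URI and the `.../vN` versioning convention are assumptions for the example, not the organisation's actual schemas:

```python
# Sketch: derive (namespace, message name, version) from an inbound XML
# message, mirroring how the XSD model let the architecture uniquely
# identify and version message types. The namespace convention is assumed.
import xml.etree.ElementTree as ET


def identify_message(xml_text: str) -> tuple:
    """Return (namespace, root_element, version) for an inbound message."""
    root = ET.fromstring(xml_text)
    # ElementTree qualifies tags as "{namespace}LocalName"
    namespace, _, local = root.tag[1:].partition("}")
    # Assume a versioned namespace convention like .../customer/v2
    version = namespace.rstrip("/").rsplit("/", 1)[-1]
    return namespace, local, version


ns, name, ver = identify_message(
    '<CreateCustomer xmlns="http://example.org/messages/customer/v2"/>'
)
# ns is the namespace URI, name is "CreateCustomer", ver is "v2"
```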
Generic Smart Endpoint
In each Service Container we had our .NET services framework, which was the basis of the approach used within the component. In the container there was a single endpoint which was generic and could take any message. Initially we started with just a WCF SOAP endpoint, but later we added support for a generic REST-ish endpoint too. The point was that you could send any message to the endpoint and the service would internally work out what to do with it.
This generic endpoint had a number of key benefits. The first was that the configuration story got much simpler. We no longer had multiple endpoints in every component, creating more WCF configuration and more config settings that could be set incorrectly during deployment. For each service implemented with the Service Container there was only one endpoint.
The next benefit of the generic endpoint was that framework-level features could be implemented under the hood, ensuring consistency across components. We used this to implement things like security patterns, logging and error handling approaches. We did not need to rely on functional developers to do these lower level things; they just had to follow the pattern that was already there for them. Imagine that every service you build automatically has the same approach to logging and monitoring. That on its own is a pretty good thing to have.
Another benefit of the generic endpoint was that we built support for XML and JSON messaging into it. This meant that as a consumer of the service you would simply send a message and say "oh, by the way, the message body is JSON" (or XML), and the generic service endpoint would just handle it. The functional developer building a business feature would be working in a class within the service container defined as a handler of that message; they would work with a .NET type and not really have to worry about the messaging infrastructure under the hood.
The generic endpoint also offered a degree of polymorphism between services. We could start by sending a message to one service now, but in the future we could simply change the routing of messages and send that message to a different component without having to change the internals of any component.
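The shape of a generic endpoint, detecting the body format and dispatching on message type, can be sketched like this. Again this is an illustrative Python version; the original was a WCF endpoint, and the registry, decorator and message names are all hypothetical:

```python
# Illustrative sketch of a generic endpoint: one entry point accepts any
# message, detects whether the body is JSON or XML, and dispatches to a
# handler registered for that message type.
import json
import xml.etree.ElementTree as ET

HANDLERS = {}


def register(message_type):
    """Register a handler function for a given message type."""
    def wrap(fn):
        HANDLERS[message_type] = fn
        return fn
    return wrap


def generic_endpoint(body: str, content_type: str) -> dict:
    """Single endpoint: parse JSON or XML, then route on the message type."""
    if content_type == "application/json":
        message = json.loads(body)
        message_type = message["type"]
    else:  # assume XML; the root element names the message type
        root = ET.fromstring(body)
        message_type = root.tag.split("}")[-1]
        message = {child.tag.split("}")[-1]: child.text for child in root}
    return HANDLERS[message_type](message)


@register("GetProduct")
def get_product(message: dict) -> dict:
    # The functional developer only writes this handler; the format
    # detection and routing above is framework plumbing.
    return {"type": "GetProductResponse", "productId": message["productId"]}
```

The same handler serves both `generic_endpoint('{"type": "GetProduct", "productId": "42"}', "application/json")` and the equivalent XML body, which is the "consumer just says whether the body is JSON or XML" behaviour described above.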
When we started this part of the journey we expected that we would have the following kinds of services:
- Application Facade Services
- Composite Services
- API Gateway Services
- Process Orchestration Services
Application Facade Services
We had a big estate of applications and most of these applications had their own set of web services. Some of these applications were off the shelf packages and some were custom developed. At this point in time they were all on premise but in the future there ended up being SaaS applications too.
For each application we would usually build a type of service we called an Application Facade Service. The role of the facade was to make it easier to integrate with the application. The architecture allowed for more than one application facade service per application. The role of this component was to take a message, handle it, and interact with the application to fulfil the requirements of processing that message. This could involve looking up data and returning a response, or making changes to records in the application.
Composite Services
Composite services would often sit on top of a set of application services and mash a bunch of them together into a combined service. As an example, if product information was split across two applications, there might be a products composite service which pulls together information from both to create a message about products containing information from both apps.
Composite services would tend to be smaller in scope and have a more specific purpose.
API Gateway Services
To us an API Gateway component was the entry point to a defined domain. This service was about providing an adapter so a party could work with services inside the domain when they couldn't easily access our services for one reason or another. One example was that we wanted the applications in our DMZ to access internal services via a gateway; in another, we wanted external partners and applications outside the organisation to communicate with our internal services architecture via an API Gateway too. I'll be going through these examples in more detail in a later article.
The API Gateway is really just another implementation of the facade pattern (a simple interface to a complex subsystem) but in this case rather than focusing on encapsulation as the core reason we are thinking more about adapting the subsystem so it’s easier for the consumer of the gateway to work with it.
Process Orchestration Services
We considered process orchestration services to be ones where we would typically need some kind of workflow to coordinate actions across multiple systems, or over a longer running period of time. In our architecture a good way to implement these was to offload the messages to BizTalk and use its orchestration capability. A good example of the flexibility of the services approach: we could begin the process of creating a new customer by sending a message to a service which updated one application; six months later, when creating a customer had become much more complex, we could re-route the message to the generic endpoint exposed by BizTalk, which internally would implement a workflow sending messages across three or four applications in a coordinated fashion. Each time BizTalk sent a message to an application, it could still do so via the generic endpoint of an Application Facade Service, or it could use a native interface for that application hooked directly into BizTalk. We had lots of flexibility here.
Balance of Services
We expected that application facade services would be difficult to get to the right level of granularity at the first attempt. Some of the applications were pretty big and covered a lot of areas. We would start with a single application facade, and from there it was about finding a balance in how many facades we had per application. The trade-off is that more facades mean more deployment complexity, but the components are smaller and more specific. Also, each time a component needs to make an out-of-process call, it increases the latency of processing a message. We expected that most services for each application could be implemented in the default container for that application, but if there was a high-risk service, or a service which was special in some way, we might separate it into its own container.
This polymorphism was a big benefit and allowed us to move services around over time to work out the best balance.
One of the benefits of this polymorphic approach was that if a service had a bug identified in it, or needed a new feature, we could choose to deploy a new version of it in a new container. If we wanted, this container could be a temporary service which would operate until we made the change in the original container. We could then simply re-route the message as required.
In practice there were a couple of occasions where we used this approach to fix a bug, and the temporary service allowed us to get a quick fix out with minimal regression impact on other parts of the system.
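Because every container exposed the same generic endpoint, a hotfix like this was essentially just a routing change. A minimal sketch, with entirely hypothetical endpoint URLs:

```python
# Minimal sketch: routing a message type to a component. Because all
# containers share the same generic endpoint contract, re-pointing a
# message type at a temporary hotfix container is a one-line change.
ROUTES = {
    "CreateCustomer": "http://services/customer-facade/endpoint",
}


def route(message_type: str) -> str:
    """Look up which component currently handles a message type."""
    return ROUTES[message_type]


# Re-route to a temporary container carrying the bug fix, without
# touching any consumer or the internals of the original component:
ROUTES["CreateCustomer"] = "http://services/customer-facade-hotfix/endpoint"
```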
One of the things we had in the back of our minds when this architecture was put together was that if we could demonstrate some success with this approach, we would be able to get investment to virtualize some of these services with products like SOA Software or Sentinet. At this point we would have considered ourselves a good couple of steps down the path of maturity in building services, regardless of whether you call them SOA, API or whatever. The key thing we had was standardization of our approach, which gave us a great foundation to take to the next level. I felt that if we then added a service virtualization platform on top, we would have ticked most of the technical boxes for an SOA platform, covering technical approach, monitoring, governance and ALM. Even if we didn't have the business lined up to "work the SOA way", we should still have a good amount of success and definitely be able to support an agile services evolution.
Hopefully at this point you can see the thinking behind our approach, which will help when we go through some real world examples in later articles in this series.