Containers for Application Development – Part 2: The Microservices Application Model

When last we left the container world, we had looked at the shifts that have brought the Container model front and center in every developer's attention. But we have yet to look at where this fits into the enterprise application development environment, and I believe we need to take one more detour through another technology shift before we can map any one enterprise's usage models onto the containerized future that is the latest promised land. That shift is the latest evolution of the Cloud Native, or web-scale, development paradigm: the Microservices revolution.

The Rise of Microservices

Much like Containers, Microservices are by no means new. In fact, they have their origins in the first distributed computing systems developed in the 60s and 70s as part of the growth of the ARPAnet and its offshoots. I’d go so far as to say that e-mail managed via SMTP was one of the first microservices, and was part of a multi-service architecture when mapped alongside the Domain Name System (DNS).


So what is a microservice then, and why are they such an important part of the current container revolution? Microservices are stand-alone application elements that provide an Application Programming Interface (API) for integration with other micro and macro services in order to deliver a useful function. Another defining aspect of a microservice is that its only interaction interface is its API. In the e-mail example, if one wanted to send a message, a separate component, a mail client, was needed to manage the composition and submission of the message, and a separate service, a local mail manager, was required to store end-user messages.
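To make that boundary concrete, here is a minimal sketch of what such a service might look like, written in Go. The /messages endpoint, the Message payload, and the in-memory inbox are my own hypothetical stand-ins rather than anything from a real mail system; the point is simply that callers only ever see the API, never the service's internals.

```go
// Hypothetical "mail submission" microservice: the only way in is the HTTP API.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Message is the API payload; callers only ever see this contract,
// never the service's internal storage.
type Message struct {
	To   string `json:"to"`
	Body string `json:"body"`
}

func main() {
	// Internal state is deliberately hidden behind the handler.
	// (A real service would guard this with a mutex and persistent storage.)
	var inbox []Message

	http.HandleFunc("/messages", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		var m Message
		if err := json.NewDecoder(r.Body).Decode(&m); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		inbox = append(inbox, m) // stand-in for real queuing/forwarding
		w.WriteHeader(http.StatusAccepted)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```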

Another component of the microservice model that mail conforms to is the management of service interaction endpoints. For mail to be distributed across an organization, or across the world, a lookup of remote machines was needed. While that task could have been handled by an embedded, static function, it is best managed by a lookup mechanism, and because the addressing scheme embedded in the e-mail address is based on DNS, the DNS service was the obvious lookup tool. In this way, the mail service can communicate with other like-minded services by asking DNS which remote mail system it should talk to in order to forward the message to its recipient, and that communication happens over the same API/interface used by the end user who initially asked the system to send the message to a remote location.
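As an illustration of that lookup step (my sketch, not part of the original mail example), Go's standard library can ask DNS for the mail exchangers of a recipient's domain. The domain name below is a placeholder; a real mail transfer agent would then speak SMTP to one of the returned hosts.

```go
// Sketch: ask DNS which mail systems accept mail for a recipient's domain.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	domain := "example.com" // hypothetical recipient domain

	// MX records are the DNS service's answer to "who do I forward this to?"
	// LookupMX returns the records sorted by preference, lowest (most preferred) first.
	records, err := net.LookupMX(domain)
	if err != nil {
		log.Fatalf("MX lookup for %s failed: %v", domain, err)
	}

	for _, mx := range records {
		fmt.Printf("forward to %s (preference %d)\n", mx.Host, mx.Pref)
	}
}
```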

The Importance of the Application Programming Interface (API)

So microservices provide single functions with an API (or set of APIs) that is the only way in which a remote service interacts with the microservice. A number of benefits come out of this model when one builds macroservices from microservices. These include more manageable failure domains, better feature management, and more efficient development teams. Let’s review these major aspects one at a time.

Managing failure is certainly something that often plagues large monolithic systems, especially when subsystems expect to be able to reach deeply into other components without a consistent API boundary. This issue is classically expressed in a now-mythic memo attributed to Jeff Bezos of Amazon.com, who laid down an edict along the lines of “if you do not use well-defined APIs to communicate between services, you will no longer work at Amazon,” or something to that effect.

The idea was that if someone only has to manage a database of the state of the book inventory, without also having to display that inventory to end users or expose the internal data structures directly to other consumers of that data, then a change in the underlying book management data structures doesn’t break anyone else’s access to the data. Those consumers are not reaching into the data directly, but only through a well-defined API that can absorb change more gracefully. This is even truer when the service API includes a guarantee of at least some level of backward compatibility, which gives all associated services time to catch up to changes made in the microservice’s interfaces and internal state.
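Here is a small sketch of that boundary in code, using my own hypothetical types and field names: the internal inventory record can be reorganized at will, because consumers only ever see the published response shape.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// bookRecord is internal; it can be reorganized, renamed, or stored differently
// without consumers noticing, because they never see it directly.
type bookRecord struct {
	SKU          string
	WarehouseQty map[string]int // e.g. per-warehouse counts added in a later refactor
}

// InventoryResponse is the published contract. New fields may be added,
// but existing ones keep their names and meanings (backward compatibility).
type InventoryResponse struct {
	SKU     string `json:"sku"`
	InStock int    `json:"in_stock"`
}

func inventoryHandler(db map[string]bookRecord) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		rec, ok := db[r.URL.Query().Get("sku")]
		if !ok {
			http.Error(w, "not found", http.StatusNotFound)
			return
		}
		total := 0
		for _, qty := range rec.WarehouseQty {
			total += qty
		}
		// Translate the internal structure into the stable API shape.
		json.NewEncoder(w).Encode(InventoryResponse{SKU: rec.SKU, InStock: total})
	}
}

func main() {
	db := map[string]bookRecord{
		"978-0": {SKU: "978-0", WarehouseQty: map[string]int{"east": 3, "west": 5}},
	}
	http.Handle("/inventory", inventoryHandler(db))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```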

This API contract model is key because of the boundary it places on change, access, and application expectations. In providing this boundary, it creates a failure domain that is easier to address in third-party applications, because a failure at the remote end is simply an inability to get a response, rather than partially formed responses and all manner of other possible failures from an undefined interaction with a remote service.
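Seen from the consumer's side, that is the whole failure surface. In a sketch like the one below (the service URL is made up), the caller only has to handle an error or a non-success status within a bounded wait; there is no partially shared state to untangle.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A bounded wait: if the remote microservice is down or slow,
	// the failure surfaces here as a single, well-defined error.
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get("http://inventory.internal/inventory?sku=978-0") // hypothetical endpoint
	if err != nil {
		fmt.Println("inventory service unavailable:", err)
		return // degrade gracefully; nothing half-broken to clean up
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		fmt.Println("inventory service returned:", resp.Status)
		return
	}
	fmt.Println("inventory response received")
}
```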

How APIs give microservice developers freedom

Feature management is perhaps less obvious at first, but the benefit is derived in a similar fashion to the failure domain model. By providing a definition of how a service is interacted with, new features can be developed and added to the API contract. And since that contract is the only interaction third parties have with the microservice, the developer has the freedom to modify and change the underlying code to meet the function’s needs, including improving performance, extending capabilities, and so on, all while providing a consistent interface to its end users.

In addition, as new features are added, it is easier to migrate to an updated version because third-party services interact via the API only. And if a RESTful API is used (as it often is), web technologies like load balancing can be used to support rolling upgrades and, if necessary, rollbacks as the service is deployed. This also enables a potentially faster cycle rate for new feature development, as a feature doesn’t need to be complete before it is enabled in the platform and others can begin to leverage the newfound capabilities.
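As a rough sketch of that load-balancing idea (again my own illustration, with made-up backend addresses), a small reverse proxy can spread requests across the current and new versions of a service; rolling the upgrade forward or back is just a matter of changing which backends are in the list.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Two versions of the same microservice, running side by side during an upgrade.
	backends := []string{
		"http://127.0.0.1:9001", // current version
		"http://127.0.0.1:9002", // new version being rolled out
	}

	var proxies []*httputil.ReverseProxy
	for _, b := range backends {
		u, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		proxies = append(proxies, httputil.NewSingleHostReverseProxy(u))
	}

	// Simple round-robin: a rollback is just removing the new backend from the list.
	var next uint64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
		proxies[i].ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In practice this role is usually played by an off-the-shelf load balancer or the platform's routing layer rather than hand-written code; the sketch is only meant to show why a RESTful, API-only surface makes the traffic-shifting trick possible.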

Microservices boost developer efficiency

This leads to the third benefit of this model, namely that developers can become much more efficient with microservices-based system models. Once the ability to innovate, and to ship that innovation without long integration and validation cycles, becomes the standard development approach (even if for only one out of many teams), the rate of change and the acceleration of feature development come along with the process. Certainly, there is work to be done in terms of proper DevOps and Agile development methodologies, but a tested, production-ready solution becomes the norm rather than the anomaly. And it is this “feature” of the microservices model that has developers so excited to get onto this class of platform. The ability to remove barriers to feature creation, delivery, and adoption changes the picture of an application’s life cycle into one that is much more compelling to deliver.


And that wraps up our overview of the microservices model and some of the key reasons it is so popular (at least in the press) as a way of making development more effective and efficient. While the benefits are clear, migrating an enterprise full of business-enabling applications to a microservices-based approach is not something that can be done overnight. Applications will need to be re-written, and some will need to be rebuilt from scratch to even begin to fit into a service model of this nature, but once the transition starts, it should be possible to break monolithic services into more manageable components. Once this happens, it is possible to derive better business value from the fact that there are suddenly more available and accessible data sources that can be used to determine business direction and value, something any MBA would always welcome.

Read Part 3: Benefits of Containers for Enterprise Users

Or, if you missed Part 1, read on: Rise of the Container.