Transform business with microservices and containers
The history of computing is punctuated by a set of seismic shifts in enterprise IT architectures.
Monolithic, highly integrated applications moved to integrated software stacks and N-tier architectures. Distributed computing has also gone through several incarnations. There have been multiple attempts at standardising inter-application communications, such as Unix remote procedure calls, the Distributed Component Object Model (DCOM), the Common Object Request Broker Architecture (CORBA) and web services. All have tried to promote code reuse, and to publish and share application programming interfaces (APIs), in a bid to avoid programmers having to “reinvent the wheel”.
Now, thanks to the success of Docker and Kubernetes, more businesses are looking at deploying containers. The reason for the popularity of this approach is that it helps businesses develop cloud-native applications, which can be delivered quickly to power digital transformation initiatives.
Forrester’s Now tech: enterprise container platforms, Q2 2018 report notes: “Container-centric, microservice-oriented, dynamically orchestrated cloud-native technologies help firms create highly differentiated apps and services that create compelling customer experiences. They’ve quickly become important elements of digital business transformation as they promise to speed software delivery and improve scale, resiliency, flexibility and implementation.”
Moving to an agile approach with containers
Red Hat OpenShift is one of the major enterprise container platforms identified in the Forrester report. Global information analytics business Elsevier is among the companies using the Red Hat product as it digitises its business.
Like many organisations, Elsevier began with a service-oriented architecture (SOA) and used containers as a way to make software development more agile.
Tom Perry, director of software engineering at Elsevier, says the company began with a traditional SOA, which did not support the business very well. “When I joined in 2015, we were using a half-baked SOA architecture. It was not very structured and it was proprietary,” he says.
This meant it was difficult for the software teams at Elsevier to build reusable components, which slowed down the pace of change. “We had a monolithic application – a jack of all trades – and it was a big move any time you wanted to adapt it,” Perry adds.
At the time, the company was in the process of changing from selling content to selling services on top of content. As well as the shift in the business, Elsevier was also shifting its approach to IT, closing its datacentre and moving on to the Amazon Web Services (AWS) cloud instead.
Perry says he wanted an architecture that would work with how the business was evolving. “Instead of looking at how applications interact, we wanted data access across end-to-end business processes,” he says. To achieve this, Elsevier needed a loose coupling between internal and software-as-a-service (SaaS) systems such as Salesforce.
Given that container platforms are constantly evolving, Perry says Elsevier initially tried Red Hat Fuse to start migrating from SOA to more of a hybrid container architecture. However, he says: “We could see where things were going, but the technology was nowhere near mature enough for the enterprise.”
As well as the technology constantly evolving, Elsevier also had to go through a learning curve. One of the lessons learnt from the company’s initial attempts at deploying containers was that the APIs and services being exposed required a lot of security. “We should have decoupled security,” Perry adds.
If Red Hat Fuse was not going to work, what else? While it is possible to build complete enterprise cloud platforms from open source components such as Kubernetes, Perry points out that Kubernetes is just the bit in the middle. “You need to build services around Kubernetes,” he says. Elsevier wanted a single product, so it selected the Red Hat OpenShift Container Platform. “We got an all-in-one platform, which allows us to pick up our code and deploy it to another platform,” adds Perry.
The first system to use the new platform was the company’s marketing and advertising system, which used both on-premises software and SaaS. Describing the setup, Perry says: “We looked at providing access to enterprise data and exposing a set of enterprise APIs for future reuse.”
Unlike the company’s attempt with Red Hat Fuse, he says the architecture is based on microservices running in containers. These perform only business logic, so there is no additional security overhead to worry about. “We use an API gateway to manage security so the services do not need to care about security,” Perry adds.
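The decoupling Perry describes can be sketched in a few lines. This is a hypothetical illustration, not Elsevier’s actual code: a gateway function checks the caller’s token against an identity store (here a simple set, standing in for a real identity provider), and the microservice behind it contains no security logic at all.

```python
# Hypothetical sketch of gateway-managed security.
# The microservice holds only business logic; the gateway alone checks tokens.

VALID_TOKENS = {"secret-token-123"}  # stand-in for a real identity provider


def enrich_record(record: dict) -> dict:
    """Business-logic microservice: no security code here."""
    return {**record, "enriched": True}


def gateway(token: str, record: dict) -> dict:
    """API gateway: authenticates the caller, then forwards the request."""
    if token not in VALID_TOKENS:
        raise PermissionError("invalid API token")
    return enrich_record(record)
```

Because authentication lives in one place, new services can be exposed without each team re-implementing security, which is the lesson Elsevier drew from its first attempt at containers.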
Beyond his experience with integrating security into the APIs, Perry believes containers are not suitable for all types of applications and workloads. “It is not always the right choice to use containers,” he says.
One example of when not to use containers is attempting to run an application server or database server in a container, since these involve monolithic code. According to Perry, there is little benefit in containerising such monolithic applications as heavyweight services.
Another takeaway from the use of containers at Elsevier is that not every part of the business is ready for cloud-native computing. Agile development methodologies are often associated with a cloud-native approach to application development.
Although Elsevier has started using agile approaches in some of its development projects, Perry adds: “There are different speeds across the organisation. Some services can work in an agile way, but others, like our Oracle eBusiness Suite, cannot.”
Allocating cost is another problem area for IT, in Perry’s experience. “We haven’t quite cracked how we apportion cost in the area of integration across a shared function,” he says.
Challenges of compliance
DevOps generally goes hand-in-hand with agile software development methodologies, giving teams the freedom to develop and deploy code quickly. But in a cloud-native architecture built of containers running microservices, speed and agility are not without risks, according to Jonathan Hotchkiss, head of cloud service reliability engineering at money transfer service WorldRemit.
As the company built out its serverless payment system using microservices, understanding everything that was going on became increasingly difficult, says Hotchkiss. “Unless done correctly, money is wasted doing DevOps because systems are built and forgotten about, or code is scaled up beyond its ability using the cloud,” he says. In effect, such code is not written to scale efficiently.
He says the company’s original platform began as a classical web e-commerce architecture using a monolithic database. “It was not the best architecture for scalability, so we started to pull out parts and develop them as microservices, using the Azure PaaS [platform as a service],” he adds.
Unfortunately, WorldRemit was unable to fully document all the microservices being developed. This was partially down to team culture. The company’s software development teams were transient, with teams lasting between 18 and 24 months. “No one team had a full idea of all the microservices. We didn’t know how it all worked,” adds Hotchkiss.
WorldRemit selected Dynatrace to provide a single dashboard offering an AI-driven topological view of all the components of the system. “When something is broken, Dynatrace highlights what is and is not good and gives us an intelligent answer of why there’s a fault,” says Hotchkiss.
Balancing compliance with giving DevOps teams the freedom to work effectively is always a challenge. As WorldRemit found out, without some level of control – such as the need for teams to document their work thoroughly – cloud-native architecture can quickly become unmanageable.
Strict enforcement of corporate rules, procedures and policies can limit flexibility, but sometimes it may be better to encourage the use of preferred tools and frameworks using best practice communities.
For instance, Porsche Informatik has established communities of engineers who promote best practices that are then fed back into the DevOps teams. Porsche Informatik tries to make its tools and frameworks as easy to use as possible so they become the first choice for DevOps teams.
Going cloud native
Speaking at the New Relic FutureStack event in London, James Governor, co-founder of analyst RedMonk, discussed how cloud-native architectures have changed how applications are debugged. At the time, he said: “[We must] build applications in a way they can be effectively managed. We are moving to an environment where we have to debug in production, which requires observability.”
According to Governor, tracing, logging and application performance monitoring (APM) are being harnessed to deal with problems in production code.
From the businesses Computer Weekly has spoken to, it seems they are learning how to develop in a world of serverless computing, microservices, containers and DevOps. If Governor is right, more businesses will need to adapt to begin testing code across their live production environments.
As WorldRemit found, it is necessary to understand how microservices are evolving in the business. And while the tooling, programming languages and frameworks that developers adopt may well come down to personal choice, having key members of the DevOps team involved in best practices – as is the case at Porsche Informatik – can help to cement preferred standards and tools in projects.
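The observability Governor describes usually starts with correlation: tagging every log line with a trace ID so one request can be followed across microservices in production. A minimal sketch of the idea, using only the Python standard library (the service names are illustrative, not from any of the companies above):

```python
# Hypothetical sketch of trace-ID correlation for debugging in production:
# a request is given an ID at the edge, and every service logs with it,
# so logs from different microservices can be stitched back together.

import uuid


def new_trace_id() -> str:
    """Generate a correlation ID at the edge of the system."""
    return uuid.uuid4().hex


def log(trace_id: str, service: str, message: str) -> str:
    """Emit a structured log line keyed by the shared trace ID."""
    line = f"trace={trace_id} service={service} msg={message}"
    print(line)
    return line


trace = new_trace_id()
log(trace, "gateway", "request received")
log(trace, "payments", "charge authorised")
```

Real systems delegate this to tracing and APM tooling rather than hand-rolled logging, but the principle is the same: without a shared identifier, a fault in one of many microservices is very hard to reconstruct after the event.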