9 tips for moving code to microservices

Your IT shop may aim to move to a distributed, containerized microservices architecture. But you don't want to throw away all your old code in the process. Here's what you should know before you get started.

Many shops are looking to break big existing monolithic applications into containerized microservices running on distributed architectures. Docker and Kubernetes are the most popular technologies that can make this happen, but others are sprouting up as containerization grows more popular, and you might soon find yourself looking to make this move.

There are many good reasons to do so as part of an ongoing digital transformation project. "The benefits are numerous—from increased team productivity to easier automated testing, rapid and flexible deployment models, and higher overall resiliency," says Ev Kontsevoy, CEO and co-founder of Gravitational, creator of the open source Gravity project, which converts complex cloud applications into single-file Kubernetes appliances.

But there are potential pitfalls too, Kontsevoy says, including "the need for careful planning and higher R&D investment upfront." Or, as Clouductivity founder Marcus Bastian puts it, "Migrating a monolithic application from a virtual machine or physical server to Docker [or some other containerized platform] is no trivial task. Most of the time, simply lifting and shifting isn't an option."

To help developers and IT operations make the move to microservices, we asked for advice from several tech pros who have experience porting monolithic applications to a shiny new microservices architecture. These nine tips can guide you as you begin the process.

1. Make sure everyone knows what they're getting into

Before you write a single line of code, make sure that everyone who signs off on the project fully understands its scope. Not everyone is aware of what's entailed in moving to a containerized microservices architecture, so make sure the decision-makers pushing for this transition—whether they're your customers or your bosses—know what they're signing up for. One extra reason to do so: These architectures are closely associated with DevOps, a philosophy that's become popular enough to attract trend chasers and cargo cult thinking.

"It's important to explain to the customer what it is exactly that we are going to deliver," says Miguel Murillo, a solution architect at Hewlett Packard Enterprise who recently helped port eIUM to a containerized architecture. "For instance, sometimes the customer knows about the new technology but is not aware of what the requirements for the new technology are. A customer with VMware might want to use containers but not understand the infrastructure changes needed to use them."

Sometimes you need to hash out the basics to set expectations properly. "Every customer seems to have a different opinion about what a microservice is," Murillo says. "So we need to explain what we mean by it and how we intend to implement microservices in our application. A customer might say, 'This is too big to be a microservice; it's not what I expected,' because they think of a microservice as a very, very small component, but that's not always the case."

2. Identify the downstream effects

Do your best to see things from the customer's perspective—whoever that customer might be. This type of migration affects a large and presumably important application on which many people rely. As a result, you need to give extra attention to their needs, even if it means sacrificing a certain degree of theoretical architectural elegance.

This piece of advice applies to any big development project, of course. Jeffrey Lent, lead enterprise instructor at Galvanize, worked on a project to port "an ancient mainframe ERP" to a microservices architecture. The system provided the company with core financials as well as customer relationship management (CRM) functions. One of the biggest challenges he encountered was maintaining user interface continuity for the call center staff who used the CRM.

Lent and his team thought this would be a "quick win," so they tackled it first as a pilot project. "We began by replacing their green-screen interfaces with a web-based front end," he says. "This turned out to be surprisingly contentious because many of the call center users were used to interacting with the green screens using muscle memory while multitasking. We had to develop the web app to use keyboard shortcuts similar to the CICS interface in order to keep their usability in place.”

Lent concludes, “The takeaway is that you really need to listen to the voice of the customer on user experience and not make assumptions about the domain."

External end users aren't the only ones you're writing code for. Consider the effect on internal users. "My experience has taught me to think from the external perspective of developers and QA teams," says Daniel Bryant, product architect at Datawire. Those users need to test and work with parts of the system that, after the migration, are deployed on remote container orchestration clusters. "With the newly migrated system comprising more moving parts than the previous monolith— and with new technology like containers in the mix—the entire developer experience can change.”

He adds, “Tooling like the CNCF-hosted Telepresence application can help with remote-to-local development and testing by using a proxy to 'virtually' place an engineer’s laptop into the cluster while they still have access to all of their local tooling, which enables fast feedback."

3. Work from the outside in

While you might dream of completely breaking down an unwieldy monolith into a happy mesh of individual microservices, that isn't always realistic. "Rarely do we have the fortune to just Dockerize applications and they run just fine," says Clouductivity's Bastian. "Sometimes we need to slowly break off chunks of a monolithic application into separate microservices that are built with Docker in mind."

One lesson Bryant learned from several monolith-to-microservices migration projects is to work from the outside in. "First, drive the initial steps of the migration from the perspective of the external user and associated functionality, as it's much easier to provide tangible (and demonstrable) end-to-end value here," he says. "For example, adding a new satellite microservice that provides a newsletter sign-up is a much more well-bounded project than something like 'extract the user subsystems from the monolith.' This approach helps to avoid getting stuck with the technical aspects of 'internal' system refactoring."
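To make that example concrete, here is a minimal sketch of what such a satellite service could look like, written in Python with Flask. The framework choice, route, and in-memory storage are illustrative assumptions, not details from any project Bryant describes:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical satellite microservice: it handles only newsletter sign-ups
# and sits beside the monolith rather than being carved out of it.
signups = []  # in-memory placeholder; a production service would use a shared store


@app.route("/signups", methods=["POST"])
def create_signup():
    payload = request.get_json(silent=True) or {}
    email = payload.get("email", "")
    if "@" not in email:
        return jsonify({"error": "invalid email"}), 400
    signups.append(email)
    return jsonify({"email": email}), 201


if __name__ == "__main__":
    app.run(port=8080)
```

Because the service owns a single, well-bounded capability, it can be containerized and deployed end to end without touching the monolith's internals.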

The goal for Lent’s project was "to peel off as much functionality as possible from the ERP, starting with CRM services and ending with core financials." As mentioned, his team started with the user interface. "Once that was in place, we slowly replaced bridge code that connected the front end to the mainframe with actual microservices that replaced mainframe functionality,” he says.

One IT faction believed that porting the core financials was achievable. “But many of us felt that there was a fair amount of hubris driving that assessment and that the complexities of meeting the compliance demands of a large public company were something better served by off-the-shelf software," Lent says. "I left the project before the whole refactor was complete. The CRM functionality and a bunch of services related to EDI got refactored into a microservice architecture, but at this point, the core financials are still running on big iron."

4. Pick the right language(s) for the job

Monolithic applications are generally built on a single language or platform, but that's not necessary for a microservices-based architecture—and sometimes it's not even advisable. "When breaking up a monolithic application, you don’t have to write all of the component pieces in the same coding language," says Shawn Moore, CTO and founder of Solodev. "With the cloud, you pick the best language for the application. For example, let’s say you’re converting your old-school PHP/.NET application into AWS Lambda functions. While you can run PHP on Lambda [via a custom runtime], using a language such as Python may be a better choice. Python was one of the first languages Lambda supported. It has more code examples, deeper documentation, and a larger user base.

“And Python is actually fun to code with," Moore adds.
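For readers who haven't written a Lambda function, the unit of deployment is just a handler: a function that receives an event and returns a response. The sketch below is a generic, hypothetical Python handler shaped for an API Gateway-style invocation; the event fields and response format are assumptions, not part of anyone's migration:

```python
import json


def handler(event, context):
    # AWS Lambda entry point: 'event' carries the invocation payload,
    # 'context' carries runtime metadata such as remaining execution time.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Each function like this becomes its own small, independently deployable piece of the former monolith.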

5. Watch where you store data

"Monolithic applications frequently contain code which uses the local disks to store state information and may also pull their configs from files, instead of from, say, environment variables," says Bastian. This is an important aspect in how to think about things that need to change in the transition to a microservices architecture.

"Assume that an instance of your application can disappear at any time, along with any temporary data associated with it—if the Docker scheduler for the orchestration platform you're implementing decides to move your container to another host in the cluster, for instance," he says. "Consider moving session data to something like Redis, Memcache, or another caching solution. This will enable all containers to use another service to access and share state information. It makes the data more resilient too, which is a huge perk."

Artem Aksenkin, a DevOps engineer at Belitsoft, suggests you rethink how your application treats all kinds of storage, not just session data. "If you use Docker, don’t place the microservice and the database into a single image," he says. "Create two separate files and two separate images. Docker is lightweight, so there's no need to negate this advantage with clunky structure."

This requires some upfront work on your part, but it isn't just an annoyance. As Gravitational's Kontsevoy points out, in the long run, this provides flexibility, which is among the advantages of a microservice architecture. "As your needs for CPU power and storage grow, you will find that these resources have very different scaling costs and characteristics," he says. "Not having to rely on local storage from the very beginning will allow you to adapt to future workloads with relative ease."

6. Keep tabs on APIs

In a microservices architecture, components that used to call each other in-process communicate through APIs exposed over the network. Those APIs become a critical part of the application. Since you'll be writing many of them from scratch as you break apart your monolith, you need to get them right.

"As your microservices will be using each other’s APIs to communicate with each other, designing a schema for maintaining backward compatibility will be critical," says Kontsevoy. "A developer should not be demanding that everyone else upgrade their code simply by deploying the latest version of one microservice—this will be a step backward towards the monolith approach.”

He continues, "Development teams must agree upon a reasonable compromise between supporting old APIs forever and keeping a higher velocity of development. This also means that API design becomes an important skill. Frequent breaking API changes are one of the reasons teams fail to be productive in developing complex microservices."
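One common way to strike that compromise is to version the API and only ever add fields rather than change existing ones. The hypothetical Flask sketch below keeps the v1 contract frozen while v2 carries the new data; the service, routes, and fields are invented for illustration:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical user-profile microservice. The v1 response shape is frozen,
# so consumers of v1 never need a coordinated upgrade when v2 ships.


@app.route("/v1/users/<user_id>")
def get_user_v1(user_id):
    # Placeholder data; a real service would query its own datastore.
    return jsonify({"id": user_id, "name": "Ada"})


@app.route("/v2/users/<user_id>")
def get_user_v2(user_id):
    # v2 adds a field instead of renaming or removing anything from v1.
    return jsonify({"id": user_id, "name": "Ada", "timezone": "UTC"})
```

Consumers of v1 keep working unchanged until they choose to move to v2, so deploying a new version of the service never forces a coordinated upgrade.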

Don't panic: Tools can help. Javed Sikander, CTO at NetEnrich, recommends API management tools like Azure API Management or Apigee for securing, throttling, and error handling of microservices APIs.

7. Embrace asynchronicity

As Kontsevoy describes it, the traditional approach of software engineering involves "gradually building applications by adding more and more subroutines or objects that call each other. But that stops working as workloads grow and the application itself must be stretched across multiple machines or even data centers."

This scaling problem is one of the main incentives for moving to distributed microservices. The move involves rearchitecting the application around an event-driven model: instead of calling a function and waiting synchronously for a result, a component emits an event and carries on.

One important tool for doing this is the message bus. "As your monolithic application gets broken into event handlers and event emitters, a robust, performant, and flexible message bus is required," Kontsevoy says. Choose from the numerous options available based on application scale and complexity. “For a simple use case, something like Redis will do. On the opposite side of the spectrum is the ability to process events from multiple event sources, from streaming pipelines like Kafka to infrastructure and even monitoring events, if you need your application to be truly cloud-native and scale itself up and down."
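As a simple illustration of that shape, the Python sketch below uses Redis pub/sub as the bus; the channel name and event fields are hypothetical. The emitter publishes an event and returns immediately, while a worker running elsewhere consumes events at its own pace:

```python
import json

import redis

bus = redis.Redis(host="localhost", port=6379)


def emit_order_placed(order_id: str) -> None:
    # Publish the event and return immediately; nothing waits for the handler.
    bus.publish("orders", json.dumps({"type": "order_placed", "order_id": order_id}))


def handle_orders_forever() -> None:
    # Runs in a separate process or container and consumes events asynchronously.
    subscriber = bus.pubsub()
    subscriber.subscribe("orders")
    for message in subscriber.listen():
        if message["type"] != "message":
            continue  # skip subscription confirmations
        event = json.loads(message["data"])
        print("handling", event["type"], "for order", event["order_id"])
```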

8. Automate what you can

Automation is key to the DevOps philosophy as a whole, and it's a necessity as you break up your legacy monolith into microservices.

"If something can be automated, do it," says Vladimir Sinkevich, a Java architect at ScienceSoft. "Splitting an application into several components implies the existence of a number of different microservices with their own testing, deployment, and upgrade processes. It's necessary to keep track of each of these processes independently. Automated testing and CI/CD practices greatly assist in streamlining the development, testing, and deployment of a distributed app."

9. In some situations, rebuild rather than migrate

At Trulia, general manager Tim Correia recently oversaw a massive rearchitecture project that involved moving much of the company's functionality to a microservices architecture. And one lesson learned is that in some situations, it's best to start from scratch.

"Sometimes rebuilding, rather than migrating, is the best solution for realizing the full potential in performance and velocity gains," Correia says. "Early on, we tried to migrate some of our existing processes to microservices but realized we were trying to fit a square peg into a round hole. We stepped back and decided that reevaluating the current design of each service was the best approach. While it takes more time upfront, the end product is clear of technical debt, lowering our costs and saving resources."

Prepare to be rewarded

Daunting as your migration may seem, these tips can help you take the first steps. And though you're bound to encounter stumbling blocks, everyone we spoke to agreed the journey was worth it.

For HPE's Murillo, the increased ease of use at customer sites speaks for itself. While eIUM was originally OpenStack-based, even early OpenStack adopters are pushing to use containers. “Because, to be honest, containers are simpler,” Murillo says. “I can train people in a week to manage containers. To do the same with OpenStack, I need a month. To manage an OpenStack, you need a farm of servers. To manage containers, you just need your laptop."

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.