Solving the Microservices Challenge Through Open Source Collaboration
Organizations are now seeking tools to easily connect, manage and secure networks of microservices.
December 19, 2017
By Jason McGee
A microservices architecture is particularly well suited to applications that are continuously updated and iterated on in the cloud. However, microservices also introduce new challenges for development teams, such as security, traffic management, and operational complexity.
As developers move to microservices, they create many moving parts. Unless those parts are managed well, the added complexity can become overwhelming, especially compared with traditional monolithic approaches: as more moving parts need to scale together, the number of possible failure points increases.
Working with microservices requires a whole new level of versatility and flexibility. Services and tools can often be bound tightly to a particular language, such as Java, but enterprise developers and IT teams often need to work with a wide variety of languages across disparate teams.
As organizations recognize these challenges, they are now seeking tools to easily connect, manage and secure networks of microservices.
Existing Platforms for Handling Microservices Are Not Enough
Let’s start with some background on microservices. Building with microservices is a way of decomposing monolithic apps into modules for each function or element of the app, which are independently developed, deployed and operated.
Think of a honeycomb. It is made up of independent hexagonal cells, yet as more cells are added, it grows organically into a solid structure. Much like a honeycomb, the microservice architectural style makes each element of functionality a separate service, and scales by distributing these services across servers, replicating them as needed.
The cloud is an important part of the microservices conversation, as it’s quickly becoming the de facto standard for deploying new and modified applications built with a microservices architecture. When development teams follow this approach, the app becomes cloud-native and takes advantage of all of the facilities provided by the cloud, such as elastic scaling, automated deployments and disposable instances.
But, as individual development teams manage and change their microservices, it becomes difficult to keep all of the pieces working together as one, even with the cloud. Enterprises can build custom solutions to these challenges, but these are often unable to scale outside of their own teams.
The First Move: Make a Shift to a Service Mesh
Service mesh architecture is one way to tackle this challenge. Service mesh is a software layer that decouples the communication between microservices, and it’s quickly become an integral part of microservice projects. In the past year, service mesh has emerged as a critical component of the cloud native stack, with companies such as Lyft using a service mesh as part of their production applications.
But, what does the service mesh actually do?
Think of a service mesh as a network of interconnected devices with routers and switches. Except in this case, the network exists at the application layer. The goal is to get a request in a reliable and timely manner across this mesh from microservice to microservice, such as the request to analyze a picture with visual recognition once a photo storage service recognizes an upload.
Even simple apps can span hundreds of microservices, so teams need a mechanism that is fault-tolerant, as well as something that provides more visibility into and control of a complex network. And as larger, more complicated apps are decomposed into microservices, software teams have to account for challenges such as security, service discovery, compliance, and end-to-end monitoring. Attempts at solving these challenges, cobbled together from libraries, scripts and Stack Overflow snippets, can lead to solutions with poor observability characteristics that often end up compromising security.
The Second Move: Implement a Service Mesh Using Open Source Tools
Service mesh architecture was created to ensure reliable, secure and timely communication between microservices. But as microservices become the de facto choice for developers, they need a solution that actually implements that architecture.
While many cloud platforms are great for deploying microservices, they were designed to simplify app deployment across multiple runtimes. Similarly, while many container orchestration services can handle multiple container-based workloads, including microservices, they still need help when it comes to sophisticated features, such as traffic management and failure handling.
Developers need help keeping track of the traffic flow between microservices, routing traffic based on requests or traffic origination points, and handling failures in a graceful manner when microservices are not reachable. More importantly, developers need to do all of this without changing the application code.
Tech companies such as IBM, Google and Lyft have recognized the need for open source tools that can address these challenges, without requiring any changes to the actual apps. Each of these companies recently addressed separate, but complementary, pieces of this problem using service mesh architecture as a foundation.
The approach they took was to combine forces and develop the open source technology Istio as a means to support traffic flow management, access policy enforcement and telemetry data aggregation between microservices. Istio does all of this without requiring developers to change their application code.
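As a hedged sketch of what that configuration-driven approach looks like, the resource below uses Istio’s VirtualService API to add a timeout and automatic retries to calls to a service; the `reviews` service name is hypothetical, invented purely for illustration:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews              # hypothetical service name
  http:
  - route:
    - destination:
        host: reviews
    timeout: 2s          # fail fast instead of leaving callers hanging
    retries:
      attempts: 3        # transparently retry transient failures
      perTryTimeout: 500ms
```

Applying a rule like this changes the behavior of every client that calls `reviews` through its sidecar proxy; the application containers themselves remain untouched.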
Through the collaboration of these three leaders, Istio converts disparate microservices into an integrated service mesh by introducing programmable routing and a shared management layer. It uses a sidecar pattern in which each sidecar is an instance of Envoy, Lyft’s open source edge and service proxy, developed to support Lyft’s own microservices journey. By injecting Envoy proxies into the network path between services, Istio provides sophisticated traffic management controls, such as load balancing and fine-grained routing.
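That fine-grained routing can be sketched as a weighted traffic split, for example a canary rollout. In this hypothetical fragment (`photos`, `photos-v1` and `photos-v2` are invented names), 10% of requests are shifted to a new version of a service, again without modifying any application code:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: photos
spec:
  hosts:
  - photos                 # the virtual host clients call
  http:
  - route:
    - destination:
        host: photos-v1
      weight: 90           # stable version keeps most traffic
    - destination:
        host: photos-v2
      weight: 10           # canary receives a small slice
```

Because the Envoy sidecars enforce these weights, shifting traffic between versions is a configuration change rather than a code change.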
Problem-Solving: It Takes a Village
Istio is just one approach to addressing the microservices challenge. It’s not targeted at any one deployment environment, but as it currently stands, Istio supports Kubernetes-based deployments and virtual machines. Soon, it will enable rapid and easy adaptation to other environments, such as Cloud Foundry. And since it’s a project completely in the open, developers can continue to implement microservices, without vendor lock-in.
But the bigger lesson is that Istio’s development wasn’t the result of just one company open sourcing its code. It emerged because industry players took collaborative, community-focused actions to address critical developer challenges around building and operating microservices.
The true beneficiaries are development teams, which will have more access to not only an active community, but also to a host of open collaborations and resources to help them tackle their own microservices challenges moving forward.