Leveraging your Ingress Controller to easily migrate to Kubernetes

November 10, 2020

Many enterprise organizations choose Kubernetes (k8s) as the foundation of their IT modernization efforts due to its alignment with cloud native practices. However, a question naturally arises during the adoption process: how should existing legacy applications be handled as part of a broader Kubernetes migration? As it turns out, the answer is closely tied to the Ingress Controller, one of the core components of a Kubernetes cluster. In this article, we'll delve into the question of migrating legacy applications by discussing the specific challenges these workloads pose and outlining a strategy to overcome them.

Legacy applications and Kubernetes

The functionality provided by legacy workloads typically encapsulates significant business value for organizations. Having been designed and implemented in an earlier era, they also tend to gravitate towards monolithic architectures powered by older programming languages and toolchains. For these reasons, they're often stagnant, with little ongoing development beyond addressing high-priority bugs or significant security vulnerabilities.

Here lies the dilemma: on the one hand, these legacy workloads are highly valuable, and modifying them in any way, including their operating environment, creates risk for the business's daily operation. On the other hand, leaving them out of a Kubernetes migration means maintaining older operating environments and prevents teams from reaping the benefits Kubernetes provides for these critical applications. What's needed is a way for IT leaders to mitigate the migration risks associated with legacy applications.

Discussions around Ingress Controllers often arise as part of networking and routing in Kubernetes, particularly in connecting external users to applications. In practice, due to their strategic placement in the overall architecture, their capabilities extend to use cases well beyond connectivity. As we'll discuss in more detail, Ingress Controllers, used effectively, can ease the process of migrating and running legacy applications on Kubernetes by reducing or mitigating many of the risks that might otherwise prevent IT from making the transition. To illustrate where Ingress Controllers fit into the overall migration picture, consider a high-level outline of a general migration strategy (we'll dive into each area next):

Migrating your Legacy Applications using Kubernetes Ingress
  • Deploy legacy workloads on Kubernetes - Get legacy applications running on k8s as simply and quickly as possible
  • Select an option for ongoing development / maintenance:
    • Build around a legacy application - Use Ingress Controller functionality to route traffic in a manner that allows building on top of the legacy application without modifying it
    • Build your way out of a legacy application - Use Ingress Controller functionality to enable iterative refactoring of the legacy codebase

Lift and Shift: Deploy legacy workloads on Kubernetes

The first step of our migration strategy entails establishing a baseline: getting the legacy codebase running on Kubernetes. The goal is to achieve a standardized operating environment for all workloads and to create a starting point for further improvements. To accomplish this, the team must containerize the monolithic codebase and its associated dependencies. While there is no single containerization recipe that works across all applications, there are well-known items that need addressing as part of a "lift and shift" operation.

First, an appropriate Docker base image should be selected or defined for the legacy application. Depending upon the language and technology stack used in its implementation, there may be viable candidates available on Docker Hub. Otherwise, DevOps engineers will need to craft a custom image. Once the team establishes a base image, they leverage it to iterate on the monolithic application's candidate release images. In some cases, engineers will augment base images by injecting build artifacts. In others, it may be necessary to generate artifacts using the base image itself through multi-stage build processes. Once the containerized image is available, it can be deployed onto a Kubernetes cluster and validated.
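To make this concrete, here is a minimal sketch of deploying the containerized monolith onto a cluster, assuming the image has already been built and pushed to a registry. The image name, registry, container port, and replica count are placeholders for illustration, not values from any specific application.

```yaml
# Hypothetical Deployment for the containerized legacy monolith.
# Image name, port, and replica count are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-monolith
  labels:
    app: legacy-monolith
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-monolith
  template:
    metadata:
      labels:
        app: legacy-monolith
    spec:
      containers:
        - name: legacy-monolith
          image: registry.example.com/legacy-monolith:1.0  # placeholder image
          ports:
            - containerPort: 8080
---
# ClusterIP Service exposing the monolith inside the cluster for validation
# and for the routing patterns discussed in the following sections.
apiVersion: v1
kind: Service
metadata:
  name: legacy-monolith
spec:
  selector:
    app: legacy-monolith
  ports:
    - port: 80
      targetPort: 8080
```

Once applied with kubectl, the monolith can be validated in-cluster before any external traffic is routed to it.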

Build around legacy applications with Ingress Controllers

Once the team establishes a baseline deployment, they have options for managing the legacy workload going forward. There will inevitably be a need to extend the monolith's functionality, and this is where Ingress Controllers can help reduce complexity and risk. Specifically, instead of requiring developers to modify or refactor the legacy application, the core application can be left intact. New functionality is added by introducing services that logically sit between end users and the monolith. Because developers build these services from scratch, they can be implemented using cloud native best practices. Traffic from external users is routed to this intervening service layer by configuring the cluster's Ingress Controller, and the new services call the containerized legacy application as needed for specific functionality.
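As a sketch of this pattern, the Ingress resource below routes all external traffic to a hypothetical facade-service, the new cloud native layer, which in turn reaches the legacy application through its in-cluster Service. The hostname and service names are assumptions for illustration only.

```yaml
# Ingress sending all external traffic to a new service layer that sits
# between end users and the legacy monolith. Names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-facade
spec:
  rules:
    - host: app.example.com           # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: facade-service  # new cloud native layer
                port:
                  number: 80
```

Because the facade service calls the monolith through its ClusterIP Service, the legacy application itself never needs to be exposed externally or modified.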

Build away from legacy applications with Ingress Controllers

An alternative approach to realizing additional functionality around a legacy application, once it is deployed on Kubernetes, is to employ the so-called Strangler pattern. As the name suggests, this strategy consists of gradually replacing the legacy codebase by migrating features to new microservice implementations, which may also incorporate additional capabilities. Compared to a wholesale reimplementation, the overall risk is spread over time. In addition, teams can always fall back to the original implementation if needed, since it is left intact. The Ingress Controller is the key to enabling this strategy on Kubernetes, as it allows operators to route traffic from external users to the refactored microservices instead of the legacy application. As functionality continues to shift away from the monolith, it is "strangled" out, and eventually the legacy application is ready to be removed from the cluster altogether.
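A minimal sketch of this routing is shown below, assuming an /orders feature has been extracted into a new orders-service while everything else still lives in the monolith; the hostname, path, and service names are placeholders.

```yaml
# Strangler-style routing: the extracted /orders feature is served by a new
# microservice, while all other paths still reach the legacy monolith.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strangler-routing
spec:
  rules:
    - host: app.example.com            # placeholder hostname
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service   # newly extracted microservice
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: legacy-monolith  # remaining legacy functionality
                port:
                  number: 80
```

Typical Ingress Controllers match the most specific prefix first, so requests to /orders reach the new service while everything else falls through to the monolith. As more features are extracted, additional path rules are added, until the final rule pointing at the legacy application can be removed entirely.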

Conclusion

For many enterprise organizations, legacy applications continue to support critical processes that form the business's backbone. Therefore, IT leaders need to understand potential strategies for handling these workloads during a Kubernetes migration.

In this article, we've reviewed how Ingress Controllers can significantly reduce the risk of legacy migrations while also enabling continued development around legacy implementations. While the approaches outlined here are achievable with today's Ingress Controllers, this area of the Kubernetes ecosystem is also evolving rapidly, as evidenced by the ongoing evolution of the Service APIs. Enterprises can safely assume that the ability to leverage resources such as Ingress Controllers to ease migration challenges will only improve in the future.

Learn more about Traefik Enterprise today and discover how businesses are leveraging the power of enterprise-grade Kubernetes Ingress to solve their most demanding challenges.

