Why Location Matters When Load Balancing Traffic in Containerized Environments

Load balancing traffic in containerized environments is typically more complicated than in traditional environments. This is due to the nature of containers: the location at which a container runs can change rapidly, so things like network addresses and load balancer configurations need to be updated constantly.

Not to mention that the location of the environment itself, whether hosted by a cloud provider or self-hosted, makes a massive difference in how load balancing is done.

In this article, I want to highlight the differences between environments and how to tackle them in the easiest way possible.

Layers of networking

Networking in itself involves a lot of different techniques, protocols, and more. In a nutshell, the OSI model separates networking into seven layers. Load balancing in general, and not only for containerized applications, usually takes place between Layer 4 (L4), the Transport Layer, and Layer 7 (L7), the Application Layer.
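
To make the distinction concrete, here is a minimal sketch (in Go, purely for illustration) of what L4 load balancing boils down to: forwarding raw TCP connections to a backend without ever inspecting the HTTP payload. The listen and backend addresses are placeholders.

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// L4 (transport layer): we only see TCP connections, never HTTP semantics.
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(client net.Conn) {
			defer client.Close()
			// Placeholder backend address; in practice this must be a
			// stable, reachable IP/port, which is exactly what ephemeral
			// containers struggle to provide.
			backend, err := net.Dial("tcp", "10.0.0.5:80")
			if err != nil {
				log.Print(err)
				return
			}
			defer backend.Close()
			// Copy bytes blindly in both directions; no L7 awareness.
			go io.Copy(backend, client)
			io.Copy(client, backend)
		}(client)
	}
}
```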

In containerized environments, it's a bit more complicated. Containers are short-lived and usually don't provide a stable network address. However, load balancing on L4 through L7 requires that the target provide a network address (i.e., an IP address) that the load balancing components can use.

Several components can operate on these layers, each with its own strengths and weaknesses.

Load balancing components

To load balance traffic successfully in containerized environments, multiple components are involved. The two primary components are the load balancer and the reverse proxy, and you need at least one of them.

Load balancer

A load balancer takes incoming requests from the internet (typically on either L4 or L7) and forwards them to a given target. A load balancer acting on L4 is called a network load balancer (for example, the AWS NLB), while a load balancer acting on L7 is called an application load balancer (the ALB on AWS). A load balancer distributes traffic to its specified targets based on a configurable algorithm, such as round robin.
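
For illustration, here is a minimal Go sketch of the round-robin algorithm mentioned above. The backend addresses are placeholders, and a real load balancer would also handle health checks and failures.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Placeholder backend targets; a real load balancer would learn these
	// from configuration or service discovery.
	targets := []*url.URL{
		mustParse("http://10.0.0.5:80"),
		mustParse("http://10.0.0.6:80"),
		mustParse("http://10.0.0.7:80"),
	}
	var counter uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Round robin: pick the next target on every request.
			t := targets[atomic.AddUint64(&counter, 1)%uint64(len(targets))]
			req.URL.Scheme = t.Scheme
			req.URL.Host = t.Host
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}
```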

Reverse proxy

A reverse proxy builds on top of a load balancer. While a reverse proxy has similar functions to a load balancer, meaning it also distributes traffic to targets, it typically operates on Layer 7 only. On top of load balancing on L7, a reverse proxy can also enable functions like caching, because it sits between the entry point of an incoming request and the actual applications. Since containerized applications are typically accessed over L7 protocols such as HTTP, having a reverse proxy as part of your infrastructure is crucial to load balancing traffic successfully.
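
As a rough sketch of what operating on L7 means in practice, the following Go snippet routes requests based on the HTTP Host header, information that is invisible to an L4 load balancer. The hostnames and backend addresses are made up; a cloud native proxy like Traefik builds and updates such a routing table automatically.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical mapping of hostnames to backend services.
	routes := map[string]*url.URL{
		"app.example.com": {Scheme: "http", Host: "10.0.0.5:80"},
		"api.example.com": {Scheme: "http", Host: "10.0.0.6:80"},
	}
	proxies := make(map[string]*httputil.ReverseProxy, len(routes))
	for host, target := range routes {
		proxies[host] = httputil.NewSingleHostReverseProxy(target)
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// L7 decision: route on the HTTP Host header, something an L4
		// load balancer never sees.
		proxy, ok := proxies[r.Host]
		if !ok {
			http.Error(w, "unknown host", http.StatusNotFound)
			return
		}
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}
```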

Why the location matters

To load balance traffic to your containerized environment, you need to plan your entire network architecture carefully. As load balancing means getting traffic into your cluster, things like security matter greatly. Before making any decisions, consider the following questions:

  • Which ports do I actually need to open to the outside world?
  • Sure, the default ports (80 for HTTP and 443 for HTTPS) are the easy choice. But which applications will also need to be accessible and thus might require special ports?
  • Does my firewall actually allow me to open those ports?
  • Can I protect them with source IP ranges? (See the sketch after this list.)
  • How is the internal cluster network set up?

The answers to these questions will point you in the right direction for defining the requirements of your architecture and successfully load balancing traffic in your containerized environment. Depending on where your environment is located, answering them can be a walk in the park, or it can turn into a terrible nightmare real fast!
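
To illustrate the source IP question from the list above, here is a minimal Go sketch of an allowlist check on incoming requests. The CIDR range is a placeholder (a documentation range), and production setups would usually enforce this at the firewall or the external load balancer instead.

```go
package main

import (
	"log"
	"net"
	"net/http"
)

// allowSourceRanges wraps a handler and rejects requests whose source IP
// is outside the given CIDR ranges.
func allowSourceRanges(next http.Handler, cidrs ...string) http.Handler {
	var nets []*net.IPNet
	for _, c := range cidrs {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			log.Fatalf("bad CIDR %q: %v", c, err)
		}
		nets = append(nets, n)
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		ip := net.ParseIP(host)
		for _, n := range nets {
			if ip != nil && n.Contains(ip) {
				next.ServeHTTP(w, r)
				return
			}
		}
		http.Error(w, "forbidden", http.StatusForbidden)
	})
}

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	// Only the placeholder range below may reach this handler.
	log.Fatal(http.ListenAndServe(":8080", allowSourceRanges(hello, "203.0.113.0/24")))
}
```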

Public cloud or on-prem?

Typically, cloud providers have a lot of existing (networking) technologies in place that can be used efficiently and effectively. Utilizing this existing technology can make things simpler for the end user, since some components are abstracted away. An excellent example is an external load balancer provisioned directly by the cloud provider. For a variety of reasons already discussed (opened ports, firewall rules, etc.), it's sometimes necessary not to expose the reverse proxy inside a cluster directly to the outside world, but rather to add another layer of abstraction above it.

On the other hand, if you are running your environment on-premises or in a private cloud, for example, little to none of that automation exists, and end users have to manage more moving pieces themselves.

How does cloud native technology help?

Containers are ephemeral by nature: they can be shut down from one second to the next and brought back at a different location a couple of seconds later. Because of this, and in conjunction with the complexity of networking in general, there are a lot of challenges to overcome. For example, the fast-changing endpoint addresses of containers would typically require constant manual reconfiguration of the load balancing components.

Cloud native technologies, like Traefik, were created to solve that issue. Traefik connects directly to the containerized environment and listens for changes as containers join, move, or get stopped. Traefik then reconfigures itself dynamically, reducing the manual configuration effort.
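
The underlying pattern can be sketched as follows: a watch loop (here simulated with a placeholder address and a timer, standing in for the orchestrator's real event API) swaps the load balancer's target list at runtime, so no manual reconfiguration or restart is needed. This is a simplified illustration of the idea, not Traefik's actual implementation.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
	"sync/atomic"
	"time"
)

// backends holds the current target list; the watcher swaps it whenever
// the container platform reports a change.
type backends struct {
	mu      sync.RWMutex
	targets []*url.URL
	next    uint64
}

func (b *backends) set(targets []*url.URL) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.targets = targets
}

func (b *backends) pick() *url.URL {
	b.mu.RLock()
	defer b.mu.RUnlock()
	if len(b.targets) == 0 {
		return nil
	}
	return b.targets[atomic.AddUint64(&b.next, 1)%uint64(len(b.targets))]
}

func main() {
	b := &backends{}

	// Stand-in for subscribing to the orchestrator's event stream
	// (container started/stopped); here it just refreshes a placeholder
	// target list periodically.
	go func() {
		for {
			u, err := url.Parse("http://10.0.0.5:80") // placeholder address
			if err == nil {
				b.set([]*url.URL{u})
			}
			time.Sleep(10 * time.Second)
		}
	}()

	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Always route to the most recently discovered targets.
			if t := b.pick(); t != nil {
				req.URL.Scheme = t.Scheme
				req.URL.Host = t.Host
			}
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```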

Each containerized environment, whether it is Kubernetes, Nomad, or Docker (Swarm), has its unique set of challenges. However, a cloud native reverse proxy can help connect applications in those environments to the outside world seamlessly.

If you want to dive deeper into the inner workings of successfully load balancing traffic in Kubernetes environments, I recommend reading our recent article Combining Ingress Controllers and External Load Balancers with Kubernetes.
