Blog
November 30, 2025

The Hub-and-Spoke Ingress Pattern: Unifying EKS, ECS, and EC2 at Scale

Diagram of Traefik Hub unifying ingress operations across EC2, ECS, and EKS environments.

Large-scale AWS architectures rarely maintain homogeneity over time. They evolve through eras.

A service might begin life on Amazon EC2 because the team needed full control over the OS kernel. Years later, a new initiative embraces Amazon EKS for microservices. Meanwhile, other teams deploy onto Amazon ECS or AWS Fargate because the bursty, event-driven nature of their workloads demands it.

Individually, these are sound technical decisions. EC2 offers stability; EKS offers orchestration; ECS offers simplicity. The problem does not lie with the platforms themselves, but with how they are exposed to the outside world.

As these compute "islands" grow, they create a fractured edge. EKS uses Ingress controllers; EC2 relies on ALBs/NLBs with custom proxy logic; ECS uses ALBs mapped 1-to-1 with services. The result is operational incoherence: inconsistent authentication, fragmented routing rules, and a nightmare for anyone trying to debug a request that spans multiple environments.

In our latest technical AWS builders-style ebook, Unifying Ingress Across Distributed AWS Compute Environments, we detail an architectural pattern that solves this: the Hub-and-Spoke Ingress Model.

Here is a deep dive into how it works and how to implement it using the Traefik AWS Elastic Provider.

The Anatomy of Fragmentation

Organizations do not set out to build fragmented systems; fragmentation is a side effect of growth.

When you view these platforms from the inside, they make sense. But from the application edge—where users and clients interact with your system—the cracks begin to show.

  • Identity Fragmentation: EKS might validate JWTs via middleware, while EC2 relies on a totally different auth proxy.
  • Routing Complexity: Migrating a service from EC2 to ECS often requires DNS cutovers because there is no single routing plane that spans both.
  • Security Risk: To let a central ingress talk to backend services across VPCs, teams often widen Security Group rules or create permissive IAM roles, expanding the blast radius.

To solve this, we need an architecture that can see the entire system from the outside while interacting with each environment from the inside—without breaking isolation boundaries.

The Hub-and-Spoke Architecture

The solution is to decouple the global entry point from the local service discovery. We call this the Hub-and-Spoke model.

The Unified Ingress as the Hub

For the hub, you need a "Unified Ingress," a Traefik instance running on your primary compute platform (often EKS). It is responsible for high-level concerns: terminating TLS, validating identity (OIDC/JWT), applying global rate limits, and handling the routing logic.

The Compute Spokes

The spokes in our Hub-and-Spoke Model are lightweight Traefik instances deployed in remote environments—on EC2 instances, in ECS clusters, or in other EKS clusters.

Crucially, the Unified Ingress does not query the AWS API to find services. Instead, it polls the spokes. Each spoke inspects its local environment (via a Docker socket, the ECS API, or local tags) and exposes a sanitized list of available services via an internal API endpoint.

This separation of concerns yields a modular ingress fabric:

  1. The spokes are authoritative for their own environments.
  2. The Unified Ingress hub is authoritative for routing.
  3. Neither requires deep knowledge of the other's internal workings.
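The poll-and-merge cycle above can be sketched in a few lines. This is an illustrative model, not the provider's actual API: the spoke functions, hostnames, and backend addresses are all hypothetical, and in practice each poll would be an HTTPS call to the spoke's internal endpoint.

```python
# Minimal model of the hub polling spokes and merging their sanitized
# service lists into one routing table keyed by hostname.

def ec2_spoke():
    # Stands in for an HTTPS GET against the EC2 spoke's internal API.
    return [{"host": "legacy.example.com", "backend": "10.0.1.12:8080"}]

def ecs_spoke():
    # Stands in for an HTTPS GET against the ECS spoke's internal API.
    return [{"host": "batch.example.com", "backend": "10.0.2.40:9000"}]

def build_routing_table(spokes):
    """Poll every spoke and merge the results into one routing table."""
    table = {}
    for poll in spokes:
        for svc in poll():
            table[svc["host"]] = svc["backend"]
    return table

routes = build_routing_table([ec2_spoke, ecs_spoke])
```

Note that the hub never needs to know *how* a spoke discovered its services; it only consumes the sanitized list each spoke chooses to expose.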

Why Polling? (The Security Argument)

In modern cloud-native development, "polling" is often considered a dirty word compared to event-driven architectures. However, in a multi-account, multi-VPC AWS environment, polling is actually the superior architectural choice for three reasons.

1. Eliminating Cross-Account IAM Complexity

If you use a centralized ingress controller that discovers services by querying the AWS API directly, that controller needs "God Mode" permissions. It needs to AssumeRole into every other AWS account to list instances and tasks. This creates a massive security risk—if the Ingress is compromised, the attacker can map your entire cloud estate.

With the Hub-and-Spoke Model, the Unified Ingress never touches AWS APIs outside its own account. It simply sends an HTTP request to the Spoke. The Spoke needs local permissions, but the Unified Ingress needs none. This aligns strictly with least-privilege principles.

2. Predictable Failure Modes

In distributed systems, simple failure modes are valuable. If a spoke goes down, the Unified Ingress stops polling it, times out, and removes those routes. When the spoke recovers, the routes return on the next interval. There is no "stale state" hidden in an event queue or a stream processor that stopped silently.
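The eviction behavior can be sketched as a simple reconciliation step (names are hypothetical; the real provider handles intervals and timeouts for you):

```python
# A spoke that fails to answer within the poll timeout has its routes
# evicted; when it answers again, the routes return on the next cycle.

class Hub:
    def __init__(self):
        self.routes = {}  # spoke name -> list of hostnames

    def reconcile(self, spoke_name, poll):
        try:
            # In practice: an HTTPS GET with a strict timeout.
            self.routes[spoke_name] = poll()
        except TimeoutError:
            # Predictable failure mode: drop the spoke's routes entirely.
            self.routes.pop(spoke_name, None)

def healthy():
    return ["batch.example.com"]

def down():
    raise TimeoutError("spoke did not respond")

hub = Hub()
hub.reconcile("ecs-spoke", healthy)  # routes present
hub.reconcile("ecs-spoke", down)     # routes evicted
hub.reconcile("ecs-spoke", healthy)  # routes restored next interval
```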

3. Tight Network Boundaries

Event-driven push models often require bidirectional connectivity. Polling is unidirectional. The Unified Ingress calls the spoke. The spoke requires no inbound rules beyond a single HTTPS listener on a specific port, accessible only from the Unified Ingress’s Security Group.

Solving the Identity Crisis

One of the most painful aspects of heterogeneous infrastructure is unifying authentication.

In a fragmented system, EKS might handle auth via an Ingress Controller, while legacy EC2 apps handle it in code. This makes it nearly impossible to enforce policies like "Only users with the admin claim in their JWT can access the /admin path on any service."

Because all external traffic flows through the Ingress, authentication becomes centralized by design.

  1. The Unified Ingress receives the request.
  2. The Identity Middleware validates the JWT against your OIDC provider (Cognito, Okta, etc.).
  3. The Unified Ingress extracts claims (Group, Tenant, Role).
  4. The Unified Ingress uses those claims to make routing decisions before forwarding the request to the target EKS, ECS, or EC2 backend.
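Steps 3 and 4 amount to a small decision function. The sketch below assumes the JWT has already been validated in step 2; the claim names (`groups`, `tenant`) and spoke names are illustrative, not a Traefik API:

```python
# Claims-based routing after JWT validation: enforce the "admin claim
# for /admin" policy once, centrally, then pin each tenant to a spoke.

def route_by_claims(path, claims):
    """Pick a backend spoke from already-validated JWT claims."""
    if path.startswith("/admin") and "admin" not in claims.get("groups", []):
        return None  # reject before any backend ever sees the request
    tenant = claims.get("tenant", "default")
    return {"acme": "ecs-spoke", "default": "eks-spoke"}[tenant]

assert route_by_claims("/admin", {"groups": []}) is None
assert route_by_claims("/api", {"tenant": "acme"}) == "ecs-spoke"
```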

This transforms identity from a platform-specific headache into a consistent network primitive.

Cross-VPC Isolation

Many AWS environments segment workloads across multiple VPCs for compliance or to reduce the blast radius. Without a unified ingress, teams often resort to complex VPC Peering meshes or Transit Gateways with overly permissive security groups to allow east-west traffic.

The Hub-and-Spoke Model respects AWS’ "segmentation first" philosophy.

The Unified Ingress acts as the only cross-VPC touchpoint. Workloads in VPC A (ECS) never talk directly to Workloads in VPC B (EC2). They don't even need to know that the others exist. Security groups open only a narrow set of ports from the Unified Ingress’ CIDR range to the spoke’s Traefik instance.

This allows you to maintain strict network isolation while presenting a unified API surface to the internet.
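The narrow rule described above can be expressed by referencing the hub's security group rather than a CIDR range, so the rule follows the hub even as its IPs change. The group IDs and port below are placeholders; the payload shape matches what boto3's `authorize_security_group_ingress` accepts.

```python
# Build the single inbound rule a spoke needs: its Traefik port,
# reachable only from the Unified Ingress hub's security group.

def spoke_ingress_rule(hub_sg_id, spoke_port=8443):
    return {
        "IpProtocol": "tcp",
        "FromPort": spoke_port,
        "ToPort": spoke_port,
        # SG-to-SG reference instead of a CIDR block.
        "UserIdGroupPairs": [{"GroupId": hub_sg_id}],
    }

rule = spoke_ingress_rule("sg-0hub1234")
# Applied to the spoke's SG, e.g.:
# ec2.authorize_security_group_ingress(GroupId="sg-0spoke5678",
#                                      IpPermissions=[rule])
```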

The Traefik AWS Elastic Provider

While the Hub-and-Spoke model is an architectural pattern, implementing it from scratch requires significant engineering: you need to build the spokes, standardize the metadata format, and implement the reconciliation logic in the Unified Ingress.

This is why we built the Traefik AWS Elastic Provider.

It is the concrete realization of this architecture, operationalizing the model by providing:

  • Unified Discovery Fabric: Spokes automatically enumerate local services (EC2 tags, ECS tasks, Kubernetes services) and normalize the data.
  • Resilience Logic: The provider handles the polling intervals, timeouts, and route eviction logic automatically.
  • Incremental Adoption: You can add a spoke to one ECS cluster today without touching your EKS or EC2 environments. The architecture evolves as you do.
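The normalization step in the discovery fabric can be pictured as each spoke mapping its platform-specific metadata into one common record. The tag key, field names, and schema below are hypothetical, chosen only to show the shape of the idea:

```python
# Each spoke translates its local metadata into a shared service record
# so the hub never has to understand EC2, ECS, or Kubernetes specifics.

def from_ec2_tags(instance):
    # Assumes a convention of tagging instances with the service name.
    return {"name": instance["Tags"]["traefik.service"],
            "address": instance["PrivateIpAddress"]}

def from_ecs_task(task):
    return {"name": task["serviceName"],
            "address": task["privateIpv4Address"]}

svc = from_ec2_tags({"Tags": {"traefik.service": "billing"},
                     "PrivateIpAddress": "10.0.1.12"})
```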

When to Use This Pattern

This approach is not for everyone. If you are running a single EKS cluster in a single VPC, the AWS Load Balancer Controller is likely sufficient.

However, the Hub-and-Spoke Model offers substantial operational leverage for environments that exhibit:

  • Multiple Compute Platforms: A mix of EC2, ECS, and EKS.
  • Multi-VPC/Multi-Account: Strict segmentation requirements.
  • Migration Projects: Moving monoliths to microservices where traffic must be shifted gradually between platforms based on identity or weights.
  • High Governance: Environments requiring centralized audit trails for ingress traffic.

Conclusion: Consistency at the Edge

AWS will continue to evolve. New compute services will launch; existing ones will change. If your ingress strategy is tightly coupled to the underlying compute platform (e.g., "We use ALBs because we use ECS"), you will always be playing catch-up.

The Hub-and-Spoke architecture offers a principled path forward: a consistent edge atop an inherently diverse landscape. It allows you to choose the right compute tool for the job—EC2 for stateful legacy apps, ECS for batch jobs, EKS for microservices—without fragmenting the user experience.

It enables organizations to retain the advantages of heterogeneity while presenting a unified, reliable, and secure face to the world.

Read the Builders Guide

We have compiled a detailed AWS builders-style architectural guide including sequence diagrams for authentication flows, cross-VPC security group configurations, and failure recovery scenarios. Grab your free copy of Unifying Ingress Across Distributed AWS Compute Environments.

About the Author

Zaid Albirawi is a Principal Solutions Architect at Traefik Labs. He spent 7 years as a software developer before moving into API-focused solutions architecture, a role he’s held for the last 5 years. He loves to code and enjoys tackling problems.
