Ingress NGINX Migration Guide (2 of 3): Traefik Lands, Ingress NGINX Stays Safe.

The riskiest moment in an Ingress NGINX migration is not the cutover. It is the stretch between "we have to migrate" and "we have actually started". Teams either freeze (because the project feels enormous) or jump straight to the cutover (because they want it over with). Both produce outages.
There is a much better way: install Traefik in your cluster as a second controller, with its own IngressClass, and let it sit there. Your existing Ingress NGINX keeps serving everything. No traffic moves yet. Once Traefik is ready and validated, the third post walks through moving your Ingress resources to it one by one.
This is the second post of a 3-part series on migrating off Ingress NGINX, published alongside the Traefik v3.7 GA:
- Audit: inventory your annotations and translate the NGINX ConfigMap (read our guide here).
- Install (this post): deploy Traefik with Helm next to your existing NGINX controller.
- Progressive migration: cut over route by route, with zero downtime.
This post focuses on the Traefik installation only. Moving your Ingress resources from Ingress NGINX to Traefik, one at a time, is the topic of the third post. Here, we get Traefik deployed, configured with the new IngressClass, and validated against a duplicated test Ingress, sitting next to Ingress NGINX and ready to take over when you decide.
If you have not run the audit yet, do that first. This post assumes you already know which of your Ingress resources fall in the Vanilla, Supported, Unsupported, and Invalid buckets, and that your global ConfigMap has been translated to Traefik's static configuration.
Why Install Side by Side, Instead of Replacing in Place
The instinct, especially under deadline pressure, is to uninstall Ingress NGINX and install Traefik. Same IngressClass, same Service, same LoadBalancer IP. Done by lunchtime.
Don't.
When the new controller takes over the same traffic path, every misconfigured annotation, every forgotten ConfigMap setting, every silently different default becomes a production incident. The blast radius is your entire cluster. The rollback is a re-installation of the old controller, which takes time you do not have when 80 services are returning 503.
The side-by-side install eliminates this class of failure. You deploy Traefik in its own namespace, on a new, dedicated IngressClass (traefik-nginx). Your existing Ingress resources keep their nginx IngressClass. They keep flowing through Ingress NGINX, untouched. Traefik is in the cluster, configured, and idle until you explicitly point an Ingress at it.
More importantly, this separation is what makes a progressive migration possible. Once both controllers are running, you can move your Ingress resources from Ingress NGINX to Traefik one at a time: edit a single Ingress, change its ingressClassName from nginx to traefik-nginx, validate, move on to the next. No big-bang. No frozen weekend. Each route is its own atomic decision, with its own rollback (just flip the IngressClass back). The Traefik migration guide documents this as a blue-green pattern, and it is exactly what the third post will walk through. This post focuses on getting Traefik installed and ready for it.
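The flip itself is a one-field change. As a sketch with placeholder names (the namespace and Ingress name below are illustrative), the cutover and its rollback look like this:

```shell
# Cutover: point one Ingress at Traefik (names are illustrative).
kubectl -n my-app patch ingress my-ingress --type=merge \
  -p '{"spec":{"ingressClassName":"traefik-nginx"}}'

# Rollback: flip the same field back to Ingress NGINX.
kubectl -n my-app patch ingress my-ingress --type=merge \
  -p '{"spec":{"ingressClassName":"nginx"}}'
```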
Install Traefik With the Official Helm Chart
The traefik/traefik-helm-chart is the canonical install path. Start by adding the repo:
helm repo add --force-update traefik https://traefik.github.io/charts
Create a dedicated namespace for the new controller. Keeping it separate from ingress-nginx is good hygiene: it makes RBAC scoping, monitoring, and uninstall trivial later.
kubectl create namespace traefik
Now the values file. The key piece is providers.kubernetesIngressNGINX.enabled: true, with the new IngressClass and controllerClass that Traefik will own:
# traefik-values.yaml
providers:
  # Read Ingress resources using the new traefik-nginx IngressClass.
  # Existing nginx-class Ingresses are NOT touched: they keep flowing
  # through ingress-nginx until you explicitly migrate them (the third post).
  kubernetesIngressNGINX:
    enabled: true
    # New IngressClass that Traefik owns, distinct from ingress-nginx's "nginx".
    ingressClass: "traefik-nginx"
    ingressClassByName: true
    # Traefik's own controller class for the new IngressClass.
    controllerClass: "traefik.io/ingress-controller"
    # Accept snippet annotations when migrated Ingresses use them.
    allowSnippetAnnotations: true
  # Standard Kubernetes Ingress provider disabled: we only consume the new class.
  kubernetesIngress:
    enabled: false

# Let the chart create the IngressClass that Traefik watches.
# This is the same IngressClass that the kubernetesIngressNGINX provider
# above is configured to consume. Not the default class: we do not want any
# Ingress without an explicit class to land on Traefik.
ingressClass:
  enabled: true
  isDefaultClass: false
  name: "traefik-nginx"

# No Service for Traefik yet. The third post will repoint your existing
# ingress-nginx Service (which already has a LoadBalancer IP) to Traefik's
# pod selector. No need to provision anything extra to validate: a
# port-forward to the pod is enough.
service:
  enabled: false

# Dashboard for validation, reached via port-forward during install.
ingressRoute:
  dashboard:
    enabled: true
Install with:
helm install traefik traefik/traefik \
  --namespace traefik \
  --values traefik-values.yaml
That's the entire install. One Helm command, one values file. The chart deploys the controller and creates the traefik-nginx IngressClass in the same operation. The new controller is running, configured to watch a class that nothing in your cluster uses yet, with no LoadBalancer of its own. It is in the cluster, but invisible to production traffic.
Verify the pods are up:
kubectl -n traefik get pods
You should see one (or more) Traefik pod in Running state. No Service in the namespace. That is by design: the third post will reuse the existing Ingress NGINX Service by editing its selector.
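You can also confirm the chart created the IngressClass with the controller string the kubernetesIngressNGINX provider will match on:

```shell
# Should print: traefik.io/ingress-controller
kubectl get ingressclass traefik-nginx -o jsonpath='{.spec.controller}'
```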
Translate the ConfigMap Into Static Configuration
If your audit identified ConfigMap keys that need to come over (timeouts, body sizes, default backend, allow-cross-namespace, etc.), the corresponding Traefik options live under providers.kubernetesIngressNGINX in the same values file. The mapping is in the migration guide.
A typical example:
providers:
  kubernetesIngressNGINX:
    enabled: true
    ingressClass: "traefik-nginx"
    ingressClassByName: true
    controllerClass: "traefik.io/ingress-controller"
    allowSnippetAnnotations: true
    # Translated from the ingress-nginx ConfigMap
    proxyBodySize: 16777216   # was proxy-body-size: "16m"
    proxyConnectTimeout: 30   # was proxy-connect-timeout: "30"
    proxyBuffering: true      # was proxy-buffering: "on"
    allowCrossNamespaceResources: true
    globalAllowedResponseHeaders:
      - X-Frame-Options
      - X-Custom-Header
Apply with helm upgrade traefik traefik/traefik -n traefik -f traefik-values.yaml.
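To confirm the translated settings actually reached the controller, inspect the rendered container arguments; the chart surfaces provider options there as CLI flags. A quick sanity check, assuming the default deployment name from the install above:

```shell
# List the Traefik flags derived from your values file and filter
# for the NGINX-provider options translated above.
kubectl -n traefik get deploy traefik \
  -o jsonpath='{.spec.template.spec.containers[0].args}' \
  | tr ',' '\n' | grep -i kubernetesingressnginx
```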
Cross-Cutting Keys: TLS Options as a Concrete Example
Some ConfigMap keys do not belong under the provider at all. The clearest case is TLS policy: ssl-protocols, ssl-ciphers, ssl-prefer-server-ciphers. In Ingress NGINX these are single cluster-wide knobs in the ConfigMap. In Traefik, they live in a TLSOption CRD that can be cluster-wide (named default) or per-Ingress.
Say your Ingress NGINX ConfigMap has:
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
  ssl-prefer-server-ciphers: "true"
The Traefik equivalent is a single TLSOption named default, which Traefik applies cluster-wide to every route that does not specify its own:
# default-tls-options.yaml
apiVersion: traefik.io/v1alpha1
kind: TLSOption
metadata:
  name: default
  namespace: traefik
spec:
  minVersion: VersionTLS12
  cipherSuites:
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  preferServerCipherSuites: true
kubectl apply -f default-tls-options.yaml
Two things to know when you do this translation:
- Cipher names. NGINX uses OpenSSL names (ECDHE-ECDSA-AES128-GCM-SHA256); Traefik uses Go's crypto/tls names (TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256). The mapping is one-to-one, but the strings are different. The TLS 1.3 cipher suites in Go are non-configurable, so they are omitted from the list.
- TLS versions. ssl-protocols: "TLSv1.2 TLSv1.3" becomes minVersion: VersionTLS12. Traefik will also accept TLS 1.3 unless you set maxVersion. To explicitly cap at one version, set both minVersion and maxVersion to the same value.
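The renaming is mechanical enough to script. A minimal sketch covering only the ECDHE AES-GCM suites from the ConfigMap above; for other cipher families the pattern differs, so verify the output against Go's crypto/tls constants before trusting it:

```shell
# Convert an OpenSSL cipher name to its Go crypto/tls equivalent.
# Handles ECDHE-ECDSA/RSA AES-GCM suites only.
to_go_cipher() {
  echo "$1" \
    | sed -E 's/^(ECDHE-(ECDSA|RSA))-/\1_WITH_/' \
    | sed -E 's/AES(128|256)/AES_\1/' \
    | tr '-' '_' \
    | sed 's/^/TLS_/'
}

to_go_cipher "ECDHE-ECDSA-AES128-GCM-SHA256"
# TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
```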
The same pattern applies to the other cross-cutting keys: HSTS (hsts, hsts-max-age, hsts-include-subdomains) becomes a Headers middleware attached to an entryPoint, and PROXY protocol (use-proxy-protocol) becomes an entryPoint setting in the same traefik-values.yaml you used for install. The ConfigMap migration step of the guide lists the full mapping.
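For HSTS, the shape of that translation looks like the following Middleware. A sketch: the resource name and max-age value are illustrative, chosen to mirror typical hsts-max-age / hsts-include-subdomains settings.

```yaml
# hsts-headers.yaml — translated from hsts-max-age / hsts-include-subdomains
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: hsts-headers
  namespace: traefik
spec:
  headers:
    stsSeconds: 31536000        # was hsts-max-age: "31536000"
    stsIncludeSubdomains: true  # was hsts-include-subdomains: "true"
```

Attaching it to the HTTPS entryPoint in traefik-values.yaml makes it apply cluster-wide, mirroring the ConfigMap's global behavior.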
Validate With a Duplicated Ingress
Now the most important part: prove that Traefik would correctly serve your Ingresses before anyone reroutes traffic.
Because Traefik watches the new traefik-nginx IngressClass, it sees none of your existing production Ingresses. That is the safe default. To validate that Traefik handles your annotations correctly, pick one or two representative Ingresses from your audit and duplicate them into a test namespace with the new IngressClass. The original Ingresses stay on Ingress NGINX, untouched. The clones run on Traefik.
Pick something that exercises the annotations you care about, for example one with snippets, authentication, or rewrite rules. Then duplicate it like this:
# test/duplicate-whoami.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-prod-test
---
# Same Deployment + Service as in production, copied verbatim
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: app-prod-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: app-prod-test
spec:
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
---
# Same Ingress as in production, with two differences:
# 1. ingressClassName: nginx -> traefik-nginx
# 2. host gets a -test suffix so it does not collide
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  namespace: app-prod-test
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      add_header X-Custom-Header "from-snippet";
spec:
  ingressClassName: traefik-nginx
  rules:
    - host: whoami-test.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
kubectl apply -f test/duplicate-whoami.yaml
Now hit it through Traefik via port-forward:
kubectl -n traefik port-forward deploy/traefik 18000:80 &
curl -i -H "Host: whoami-test.example.com" http://localhost:18000/
You should see a 200 OK, the whoami response body, and the headers your annotations declared (in this example, X-Custom-Header: from-snippet). If the annotations land correctly here, they will land correctly in production once you move the real Ingress in the third post.
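A quick way to scrutinize a clone is to diff its response headers against the production original. The hostnames and the $NGINX_LB endpoint below are placeholders, and some differences (Date, Server) are expected and benign:

```shell
# Production original, served by ingress-nginx (set NGINX_LB to your LB address).
curl -s -D - -o /dev/null -H "Host: whoami.example.com" "http://$NGINX_LB/" \
  | sort > nginx-headers.txt

# Clone, served by Traefik through the port-forward opened above.
curl -s -D - -o /dev/null -H "Host: whoami-test.example.com" http://localhost:18000/ \
  | sort > traefik-headers.txt

diff nginx-headers.txt traefik-headers.txt
```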
Repeat for two or three of your higher-risk Ingresses. The audit told you which ones deserve scrutiny; this is the place to scrutinize them. Once you are confident, delete the test namespace (or keep it as a permanent staging surface for future migrations):
kubectl delete namespace app-prod-test
Open the dashboard to see what Traefik knows:
kubectl -n traefik port-forward deploy/traefik 9000:9000 &
xdg-open http://localhost:9000/dashboard/  # use `open` on macOS
In steady state, the dashboard will show only your test routes (since no production Ingress is in traefik-nginx yet). That is the whole point: the dashboard becomes a real-time witness of what you migrate, route by route, in the third post.
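The dashboard shares its port with Traefik's JSON API, which makes the same check scriptable (assuming the chart's default dashboard setup and the port-forward above):

```shell
# List the HTTP routers Traefik currently knows about. Before any
# migration, only the dashboard and your test routes should appear.
curl -s http://localhost:9000/api/http/routers | jq -r '.[].name'
```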
A Pre-Flight Checklist Before You Migrate
Before you move to the progressive migration (the third post), confirm:
- Traefik pods are healthy and running in the traefik namespace
- The traefik-nginx IngressClass exists, with controller traefik.io/ingress-controller
- Your test Ingress (or two) running with ingressClassName: traefik-nginx responds correctly through Traefik, with the expected annotations applied
- Your ConfigMap-derived settings (timeouts, body sizes, allowlists) are in your values file and visible in the Traefik container args
- The Traefik dashboard is reachable via kubectl port-forward
- No production Ingress has been switched to traefik-nginx yet. Everything that matters still flows through Ingress NGINX.
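The last box is the easiest to verify mechanically. This one-liner lists every Ingress already pointing at the new class; before migration it should return only your test clones, or nothing:

```shell
kubectl get ingress --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.ingressClassName=="traefik-nginx")]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'
```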
If every box is checked, you have done the safest thing possible: you have a fully functional Traefik controller, validated against duplicated copies of your real Ingresses, sitting in the cluster waiting to be used. Nothing has shipped to production yet.
What Comes Next
If you followed the steps above, everything is now in place for the migration. Traefik is installed, configured, and watching its own dedicated IngressClass. Your ConfigMap-derived settings are translated. Your TLS policy is in a TLSOption. You have curl-validated at least one duplicated Ingress and confirmed Traefik handles your annotations correctly. Production traffic still flows through Ingress NGINX, untouched.
That setup is not an accident. Every decision in this post (separate namespace, new IngressClass, no Service of its own, validation through a duplicate rather than the real Ingress, no DNS or LoadBalancer change yet) was made so that the actual migration in the third post is as boring and low-risk as possible. There is no big-bang to schedule. When you flip the IngressClass on your first real Ingress in the third post, you are doing something you have already proven works, with an instant rollback if anything is off.
The next post walks through that progressive migration: moving your Ingress resources from nginx to traefik-nginx one at a time, using a Traefik catch-all in front of Ingress NGINX to preserve zero-downtime guarantees, and a blue-green pattern that lets you roll back at any point with a single line change. With Traefik already installed and validated, the progressive migration stops being a project and becomes a sprint task.
Resources
- Helm chart: github.com/traefik/traefik-helm-chart
- NGINX Ingress provider reference: doc.traefik.io
- Migration guide: doc.traefik.io/traefik/migrate/nginx-to-traefik (and the ConfigMap migration section)
- Migration tool: github.com/traefik/ingress-nginx-migration



