Bench Testing OpenID Connect Authentication in Traefik Enterprise

Traefik Enterprise is a flexible ingress and API gateway that solves a variety of networking challenges in the cloud native stack. One of the most common reasons users opt for the enterprise version of Traefik Proxy is to take advantage of the suite of authentication middlewares. Of these, the OpenID Connect (OIDC) middleware is an increasingly prevalent choice for customers.

OIDC is an open authentication protocol that allows clients to verify the identity of end users requesting access to endpoints from authorization servers. It is an integral part of securing distributed systems.

The OIDC middleware in Traefik Enterprise is powerful in part because of its flexibility. With it, users can delegate authentication to any Identity Provider (IdP) that follows the OIDC standard. In addition, the middleware can check the claims returned in the IdP's ID token to make sure users are authorized to access the requested service. By default, once a user is authenticated, session data is stored in a client-side cookie. While many users successfully centralize authentication and authorization with this approach, performance can depend heavily on the IdP. For organizations with heavy session data (lots of claim data, for example), the session cookie can grow too large and introduce unwanted latency on subsequent requests to the backend service.
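As a sketch of the claims check mentioned above, a middleware could restrict access to users whose ID token carries a particular group claim. This is illustrative only: the claim name `grp` and its expected value are assumptions, and the exact matcher syntax should be verified against the Traefik Enterprise OIDC documentation.

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: oidc-admins-only
  namespace: whoami
spec:
  plugin:
    oidcAuth:
      source: oidcSource
      redirectUrl: "/callback"
      session:
        secret: mysupersecret123
      # Only allow users whose ID token contains grp == "admin"
      # (hypothetical claim name for illustration)
      claims: Equals(`grp`, `admin`)
```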

To combat oversized session cookies, Traefik Enterprise v2.6 introduced the option to store OIDC session data in a key-value (KV) store instead of a cookie. This option offloads the session data to the KV store and keeps only a session identifier in a client-side cookie, resulting in a much more lightweight approach.

With this option introduced, our team performed some basic bench testing in a sandbox environment to compare request latency across three configurations: no authentication, cookie-based session storage, and Redis-based session storage. In this blog post, I will walk you through how to do so.

Setting up the test environment

I'll perform my testing with Traefik Enterprise running in Kubernetes, but the tests could be reproduced on other infrastructure types. To install Traefik Enterprise, I'm using the official Helm chart:

helm repo add traefikee https://helm.traefik.io/traefikee
helm repo update

kubectl create namespace traefikee

kubectl create secret generic $CLUSTER_NAME-license --from-literal=license="$TRAEFIKEE_LICENSE" -n traefikee

kubectl create configmap $CLUSTER_NAME-static-config --from-file=static.yaml -n traefikee

helm upgrade --install traefikee traefikee/traefikee \
  --namespace traefikee --create-namespace \
  --values values.yaml

For the Helm install command, the values.yaml file should contain the following:

cluster: $CLUSTER_NAME

image:
  name: traefik/traefikee
  tag: v2.6.1
  pullPolicy: IfNotPresent

controller:
  replicas: 1
  staticConfig:
    configMap:
      name: $CLUSTER_NAME-static-config
      key: static.yaml

proxy:
  replicas: 2
  serviceType: LoadBalancer
  ports:
    - name: web
      port: 80
    - name: websecure
      port: 443

The static configuration ConfigMap created with the kubectl command above looks like this once deployed:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: $CLUSTER_NAME-static-config
  namespace: traefikee
data:
  static.yaml: |
    entryPoints:
      web:
        address: ":80"
      websecure:
        address: ":443"

    certificatesResolvers:
      le:
        acme:
          email: $LE_EMAIL
          tlsChallenge: {}

    providers:
      kubernetesCRD: {}

    authSources:
      oidcSource:
        oidc:
          issuer: "https://keycloak.$DOMAIN/auth/realms/$REALM"
          clientID: "$CLIENT_ID"
          clientSecret: "$CLIENT_SECRET"

    sessionStorages:
      redisStore:
        redis:
          endpoints:
            - "redis-master.redis.svc.cluster.local:6379"
          username: "redis"
          password: "$REDIS_PASSWORD"

Next, I deploy a KV store for testing purposes. For this walkthrough, I'm using Redis, but any of the supported KV stores could also be used. I'm using Bitnami’s Redis Helm chart for installation:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm upgrade --install redis bitnami/redis -n redis --create-namespace
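The middleware will need credentials to reach this Redis instance. A quick way to retrieve the generated password and verify connectivity is sketched below; the secret name `redis` and key `redis-password` are assumptions based on the Bitnami chart's defaults, so verify them against your release notes.

```shell
# Retrieve the password generated by the Bitnami chart
export REDIS_PASSWORD=$(kubectl get secret redis -n redis \
  -o jsonpath='{.data.redis-password}' | base64 -d)

# Verify the server responds from inside the cluster
# (expect a PONG reply if it is reachable and the password is correct)
kubectl run redis-ping --rm -i --restart=Never -n redis \
  --image=bitnami/redis --env REDISCLI_AUTH="$REDIS_PASSWORD" \
  -- redis-cli -h redis-master ping
```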

Next, I deploy an IdP server into the Kubernetes cluster. For the purposes of this article, I'm using Keycloak, again with a Bitnami Helm chart:

helm upgrade --install keycloak bitnami/keycloak -n keycloak --create-namespace \
  --set service.type=ClusterIP --set proxyAddressForwarding=true
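To log in to the Keycloak admin console and configure a realm and client, the generated admin password can be pulled from the chart's secret. The secret name `keycloak` and key `admin-password` are assumptions based on the Bitnami chart's defaults; check the chart's post-install notes for your release.

```shell
# Retrieve the Keycloak admin password generated at install time
kubectl get secret keycloak -n keycloak \
  -o jsonpath='{.data.admin-password}' | base64 -d
```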

To expose my Keycloak server for the OIDC authentication flow, I use the following IngressRoute:

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: keycloak
  namespace: keycloak
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`keycloak.$DOMAIN`)
      services:
        - name: keycloak
          port: 80
  tls:
    certResolver: le

Note: using Keycloak here also requires configuring an authentication realm and a demo client after installation; that process is not detailed in this blog post.

Next, I need to deploy an example application to serve as my backend. For this, I'm using Traefik's whoami app, which includes a /bench endpoint that is useful for these sorts of tests.

---
apiVersion: v1
kind: Namespace
metadata:
  name: whoami

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  namespace: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        imagePullPolicy: IfNotPresent

---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: whoami
  labels:
    app: whoami
spec:
  type: ClusterIP
  ports:
  - port: 80
    name: whoami
  selector:
    app: whoami

Finally, I create the required IngressRoute and Middleware objects for my test cases:

---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: whoami
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`whoami.$DOMAIN`)
      services:
        - name: whoami
          port: 80
      middlewares:
        # - name: oidc-cookie
        # - name: oidc-redis
  tls:
    certResolver: le

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: oidc-cookie
  namespace: whoami
spec:
  plugin:
    oidcAuth:
      source: oidcSource
      redirectUrl: "/callback"
      session:
        secret: mysupersecret123

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: oidc-redis
  namespace: whoami
spec:
  plugin:
    oidcAuth:
      source: oidcSource
      redirectUrl: "/callback"
      session:
        secret: mysupersecret123
        store: redisStore

Testing — code and control variables

To perform the actual tests, I'm using vegeta, a popular load-testing tool. I set up three levels of the independent variable:

  • No OIDC Middleware (the control group)
  • OIDC Middleware storing session data in a cookie
  • OIDC Middleware storing session data in Redis

These three levels are reflected in the following commands:

# control
echo "GET https://whoami.$DOMAIN/bench" | vegeta attack -duration=$DURATION -rate=$RATE | tee results/direct.bin | vegeta report

# cookie
echo "GET https://whoami.$DOMAIN/bench" | vegeta attack -header "Cookie: oidcSource-session1=$OIDC_SESSION_1; oidcSource-session2=$OIDC_SESSION_2" -duration=$DURATION -rate=$RATE | tee results/cookie.bin | vegeta report

# redis
echo "GET https://whoami.$DOMAIN/bench" | vegeta attack -header "Cookie: oidcSource-session=$SESSION_ID" -duration=$DURATION -rate=$RATE | tee results/redis.bin | vegeta report
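Beyond the summary report, vegeta can break the saved results down further. For example, a latency histogram per run and a combined HTML plot make the three cases easier to compare side by side (the bucket boundaries here are arbitrary choices):

```shell
# Latency histogram for each saved run
for f in results/direct.bin results/cookie.bin results/redis.bin; do
  echo "== $f =="
  vegeta report -type='hist[0,5ms,10ms,25ms,50ms,100ms]' < "$f"
done

# Interactive latency-over-time chart combining all three runs
vegeta plot results/*.bin > results/plot.html
```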

For each of the cases using the OIDC authentication middleware, I first need to navigate to my application URL in a browser to authenticate. After successful authentication, the cookie header data can be retrieved using browser developer tools and then used from the command line.
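For example, the cookie header for the cookie-storage case can be assembled from the copied values before the run. The values below are placeholders for the session chunks copied from developer tools:

```shell
# Placeholders -- replace with the values copied from your browser's dev tools
OIDC_SESSION_1="chunk1"
OIDC_SESSION_2="chunk2"

# Assemble the header exactly as vegeta will send it
COOKIE_HEADER="Cookie: oidcSource-session1=$OIDC_SESSION_1; oidcSource-session2=$OIDC_SESSION_2"
echo "$COOKIE_HEADER"

# Sanity-check a single authenticated request before starting the load test:
# curl -s -o /dev/null -w "%{http_code}\n" -H "$COOKIE_HEADER" "https://whoami.$DOMAIN/bench"
```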

To ensure consistency, the following control variables should be uniform across all three levels:

  • The request rate ($RATE) and test duration ($DURATION) passed to vegeta
  • The target endpoint (the /bench endpoint of the whoami service)
  • The underlying infrastructure: cluster, node sizing, and replica counts

Results and suggested further testing

Before discussing results, an important caveat: performance bench testing like this depends heavily on a number of variables, including infrastructure type, node sizing, hosting location, IdP, and backend application. Thus, the method discussed here should be the primary takeaway, not necessarily the specific results from this testing.

With that disclaimer out of the way: in the testing presented here, the control group without any authentication unsurprisingly had the lowest latency. Storing session data in Redis generally had the second-lowest latency, followed by storing session data in a cookie. It's likely that the gap between the two storage options widens as claim data (and thus cookie size) grows.

To expand testing, it would be interesting to vary the rate and duration across rounds of testing to see how each approach scales under increased load. Additionally, in this round of testing I assumed default resource requests/limits for each component; performance could change with tuning of the various tools involved.

Finally, I'd encourage you to try to replicate these tests within your own environment to get an individualized perspective on the differences between the options.


Interested in using Traefik Enterprise within your specific network setup? Try Traefik Enterprise free for 30 days, or reach out to schedule a conversation with the Traefik Labs team.

