From Zero to Hero: Getting Started with k0s and Traefik


K0s is a new Kubernetes distribution from Mirantis. It's similar to Rancher Labs' K3s, yet it ships with only the bare minimum of extensions. This gives users the flexibility to customize it to their needs by defining their own ingress, storage, and other controllers as extensions in the cluster manifest, which k0s applies while bootstrapping the cluster.

In the examples below, I’ll guide you through standing up a functioning Kubernetes cluster by:

  1. Installing k0s on a clean Linux VM
  2. Configuring Traefik and MetalLB as an extension
  3. Starting k0s
  4. Deploying the Traefik Dashboard IngressRoute and an example service

Step 1

Before we start, you should plan to do this on a clean install of Linux, probably in a VM. You will be running k0s as a server/worker, and the worker installs components into the /var/lib filesystem as root (so root access is a requirement). My understanding is there are plans to allow non-root workers in the future. Hopefully, in addition to non-root, the k0s binary will allow worker installations in a configurable location.

Note: Cleanly shutting down and wiping the cluster is not a feature yet in the k0s binary. For now, rebooting the system and wiping /var/lib/k0s is the easiest option.
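Until that lands, the reset can be scripted; here is a minimal sketch (the reset_k0s helper and the K0S_DATA_DIR variable are illustrative conveniences, not part of k0s, which always uses /var/lib/k0s at this point):

```shell
#!/bin/sh
# Illustrative reset helper -- not a k0s feature.
K0S_DATA_DIR="${K0S_DATA_DIR:-/var/lib/k0s}"

reset_k0s() {
    # stop any backgrounded k0s server process (ignore errors if none)
    pkill -f 'k0s server' 2>/dev/null || true
    # wipe all cluster state so the next bootstrap starts clean
    rm -rf "$K0S_DATA_DIR"
}
# usage: reset_k0s
```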

Once you have a clean Linux VM (I’m using Ubuntu 20.04.1), you’ll want to install the Helm and kubectl binaries.

curl -O https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
tar xvzf helm-v3.4.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
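If you want to confirm that both binaries landed on your PATH before moving on, a small helper like this works (check_bins is just an illustrative convenience, not part of either tool):

```shell
#!/bin/sh
# Prints the name of any command that is not on the PATH.
check_bins() {
    for bin in "$@"; do
        command -v "$bin" >/dev/null 2>&1 || echo "missing: $bin"
    done
}
# usage: check_bins helm kubectl   (no output means both were found)
```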

Once those are installed, install the k0s binary, create the working directory for k0s, and create a default config.

Note: The installer and running k0s itself both require root.
# make sure you're running as root
curl -sSLf https://get.k0s.sh | sh
# create the working directory and set the permissions
mkdir -p /var/lib/k0s && chmod 755 /var/lib/k0s
# create the default config
k0s default-config > /var/lib/k0s/k0s.yaml

Step 2

In this step, you’ll configure Traefik and MetalLB as extensions that will be installed during the cluster's bootstrap. Traefik will function as an ingress controller and MetalLB will allow you to access services from a logical IP address deployed as a service load balancer. You will want to have a small range of IP addresses that are addressable on your network, preferably outside the range of your DHCP server.

Modify the newly created config file at /var/lib/k0s/k0s.yaml:

apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  extensions:
    helm:
      repositories:
        - name: traefik
          url: https://helm.traefik.io/traefik
        - name: bitnami
          url: https://charts.bitnami.com/bitnami
      charts:
        - name: traefik
          chartname: traefik/traefik
          version: "9.11.0"
          namespace: default
        - name: metallb
          chartname: bitnami/metallb
          version: "1.0.1"
          namespace: default
          values: |2
            configInline:
              address-pools:
              - name: generic-cluster-pool
                protocol: layer2
                addresses:
                # example range; replace with a range addressable on your network
                - 192.168.0.5-192.168.0.10

Again, be sure to provide a range of IPs for MetalLB that are addressable on your network if you want to access the LoadBalancer and Ingress services from outside this machine.

Step 3

Now it's time to run k0s and let it automatically set up the server and worker, and deploy and configure Traefik and MetalLB:

cd /var/lib/k0s
k0s server --enable-worker </dev/null &>/dev/null &

After a minute or two, you should be able to access the cluster using the certificate generated by k0s, located in /var/lib/k0s/pki/admin.conf, and see that MetalLB was deployed along with the Traefik Ingress Controller.

root@k0s-host ➜ export KUBECONFIG=/var/lib/k0s/pki/admin.conf
root@k0s-host ➜ kubectl get all
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/metallb-1607085578-controller-864c9757f6-bpx6r   1/1     Running   0          81s
pod/metallb-1607085578-speaker-245c2                 1/1     Running   0          60s
pod/traefik-1607085579-77bbc57699-b2f2t              1/1     Running   0          81s

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
service/kubernetes           ClusterIP        <none>           443/TCP                      96s
service/traefik-1607085579   LoadBalancer   80:32153/TCP,443:30791/TCP   84s

NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/metallb-1607085578-speaker   1         1         1       1            1    87s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/metallb-1607085578-controller   1/1     1            1           87s
deployment.apps/traefik-1607085579              1/1     1            1           84s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/metallb-1607085578-controller-864c9757f6   1         1         1       81s
replicaset.apps/traefik-1607085579-77bbc57699              1         1         1       81s
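If you'd rather script the wait than re-run kubectl get all by hand, something like this sketch works (the default namespace and the Available condition follow from the chart settings used above):

```shell
#!/bin/sh
# Block until every deployment in the default namespace (Traefik and the
# MetalLB controller here) reports the Available condition.
wait_for_addons() {
    export KUBECONFIG=/var/lib/k0s/pki/admin.conf
    kubectl wait deployment --all \
        --namespace default \
        --for=condition=Available \
        --timeout=300s
}
# usage: wait_for_addons
```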

Take note of the IP address assigned to the Traefik Load Balancer here:

NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
service/traefik-1607085579   LoadBalancer   80:32153/TCP,443:30791/TCP   84s

You will need the EXTERNAL-IP later, when accessing Ingress resources on your cluster.
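Rather than reading the address off the table by eye, you can pull it with a jsonpath query; here is a sketch (the timestamped service name comes from the output above and will differ on your machine):

```shell
#!/bin/sh
# Print a LoadBalancer service's EXTERNAL-IP via jsonpath.
get_lb_ip() {
    # $1: service name, e.g. traefik-1607085579 from the output above
    kubectl get service "$1" \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
}
# usage: TRAEFIK_IP=$(get_lb_ip traefik-1607085579)
```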

Step 4

In this step, you will:

  • Deploy the Traefik dashboard
  • Deploy the sample “whoami” service

Now that you have a functional and addressable load balancer on your cluster, you can easily deploy the Traefik dashboard and access it from anywhere on your local network (provided that you configured MetalLB with an addressable range).

Create the Traefik Dashboard IngressRoute in a YAML file:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService

And deploy it:

root@k0s-host ➜ kubectl apply -f traefik-dashboard.yaml
ingressroute.traefik.containo.us/dashboard created

You can now access it from your browser by visiting http://<EXTERNAL-IP>/dashboard/, using the EXTERNAL-IP you noted earlier.


Great, now let’s deploy a simple “whoami” service.

Create the whoami Deployment, Service, and Kubernetes Ingress manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami-container
        image: containous/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  rules:
  - http:
      paths:
      - path: /whoami
        pathType: Exact
        backend:
          service:
            name: whoami-service
            port:
              number: 80

And now, deploy and test it…

root@k0s-host ➜ kubectl apply -f whoami.yaml
deployment.apps/whoami-deployment created
service/whoami-service created
ingress.networking.k8s.io/whoami-ingress created
# test the route
root@k0s-host ➜ curl http://<EXTERNAL-IP>/whoami
Hostname: whoami-deployment-85bfbd48f-7l77c
IP: ::1
IP: fe80::b049:f8ff:fe77:3e64
GET /whoami HTTP/1.1
User-Agent: curl/7.68.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-1607085579-77bbc57699-b2f2t


This post covered installing k0s and setting up a fully functional load balancer and ingress controller for use in your local environment. From here, you could use a tool such as ngrok to expose your load balancer to the world and set up Let’s Encrypt so you can provision your own SSL certificates.

The design of k0s as a single binary installer that allows modular customizability makes it a unique offering in the Kubernetes community. You can learn more about how to leverage Kubernetes Ingress with Traefik on our site. In addition, you can learn more about installing k0s on Mirantis' blog. While k0s is still relatively new to the scene, I hope this post gives you an idea of what it’s capable of and how you can start experimenting with your own customized Kubernetes setup.