k3s — Lightweight Kubernetes

k3s is a certified, production-ready Kubernetes distribution packaged as a single binary (~100 MB). It is optimized for VPS, edge, and IoT environments.

Why k3s

| Feature | Value |
| --- | --- |
| Single binary | No Docker daemon required |
| Built-in containerd | Reduced resource footprint |
| ARM + x86 support | Works on any VPS |
| Certified Kubernetes | 100% API compatible |
| Automatic TLS | Internal cluster PKI included |

Architecture

┌─────────────────────────────────┐    ┌─────────────────────────────────┐
│          Server (VPS 1)         │    │          Agent (VPS 2)          │
│                                 │    │                                 │
│  k3s server                     │    │  k3s agent                      │
│  ├─ kube-apiserver  :6443       │◄───│  ├─ kubelet                     │
│  ├─ etcd (embedded)             │    │  ├─ kube-proxy                  │
│  ├─ controller-manager          │    │  └─ containerd                  │
│  └─ scheduler                   │    │                                 │
│                                 │    │  Flannel VXLAN  :8472/udp       │
│  Traefik (ingress)  :80/:443    │    │  Kubelet API    :10250/tcp      │
│  cert-manager                   │    │                                 │
└─────────────────────────────────┘    └─────────────────────────────────┘
           │                                         │
           └─────────── Flannel VXLAN ───────────────┘
                        (pod overlay network)

Prerequisites

| Requirement | Notes |
| --- | --- |
| Ubuntu 22.04+ / Debian 12+ | Both nodes |
| 2 vCPU / 2 GB RAM (server) | 1 GB minimum for agent |
| Public IP on each VPS | Required for TLS SAN + UFW rules |
| SSH access as root or sudo user | Bootstrap uses `INITIAL_USER=root` |
| Port 6443 open on server | k3s API server |
| Ports 80, 443 open on server | HTTP + HTTPS traffic |

Install flags explained

The k3s server Ansible role (ansible/roles/k3s_server) installs k3s with these key flags:

```bash
# Flag rationale:
#   --disable=traefik            Traefik is managed via Helm for full control
#   --disable=servicelb          No built-in LB; use externalIPs instead
#   --node-ip                    Internal NIC IP (Flannel overlay)
#   --tls-san                    Public IP added to the API server cert SAN
#   --flannel-backend=vxlan      Stable VXLAN overlay (UDP 8472)
#   --protect-kernel-defaults    Enforces sysctl requirements
#   --secrets-encryption         Encrypts Kubernetes Secrets at rest
#   --write-kubeconfig-mode=600  Restricts kubeconfig permissions
# (Comments are listed above rather than inline: a "#" after a backslash
# continuation would terminate the command mid-pipeline.)
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION="${K3S_VERSION}" \
  K3S_TOKEN="${K3S_NODE_TOKEN}" \
  sh -s - server \
    --disable=traefik \
    --disable=servicelb \
    --node-ip="${NODE_IP}" \
    --advertise-address="${NODE_IP}" \
    --tls-san="${PUBLIC_IP}" \
    --flannel-backend=vxlan \
    --protect-kernel-defaults \
    --secrets-encryption \
    --write-kubeconfig-mode=600
```

`--tls-san` is critical: without the public IP in the TLS SAN, your local `kubectl` gets a certificate error when connecting remotely.
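You can confirm which names and IPs a certificate carries with openssl. A quick local demo (203.0.113.10 is a documentation placeholder, not a real server):

```bash
# Generate a throwaway cert with an IP SAN, then inspect it the same way
# you would check the live API server certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=k3s-demo" \
  -addext "subjectAltName=IP:203.0.113.10,DNS:localhost" \
  -keyout /tmp/k3s-demo.key -out /tmp/k3s-demo.crt
openssl x509 -in /tmp/k3s-demo.crt -noout -text | grep -A1 'Subject Alternative Name'
# Against a running server, the equivalent check would be:
#   openssl s_client -connect ${PUBLIC_IP}:6443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```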


Sysctl requirements

k3s requires specific kernel parameters. These are written to /etc/sysctl.d/99-z-k3s.conf:

```ini
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.ipv4.conf.all.forwarding        = 1
vm.panic_on_oom                     = 0
vm.overcommit_memory                = 1
kernel.panic                        = 10
kernel.panic_on_oops                = 1
```

The `99-z-` prefix ensures these values are applied after any hardening configs (e.g. `99-security.conf`), so `ip_forward=1` is the final value.
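The ordering is easy to confirm: `sysctl --system` reads `/etc/sysctl.d/*.conf` fragments in sorted filename order, and for any key set more than once the last file read wins. A quick illustration (`99-sysctl.conf` is a hypothetical third fragment):

```bash
# The last-sorting fragment wins for duplicated keys; 99-z-k3s.conf sorts
# after 99-security.conf, so its ip_forward=1 is the value that sticks.
printf '%s\n' 99-security.conf 99-sysctl.conf 99-z-k3s.conf | sort | tail -n 1
# → 99-z-k3s.conf
```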


Firewall rules (UFW)

Server node

| Port | Protocol | Purpose |
| --- | --- | --- |
| 80 | TCP | HTTP (Traefik + ACME HTTP-01 challenge) |
| 443 | TCP | HTTPS (Traefik TLS termination) |
| 6443 | TCP | Kubernetes API server |
| 10.42.0.0/16 | any | k3s pod CIDR (Flannel) |
| 10.43.0.0/16 | any | k3s service CIDR |
| 8472 (from AGENT_IP) | UDP | Flannel VXLAN tunnel |
| 10250 (from AGENT_IP) | TCP | kubelet API |

Agent node

| Port | Protocol | Purpose |
| --- | --- | --- |
| 8472 (from SERVER_IP) | UDP | Flannel VXLAN tunnel |
| 10250 (from SERVER_IP) | TCP | kubelet API |
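The server-node table translates to UFW commands along these lines. This is a sketch, not the actual Ansible role's rules: `AGENT_IP` is an illustrative variable, and the role's comments and ordering may differ.

```bash
# Server node firewall sketch (run as root; set AGENT_IP first).
ufw allow 80/tcp comment 'Traefik HTTP + ACME HTTP-01'
ufw allow 443/tcp comment 'Traefik HTTPS'
ufw allow 6443/tcp comment 'k3s API server'
ufw allow from 10.42.0.0/16 comment 'k3s pod CIDR (Flannel)'
ufw allow from 10.43.0.0/16 comment 'k3s service CIDR'
ufw allow from "${AGENT_IP}" to any port 8472 proto udp comment 'Flannel VXLAN'
ufw allow from "${AGENT_IP}" to any port 10250 proto tcp comment 'kubelet API'
```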

Installation workflow

1. Bootstrap server

```bash
make provision-server
```

This runs the Ansible `k3s-server.yml` playbook, which:

  1. Applies common setup (packages, kernel modules, sysctl, UFW)
  2. Installs k3s server with configured flags
  3. Waits for the node to be Ready
  4. Reads the node token (used to join agents)
  5. Fetches kubeconfig with public IP replaced
  6. Optionally sets up WireGuard VPN
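Step 3's readiness check boils down to polling `kubectl get nodes` until the STATUS column reads `Ready`. A sketch of the parsing, using a sample line rather than live output:

```bash
# STATUS is column 2 of `kubectl get nodes --no-headers`; sample line below.
sample='vps-1   Ready   control-plane,etcd,master   5m   v1.29.4+k3s1'
echo "$sample" | awk '{print $2}'
# → Ready
# On a live server the polling loop would look like:
#   until kubectl get nodes --no-headers | awk '{print $2}' | grep -qx Ready; do sleep 5; done
```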

2. Bootstrap agent

```bash
make provision-agents
```

This runs the Ansible `k3s-agent.yml` playbook, which:

  1. Applies common setup on agent nodes
  2. Installs k3s agent with server URL and node token
  3. Configures UFW for cluster communication
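Under the hood, joining an agent is the same get.k3s.io script pointed at the server. A hedged sketch (variable names are illustrative, not necessarily the playbook's own):

```bash
# Join an agent to the cluster. Setting K3S_URL makes the installer run
# in agent mode; the token must match the server's node token.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://${SERVER_IP}:6443" \
  K3S_TOKEN="${K3S_NODE_TOKEN}" \
  sh -s - agent --node-ip="${NODE_IP}"
```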

3. Fetch kubeconfig

```bash
make kubeconfig
```

Fetches /etc/rancher/k3s/k3s.yaml from the server, replaces 127.0.0.1 with the public IP, and merges it into ~/.kube/config under the context name KUBECONFIG_CONTEXT.
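The rewrite-and-merge step can be sketched in two commands, here with a minimal stand-in kubeconfig and the placeholder IP 203.0.113.10:

```bash
# Stand-in for /etc/rancher/k3s/k3s.yaml fetched from the server.
cat > /tmp/k3s-demo.yaml <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
# Point the kubeconfig at the public IP instead of loopback.
sed -i 's/127\.0\.0\.1/203.0.113.10/' /tmp/k3s-demo.yaml
grep 'server:' /tmp/k3s-demo.yaml
# Merging into ~/.kube/config can then use kubectl itself:
#   KUBECONFIG=~/.kube/config:/tmp/k3s-demo.yaml kubectl config view --flatten
```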


Uninstall

```bash
make provision-reset       # Uninstall k3s from all nodes (DESTRUCTIVE)
```

The Ansible reset.yml playbook:

  • Runs the official k3s-uninstall.sh or k3s-agent-uninstall.sh
  • Cleans up CNI interfaces (flannel.1, cni0)
  • Flushes iptables rules
  • Unmounts k3s bind mounts

Node token

The node token is a shared secret that agents use to authenticate with the server API.

  • Auto-generated during server install if K3S_NODE_TOKEN is empty
  • Automatically read from the server by the Ansible site.yml playbook
  • Stored on the server at /var/lib/rancher/k3s/server/node-token

To rotate: uninstall both nodes and reinstall with a new token.
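The token has a recognizable structure. A sketch with a made-up value (not a real secret):

```bash
# Format: K10<sha256-of-cluster-CA>::server:<password>  (sample below is fake).
token='K10abc123::server:s3cret'
echo "${token%%::*}"   # CA-hash prefix; agents use it to pin the server's CA cert
# → K10abc123
```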


References