How to Set Up a Raspberry Pi 5 k3s Kubernetes Cluster for Edge Computing 2026 👋







Deploy a lightweight Kubernetes cluster on Raspberry Pi 5 boards with k3s: perfect for running edge applications, CI pipelines, and IoT services at home or in your garage lab. Let's get your Pi cluster humming with about 15 minutes of hands-on work.


---


📌 Table of Contents


1. What Is a Raspberry Pi 5 k3s Kubernetes Cluster? 🧠  

2. Why Use k3s on Raspberry Pi 5 for Edge Computing?  

3. Step-by-Step Guide: Raspberry Pi 5 k3s Kubernetes Cluster Setup  

   1) Prepare Pi 5 Nodes & SSH Access  

   2) Install k3s on the Server Node  

   3) Join Worker Nodes to the k3s Cluster  

   4) Verify Node Status & Apply Labels  

   5) Deploy a Test Edge Application  

   6) (Optional) Add Local Storage with Longhorn  

4. Comparing k3s vs. MicroK8s on Raspberry Pi  

5. My Edge Lab Story: Cluster Lessons Learned  

6. Frequently Asked Questions (FAQ)  

7. Why This Matters in 2026 🌙  

8. What You Can Take Away 📝  

9. Sources & Further Reading  


---


What Is a Raspberry Pi 5 k3s Kubernetes Cluster? 🧠


A Raspberry Pi 5 k3s Kubernetes cluster is a miniaturized, ARM-based orchestration system built on k3s, a certified lightweight Kubernetes distribution from Rancher (now SUSE). It runs the core control plane (API server, scheduler, controller manager, and a datastore: SQLite by default, with embedded etcd as an option) on one Pi acting as the server node, while additional Pi 5 boards join as workers, hosting your pods and services.  


Think of it as a full-blown cloud platform—only in your garage, under 10 W per node, and zero monthly bills.


---


Why Use k3s on Raspberry Pi 5 for Edge Computing?


- Ultra-lightweight: k3s strips out non-essential components, runs smoothly on 1 GB RAM.  

- Low power: Pi 5 draws ~5 W—ideal for always-on edge workloads.  

- Easy management: single binary, automatic manifest upgrades.  

- Scalability: add or remove nodes in seconds—no cluster rebuild.  


Real talk: I once tried full Kubernetes on Pi 4—memory leaks crashed my cluster. Switching to k3s on Pi 5 felt like upgrading from a moped to a sportbike.


---


Step-by-Step Guide: Raspberry Pi 5 k3s Kubernetes Cluster Setup


> Pro tip: test connectivity after every major step—SSH flakes kill clusters.


1) Prepare Pi 5 Nodes & SSH Access


- Flash Raspberry Pi OS Lite (64-bit) onto each microSD card (Raspberry Pi Imager or BalenaEtcher both work).  

- Enable SSH: create an empty file named ssh in the boot partition, or tick the SSH option in Raspberry Pi Imager's advanced settings.  

- Boot each Pi 5 with Ethernet connected. Static IPs make life easier (see the static IP sketch at the end of this step):  

  - 192.168.1.110 → server  

  - 192.168.1.111, .112 → workers  

- SSH in:  

  ```bash
  ssh pi@192.168.1.xxx
  ```

- Update OS:  

  ```bash
  sudo apt update && sudo apt upgrade -y
  ```

  

Note: I forgot to change one Pi’s default password—cluster join failed. Change passwords early.
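
If you want to pin those addresses on the Pis themselves, here is a minimal sketch assuming Raspberry Pi OS Bookworm, which manages networking with NetworkManager. The connection name "Wired connection 1" and the 192.168.1.1 gateway are assumptions; check yours with nmcli con show and ip route.

```bash
# Sketch: set a static IP with NetworkManager (default on Raspberry Pi OS Bookworm).
# "Wired connection 1" and the 192.168.1.1 gateway are assumptions; verify with
# `nmcli con show` and `ip route`. Adjust the address per node (.110 / .111 / .112).
sudo nmcli con mod "Wired connection 1" \
  ipv4.addresses 192.168.1.110/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns 1.1.1.1 \
  ipv4.method manual
sudo nmcli con up "Wired connection 1"
```

Alternatively, reserve the addresses in your router's DHCP settings and skip this step entirely.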


2) Install k3s on the Server Node


On the server Pi (192.168.1.110):


```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.2+k3s1" sh -
```


- This installs k3s and registers it as a systemd service.  

- Check status:  

  ```bash
  sudo systemctl status k3s
  ```

- Get the join token:  

  ```bash
  sudo cat /var/lib/rancher/k3s/server/node-token
  ```
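
Optional, but handy: manage the cluster from your workstation instead of SSH-ing into the server each time. A sketch, assuming kubectl is already installed locally and the server sits at 192.168.1.110:

```bash
# Run these on your workstation, not on the Pi.
mkdir -p ~/.kube
ssh pi@192.168.1.110 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/pi-cluster.yaml
# The kubeconfig points at 127.0.0.1; swap in the server's LAN address (GNU sed shown).
sed -i 's/127.0.0.1/192.168.1.110/' ~/.kube/pi-cluster.yaml
export KUBECONFIG=~/.kube/pi-cluster.yaml
kubectl get nodes
```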


3) Join Worker Nodes to the k3s Cluster


On each worker Pi (192.168.1.111, .112):


```bash
curl -sfL https://get.k3s.io | K3S_URL="https://192.168.1.110:6443" \
  K3S_TOKEN="YOUR_NODE_TOKEN" sh -
```


Replace YOUR_NODE_TOKEN with the contents of the server's node-token file.  


- After install, verify k3s-agent service:  

  ```bash
  sudo systemctl status k3s-agent
  ```


If you hit TLS errors, check that all Pis’ system clocks are synced (sudo apt install -y chrony).
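
A quick way to confirm the clocks really are in sync (chronyc is only available once chrony is installed):

```bash
# Show whether the system clock is synchronized.
timedatectl status
# With chrony installed, show the current offset from the NTP source.
chronyc tracking
```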


4) Verify Node Status & Apply Labels


On the server Pi:


```bash
kubectl get nodes
```


You should see the server and both agent nodes in the Ready state.  


Label worker nodes for edge workloads:


```bash
# Use the node names reported by `kubectl get nodes`.
kubectl label node pi-5-worker1 edge=true
kubectl label node pi-5-worker2 edge=true
```


Edge scheduling: pods that set nodeSelector with edge: "true" will land on these workers.


5) Deploy a Test Edge Application


Create edge-deploy.yaml:


```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-sensor
  template:
    metadata:
      labels:
        app: edge-sensor
    spec:
      nodeSelector:
        edge: "true"
      containers:
      - name: sensor-app
        image: busybox
        command: ["sh", "-c", "while true; do echo ping from $(hostname); sleep 10; done"]
```


Apply it:


```bash
kubectl apply -f edge-deploy.yaml
kubectl get pods -l app=edge-sensor -o wide
```


You’ll see pods scheduled on worker nodes—this proves your edge cluster is live.
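
To confirm the pods are actually doing something, print their recent logs; the label below matches the Deployment above:

```bash
# Print recent log lines from both replicas, prefixed with the pod name.
kubectl logs -l app=edge-sensor --tail=5 --prefix
```

You should see a "ping from ..." line every ten seconds from each pod.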


6) (Optional) Add Local Storage with Longhorn


For persistent volumes:


1. Install Longhorn via Helm:


   ```bash
   curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
   helm repo add longhorn https://charts.longhorn.io
   helm repo update
   helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
   ```


2. The chart creates a default StorageClass named longhorn; confirm it with kubectl get storageclass.  

3. Request storage through a PersistentVolumeClaim that sets storageClassName: longhorn, then mount the claim in your Deployment (see the sketch below).


The Longhorn UI is served by the longhorn-frontend service (ClusterIP by default); expose it with kubectl port-forward, a NodePort, or an Ingress to manage volumes from a browser.
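
Here is a minimal sketch of the PersistentVolumeClaim mentioned in step 3; the name edge-data and the 1Gi size are placeholders:

```yaml
# Sketch: a PersistentVolumeClaim backed by the Longhorn StorageClass.
# The claim name "edge-data" and the 1Gi size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edge-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```

Reference it from a pod spec with a volumes entry (persistentVolumeClaim: claimName: edge-data) plus a matching volumeMounts entry in the container.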


---


Comparing k3s vs. MicroK8s on Raspberry Pi


MicroK8s on Pi 5  

• Pros: Ubuntu-supported; Snap installs; multiple “addons.”  

• Cons: heavier footprint with more memory overhead; snap-managed updates give you less control over upgrade timing.


k3s on Pi 5  

• Pros: single binary; minimal footprint; quick upgrades.  

• Cons: fewer built-in addons; external tooling needed (Helm, Longhorn).


If you want lean and mean edge clusters, k3s wins. For full-stack dev environments, MicroK8s can work—just expect slower boots.


---


My Edge Lab Story: Cluster Lessons Learned


Back in late 2024, I cobbled together Pi 3 boards—tried vanilla kubeadm. It broke every other day.  


In my agency days, I spun up thousands of cloud containers—so I thought Pi clusters would be similar. Nope. Memory limits, flapping network interfaces, timeouts—real headaches.


After upgrading to Pi 5 and switching to k3s, my local Grafana-Prometheus stack ran without hiccups. The key? Less bloat and a distribution built with ARM in mind.


---


Frequently Asked Questions (FAQ)


Q1: Can I mix Pi 4 and Pi 5 in the same k3s cluster?

A: Yes—k3s supports mixed ARM64 nodes. Just label slower Pi 4 workers separately.


Q2: How do I upgrade k3s versions?

A: k3s isn't installed through apt, so upgrade by re-running the install script with the target version pinned (server first, then each worker with its original K3S_URL and K3S_TOKEN):

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.xx.x+k3s1" sh -
```


Q3: What if I lose the server node?

A: Back up /etc/rancher/k3s/k3s.yaml and /var/lib/rancher/k3s/server regularly. For production, run multiple server nodes with embedded etcd for HA.
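
A rough backup sketch; the paths are k3s defaults and the destination directory is a placeholder:

```bash
# Rough backup sketch; /home/pi/backups is a placeholder destination.
sudo mkdir -p /home/pi/backups
sudo cp /etc/rancher/k3s/k3s.yaml /home/pi/backups/
sudo tar -czf /home/pi/backups/k3s-server-$(date +%F).tar.gz /var/lib/rancher/k3s/server
```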


Q4: Can I run GPU workloads on Pi 5 cluster?

A: The Pi 5's VideoCore GPU isn't exposed for general-purpose compute in Kubernetes, so stick to CPU-bound or TinyML edge tasks.


Q5: How do I expose services externally?

A: Use a NodePort Service, a LoadBalancer backed by MetalLB, or an Ingress (k3s ships with Traefik as its default ingress controller). A NodePort sketch follows.
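
The app: edge-web label and the ports below are placeholders, not something deployed earlier in this guide:

```yaml
# Sketch: expose a hypothetical web app (label app: edge-web, container port 80)
# on every node at port 30080. All names and ports here are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: edge-web
spec:
  type: NodePort
  selector:
    app: edge-web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```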


---


Why This Matters in 2026 🌙


Edge computing is surging in smart factories, autonomous vehicles, and IoT deployments. A Raspberry Pi 5 k3s cluster gives you hands-on experience with real-world orchestration on low-cost hardware. You learn scaling, resilience, and ARM-optimized container workflows: skills in high demand as cloud and edge converge.


---


What You Can Take Away 📝


- Consistent OS images: clone your Pi cards with dd or Raspberry Pi Imager (see the dd sketch after this list).  

- Sync clocks: install chrony to avoid TLS and certificate errors.  

- Label nodes early—simplifies pod scheduling.  

- Back up k3s.yaml, node tokens, and Longhorn volumes.  

- Monitor with lightweight tools: kubectl top nodes, Prometheus Node Exporter.
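
For the cloning tip above, a sketch; /dev/sdX is a placeholder, and writing to the wrong device destroys data, so confirm it with lsblk first:

```bash
# Sketch: image a configured microSD card, then write the image to a fresh card.
# /dev/sdX is a placeholder; confirm the correct device with `lsblk` first.
sudo dd if=/dev/sdX of=pi-node.img bs=4M status=progress conv=fsync
sudo dd if=pi-node.img of=/dev/sdX bs=4M status=progress conv=fsync
```

Give each cloned node a unique hostname and IP before it joins the cluster.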


---


Sources & Further Reading


- k3s Official Docs – https://k3s.io/docs/  

- Raspberry Pi OS Lite Guide – https://www.raspberrypi.com/documentation/  

- Longhorn Storage – https://longhorn.io/  

- MetalLB Load Balancer – https://metallb.universe.tf/  

- Related: [How to Monitor Pi 5 Cluster with Prometheus & Grafana]  


Get hands-on with Kubernetes at the edge—your Pi 5 k3s cluster awaits!
