
K3S Cluster

Nodes

  1. Download the Raspberry Pi Imager tool (64-bit Raspberry Pi OS image)
  2. Write the image with SSH enabled
  3. In the Raspberry Pi Imager, open the advanced options (Ctrl+Shift+X)
  4. Password -> change the default password
  5. Add a public key (optional) -> .ssh/authorized_keys

ssh-copy-id -i ~/.ssh/id_ras.pub pi@<ip address>

Insert SD card into the node and boot it with network connectivity.

Connect to node

sudo apt update
sudo apt full-upgrade
sudo vcgencmd bootloader_version
sudo rpi-eeprom-update -a
sudo raspi-config (update bootloader)

Add the variables to the file below

!!! example "/boot/cmdline.txt (end of line)"

    cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
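
After the reboot later in this guide, you can verify that the flags are active; a quick check, not part of the original steps:

cat /proc/cmdline
# the cgroup_enable/cgroup_memory parameters should appear at the end of the output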

static IP

The preference is to allocate IP addresses based on DHCP reservations. If you choose to assign a static IP, this can be done via the method below.

nano /etc/dhcpcd.conf

interface eth0
static ip_address=172.16.1.x/26
static routers=172.16.1.1
static domain_name_servers=172.16.1.1

DNS is also offered through DHCP; if a static configuration is desired, it can be set via the method below.

/etc/hosts

<IP> <HOSTNAME>

SSH aliases in ~/.zshrc (optional):

alias sshhostname='ssh pi@hostname'

Enable iptables

sudo apt-get install -y iptables arptables ebtables

sudo su -
sudo iptables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

After these changes, reboot the node:

sudo reboot

SSD instead of SD (M.2 shield required)

I use an SSD instead of an SD card on all nodes because these disks are faster and more reliable. I boot directly from the SSD using a USB 3.0 shield for the Raspberry Pi. Because the SSD is mounted on the shield and connected via USB, you can prepare it the same way as an SD card, as described in the previous steps.

If you use a new Raspberry Pi, you must boot it once from the SD card and update the EEPROM bootloader, after which you can change the boot order to make USB the first boot device:

sudo raspi-config
6. Advanced options > A6. Boot order > B2. USB Boot

lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 111.8G  0 disk
├─sda1   8:1    0   256M  0 part /boot
└─sda2   8:2    0 111.5G  0 part /

K3S installation

My cluster consists of 4 nodes and uses etcd for high availability. Because I use Rancher to manage the cluster, I install a specific version that is supported by Rancher. If you do not use Rancher, you can omit the version; K3S will then be installed with the latest version.

First node

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.23.9+k3s1" K3S_KUBECONFIG_MODE="644" K3S_TOKEN=SECRET sh -s - server --cluster-init

Second to nth node

NB: High availability on a K3S cluster requires at least 3 server nodes. Installing the etcd database on all nodes is recommended for recovery, but with fewer than 3 nodes the cluster will not automatically fail over to another node in the event of a failure.

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.23.9+k3s1" K3S_KUBECONFIG_MODE="644" K3S_TOKEN=SECRET sh -s - server --server https://lc-k3s-n3.loevencloud.nl:6443

Cluster configuration and services

A number of things need to be configured on the cluster.

Helm installation

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

!!! note "Check whether node is active"

    - systemctl enable --now k3s-agent
    - k3s kubectl get node (nodes)
    - kubectl get pods -A (pods)

Add the jetstack repository

helm repo add jetstack https://charts.jetstack.io

Update the repository list

helm repo update

Certificate manager

To use Let's Encrypt certificates in my cluster, I install cert-manager from Jetstack:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.crds.yaml

or via Helm

helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.1
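
A quick check, not part of the original steps, that the pods came up in the cert-manager namespace created above:

kubectl get pods -n cert-manager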

Create a ClusterIssuer for Let's Encrypt; I use the DNS01 solver with Azure DNS:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    email: <LETSENCRYPT-USERNAME>
    preferredChain: ""
    privateKeySecretRef:
      name: letsencrypt-production-account-ci
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - dns01:
        azureDNS:
          clientID: <APPREGID>
          clientSecretSecretRef:
            key: client-secret
            name: azuredns-config
          environment: AzurePublicCloud
          hostedZoneName: <DNS zone name>
          resourceGroupName: <ResourceGroup>
          subscriptionID: <SUBID>
          tenantID: <TENANTID>
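
The issuer references a secret containing the app registration's client secret. A minimal sketch of creating it, assuming the cert-manager namespace from the Helm install above and a placeholder for the secret value:

kubectl create secret generic azuredns-config --from-literal=client-secret=<APPREG-CLIENT-SECRET> -n cert-manager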

After cert-manager has been installed, you can request a certificate for an Ingress using the correct annotations:

cert-manager.io/cluster-issuer: letsencrypt-production
kubernetes.io/tls-acme: "true"
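
A minimal sketch of an Ingress carrying these annotations; the hostname, service name, and TLS secret name are placeholders, not values from this cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
    kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: example.<DNS zone name>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
  tls:
  - hosts:
    - example.<DNS zone name>
    secretName: example-tls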

Install NFS StorageClass

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<nfsTarget> --set nfs.path=/volume1/k3s/data/PVC
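
A minimal sketch of a PersistentVolumeClaim using this provisioner, assuming the chart's default storage class name nfs-client (verify with kubectl get storageclass):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi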

Rancher

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=cluster.loevencloud.nl --set replicas=4 --set ingress.tls.source=letsEncrypt --set letsEncrypt.email=<LETSENCRYPTUSERNAME>

kubectl -n cattle-system rollout status deploy/rancher

update rancher

helm repo update
helm fetch rancher-latest/rancher
helm get values rancher -n cattle-system > ranchercurrent.yml
helm upgrade rancher rancher-latest/rancher --namespace cattle-system -f ./ranchercurrent.yml

kubectl -n cattle-system rollout status deploy/rancher

!!! warning "After Rancher has been upgraded, wait at least 30 minutes before updating any nodes."

After Rancher is up to date, the nodes can also be provided with the latest Kubernetes version.

Node update via rancher

Go to Cluster Management > Edit Config > Kubernetes version and select the latest supported version. (Note: this step can be a bit finicky with the etcd configuration.)

first run

echo https://rancher.domein.local/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')

Cluster upgrade plan

kubectl apply -f https://raw.githubusercontent.com/rancher/system-upgrade-controller/master/manifests/system-upgrade-controller.yaml
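
The controller upgrades nodes based on Plan resources. If you manage upgrades through Rancher, Rancher creates these plans for you; below is a minimal sketch of a manually defined server plan, assuming the system-upgrade namespace and service account created by the controller manifest, with an example target version:

apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/control-plane
      operator: In
      values:
      - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.24.4+k3s1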

Cluster monitoring

prometheus operator

Create the monitoring namespace:

kubectl create namespace monitoring

cd
git clone https://github.com/prometheus-operator/prometheus-operator.git
cd prometheus-operator/
# Check current setting for namespaces in bundle.yaml
grep namespace: bundle.yaml
namespace: default
namespace: default
namespace: default
namespace: default

#We will change that to monitoring:
sed -i 's/namespace: default/namespace: monitoring/g' bundle.yaml

#Check again:
grep namespace: bundle.yaml

namespace: monitoring
namespace: monitoring
namespace: monitoring
namespace: monitoring
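
The modified bundle still needs to be applied. A sketch, assuming the monitoring namespace created above; kubectl create is used because the CRDs in bundle.yaml can be too large for a plain kubectl apply:

kubectl create -f bundle.yaml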

!!! note

    - kubectl get deploy -n monitoring
    - kubectl get pods -n monitoring
    - kubectl get svc -n monitoring

See also: https://rpi4cluster.com/monitoring/k3s-svcmonitors/

Shut down the cluster

On each node:

/usr/local/bin/k3s-killall.sh

sudo shutdown now