
K3S Cluster

Nodes

  1. Download the Raspberry Pi Imager tool (64-bit Raspberry Pi OS image)
  2. Write the image with SSH enabled
  3. In the Raspberry Pi Imager, press Ctrl+Shift+X to open the advanced options
  4. Password -> change the password
  5. Add public key (optional) -> ~/.ssh/authorized_keys

ssh-copy-id -i ~/.ssh/id_rsa.pub pi@<ip address>

Place the SD card in the node and start it with network connectivity.

Connect to the node

sudo apt update
sudo apt full-upgrade
sudo vcgencmd bootloader_version
sudo rpi-eeprom-update -a
sudo raspi-config (update bootloader)

Add the parameters below to the following file:

!!! Example "/boot/cmdline.txt (end of line)"
    cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
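
A minimal check, after the reboot later in these steps, that the cgroup parameters were picked up:

# the cgroup_* parameters should appear at the end of the line
cat /proc/cmdline
# the "enabled" column for memory should read 1
grep memory /proc/cgroups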

static IP

The preference is to allocate IP addresses via DHCP reservations; if a static IP is required instead, it can be configured via the method below.

nano /etc/dhcpcd.conf

interface eth0
static ip_address=172.16.1.x/26
static routers=172.16.1.1
static domain_name_servers=172.16.1.1
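
After editing dhcpcd.conf, restarting the dhcpcd service (or rebooting) applies the change; a quick check that the address is active:

sudo systemctl restart dhcpcd
ip -4 addr show eth0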

DNS is also handed out via DHCP; if a static configuration is desired here as well, it can be done via the method below.

/etc/hosts

<IP> <HOSTNAME>

SSH aliases (optional), in ~/.zshrc:

alias sshhostname='ssh pi@hostname'
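
A purely hypothetical example with placeholder addresses and hostnames (use your own node names):

# /etc/hosts
172.16.1.11 k3s-n1
172.16.1.12 k3s-n2

# ~/.zshrc
alias sshn1='ssh pi@k3s-n1'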

Enable iptables

sudo apt-get install -y iptables arptables ebtables

sudo iptables -F
sudo ip6tables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
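
iptables reports the active backend in its version string, so you can confirm the switch to the legacy backend:

sudo iptables --version
# should mention "(legacy)", e.g. iptables v1.8.x (legacy)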

Reboot the node after these changes:

sudo reboot

# SSD instead of SD (M.2 Shield Required)

I use an SSD instead of an SD card on all nodes because these disks are faster and more reliable than an SD card; with a USB 3.0 shield for the Raspberry Pi I boot directly from the SSD. Because the SSD is mounted on the shield and connected via USB, you can prepare it in the same way as an SD card, as described in the previous steps.

If you use a new Raspberry Pi you have to start it once with the SD card and update the EEPROM/bootloader, after which you can change the boot order so that USB is the first boot device:

sudo raspi-config 6. Advanced options > A6. Boot order > B2. USB BOOT

lsblk

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 111.8G  0 disk
├─sda1        8:1    0   256M  0 part /boot
└─sda2        8:2    0 111.5G  0 part /
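
To confirm USB really comes before the SD card, you can read back the EEPROM configuration; the nibbles in BOOT_ORDER are tried right to left, so 0xf14 means USB first, then SD, then retry:

vcgencmd bootloader_config | grep BOOT_ORDER
# e.g. BOOT_ORDER=0xf14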

# K3S Installation

My cluster consists of 4 nodes and uses etcd high availability. Because I use Rancher for the management of the cluster, I install a specific version supported by Rancher; if you do not use Rancher you can omit the version and K3S will be installed with the latest version.

External DB HA configuration

If you don't want to use etcd HA you can use an external database:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" K3S_TOKEN=SECRET sh -s - server --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/k3s"

1st node (etcd HA)

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.23.9+k3s1" K3S_KUBECONFIG_MODE="644" K3S_TOKEN=SECRET sh -s - server --cluster-init

2nd to nth node

NB: High availability on a K3S cluster only works from 3 server nodes. Installing the etcd database on all nodes is advisable for recovery, but with fewer than 3 nodes the cluster will not fail over automatically to another node in the event of a failure.

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.23.9+k3s1" K3S_KUBECONFIG_MODE="644" K3S_TOKEN=SECRET sh -s - server --server https://lc-k3s-n3.loevencloud.nl:6443
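
Once the third node has joined, a quick sanity check from any server node (the kubeconfig is readable thanks to K3S_KUBECONFIG_MODE=644):

kubectl get nodes -o wide
# every node should list the roles control-plane,etcd,master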

external DB HA

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" K3S_TOKEN=SECRET sh -s - server --datastore-endpoint="mysql://username:password@tcp(syn04.loevencloud.nl:3306)/k3s"

# read the node token on the first server
cat /var/lib/rancher/k3s/server/token

# join additional servers with that token
curl -sfL https://get.k3s.io | sh -s - server --token=SECRET --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - server --token=SECRET --datastore-endpoint="mysql://k3s:#Roodenbroek8@tcp(syn04.loevencloud.nl:3306)/k3s"

# Cluster Configuration and Services

A number of things must be configured on the cluster:

- Copy the kubeconfig: kubectl config view --raw > ~/.kube/config
- Install cert-manager
- Install the ClusterIssuer
- Install Rancher
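
To manage the cluster from a workstation instead of a node, a common approach is to copy the kubeconfig from a server and point it at that server's address; a sketch using one of my node names as an example (adjust host and paths to your own setup):

scp pi@lc-k3s-n3.loevencloud.nl:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# the file points at 127.0.0.1 by default, so swap in the node address
sed -i 's/127.0.0.1/lc-k3s-n3.loevencloud.nl/' ~/.kube/config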

kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'

## Helm Installation

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
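
A quick check that Helm was installed:

helm version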

!!! Note "Check whether the node is active"
    - systemctl enable --now k3s-agent
    - k3s kubectl get node (nodes)
    - kubectl get pods -A (pods)

### Add the Jetstack Repository

helm repo add jetstack https://charts.jetstack.io

### Update the Repository List

helm repo update

Certificate manager

To use Let's Encrypt certificates in my cluster, I install [cert-manager from Jetstack](https://cert-manager.io/docs/installation/)

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.crds.yaml

or via Helm:

helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.9.1

or with a newer version:

helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.18.0
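
Before creating an issuer, check that the three cert-manager deployments are up:

kubectl get pods -n cert-manager
# cert-manager, cert-manager-cainjector and cert-manager-webhook should be Running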

Create a ClusterIssuer for Let's Encrypt; I use the DNS01 solver with Azure DNS:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    email: <LETSENCRYPT-USERNAME>
    preferredChain: ""
    privateKeySecretRef:
      name: letsencrypt-production-account-ci
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - dns01:
        azureDNS:
          clientID: <APPREGID>
          clientSecretSecretRef:
            key: client-secret
            name: azuredns-config
          environment: AzurePublicCloud
          hostedZoneName: <DNS Zone Name>
          resourceGroupName: <ResourceGroup>
          subscriptionID: <SUBID>
          tenantID: <TENANTID>
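
The issuer above expects a secret azuredns-config with the key client-secret containing the app registration's client secret; for a ClusterIssuer, cert-manager by default looks this up in its own namespace. A sketch of creating it and applying the issuer (the file name and the <APPREGSECRET> placeholder are mine):

kubectl create secret generic azuredns-config -n cert-manager --from-literal=client-secret='<APPREGSECRET>'
kubectl apply -f clusterissuer.yaml
# READY should become True
kubectl get clusterissuer letsencrypt-production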

After cert-manager has been installed, you can request a certificate by adding the right annotations:

cert-manager.io/cluster-issuer: letsencrypt-production
kubernetes.io/tls-acme: "true"
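
For example, on an Ingress the annotations end up under metadata; a hypothetical sketch (name, host and backend service are placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
    kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: example.loevencloud.nl
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80
  tls:
  - hosts:
    - example.loevencloud.nl
    secretName: example-tls
EOF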

Install NFS StorageClass

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<nfsTarget> --set nfs.path=/volume1/k3s/data/PVC
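
To verify the provisioner works end-to-end, look up the StorageClass (the chart names it nfs-client by default, unless overridden) and create a small test claim:

kubectl get storageclass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
# should report Bound, then clean up
kubectl get pvc nfs-test
kubectl delete pvc nfs-test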

Rancher

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=cluster.loevencloud.nl --set replicas=4 --set ingress.tls.source=letsEncrypt --set letsEncrypt.email=<LETSENCRYPTUSERNAME>

kubectl -n cattle-system rollout status deploy/rancher

update rancher

helm repo update
helm fetch rancher-latest/rancher
helm get values rancher -n cattle-system > ranchercurrent.yml
helm upgrade rancher rancher-latest/rancher --namespace cattle-system -f ./ranchercurrent.yml

kubectl -n cattle-system rollout status deploy/rancher

!!! Warning "After Rancher has been upgraded, wait at least 30 minutes before you update nodes or anything else."

After Rancher is up to date, the nodes can also be updated to the latest Kubernetes version.

Node update via Rancher

Go to Cluster Management > Edit Config > Kubernetes Version and select the latest supported version. (Note: this can be a bit wonky with the etcd config.)

first run

echo https://rancher.domein.local/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')

Cluster upgrade plan

kubectl apply -f https://raw.githubusercontent.com/rancher/system-upgrade-controller/master/manifests/system-upgrade-controller.yaml
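
The controller only acts on Plan resources, so an upgrade also needs at least a plan for the server nodes; a sketch based on the upstream K3S/system-upgrade-controller examples (channel, concurrency and node selector are assumptions to adapt, and the system-upgrade namespace and service account come from the controller manifest):

cat <<EOF | kubectl apply -f -
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  serviceAccountName: system-upgrade
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/control-plane
      operator: In
      values:
      - "true"
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
EOF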

Cluster monitoring

prometheus operator

Create the monitoring namespace:

kubectl create namespace monitoring

cd
git clone https://github.com/prometheus-operator/prometheus-operator.git
cd prometheus-operator/
# Check current setting for namespaces in bundle.yaml
grep namespace: bundle.yaml
namespace: default
namespace: default
namespace: default
namespace: default

#We will change that to monitoring:
sed -i 's/namespace: default/namespace: monitoring/g' bundle.yaml

#Check again:
grep namespace: bundle.yaml

namespace: monitoring
namespace: monitoring
namespace: monitoring
namespace: monitoring
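
After the substitution the operator still has to be installed from the modified bundle; the manifest is large, so plain kubectl create (or server-side apply) is the usual route:

kubectl create -f bundle.yaml
# or, when re-running: kubectl apply --server-side -f bundle.yaml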

!!! Note
    - kubectl get deploy -n monitoring
    - kubectl get pods -n monitoring
    - kubectl get svc -n monitoring

See also: <https://rpi4cluster.com/monitoring/k3s-svcmonitors/>

# Disable Cluster

On every node:

/usr/local/bin/k3s-killall.sh

sudo shutdown now