Kubernetes Cluster on Raspberry Pi B (part 2 – Base OS and software prep)

Continuing my documentation on building this Pi cluster. If you want to read the full thing, start at Part 1 – hardware.

Now that we’ve got all our hardware assembled, it’s time to prep the base system.

Base OS

I was using a Windows system, so I used balenaEtcher to write the Raspberry Pi OS image to the SD cards.

I chose Raspbian Buster Lite for the OS.

Use Etcher to write the image to each SD card. After the image is written, eject the card and then put it back in your system, and the /boot partition should mount. Create an empty file called “ssh” in the root of the boot partition to enable the SSH service at boot.
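If you have a Linux shell handy, creating the flag file is a one-liner. The mount path below is an assumption – use wherever your system actually mounts the boot partition:

# Create the empty "ssh" flag file on the boot partition
# (/media/$USER/boot is a common auto-mount path; yours may differ)
touch /media/$USER/boot/ssh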

As the SD cards finish writing, you can put each one into a Pi and start configuring it. I see arpwatch alerts go out when a new MAC address connects to my network, so I know which IP address is being allocated as each Pi comes online.

So, for each Pi – SSH into it. I found an order that I liked so I didn’t screw up the static DNS I’d set up.

  1. Configure a static IP (/etc/dhcpcd.conf – see the sketch after this list) – important note: you must not have more than two nameserver entries, or CoreDNS will fail
  2. Optional – configure DNS names (I did kube1-4)
  3. Reboot to pick up the new IP (restarting networking alone should also work)
  4. Use raspi-config to set the hostname and timezone, enable SSH (not sure that’s necessary), and expand the filesystem
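Here’s a minimal sketch of the static IP block in /etc/dhcpcd.conf. The address matches what kube1 ends up with later (192.168.1.20); the router and DNS values are assumptions – swap in your own:

# /etc/dhcpcd.conf – static IP sketch for kube1 (example values)
interface eth0
static ip_address=192.168.1.20/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1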

TMUX:
I used tmux with a multi-server setup to run these commands on all servers at once. This was another thing I’d wanted to play around with – I knew other people did it, but hadn’t had a real reason to try it before.

Prereqs for tmux-multi-server:

  1. Have tmux installed on each destination server (raspberry pi)
  2. Have tmux installed on your source terminal (ubuntu under win10)
  3. Have already used ssh-copy-id to set up passwordless (key-based) auth between your terminal and each server – see the loop after this list
  4. An ssh-multi script – I used this one from GitHub
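Pushing your key to every node is quick with a small loop. This assumes the kube1-4 hostnames from the DNS step (substitute IP addresses if you skipped it) and the default pi user:

# Copy your SSH public key to each Pi for passwordless login
for host in kube1 kube2 kube3 kube4; do
  ssh-copy-id pi@$host
done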

Once this is done, I can run:
./ssh-multi.sh -d "kube1 kube2 kube3 kube4"
and now I have one terminal session with all four SSH sessions in it – typing once echoes across all servers at the same time.
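Under the hood, scripts like this mostly just open one pane per host and turn on tmux’s built-in synchronize-panes option. A minimal manual equivalent, assuming the kube1-4 hostnames:

# One pane per Pi, with keystrokes mirrored to all of them
tmux new-session -d -s kube 'ssh kube1'
tmux split-window -t kube 'ssh kube2'
tmux split-window -t kube 'ssh kube3'
tmux split-window -t kube 'ssh kube4'
tmux select-layout -t kube tiled
tmux set-window-option -t kube synchronize-panes on
tmux attach -t kube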

On all systems:

  1. Disable IPv6 – at least for me, IPv6 caused errors when bringing up the worker nodes: sudo nano /etc/sysctl.conf
  2. Inside the file editor, add these lines:
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
  3. Edit /boot/cmdline.txt for Kubernetes support – add this to the end of the existing single line (see the sketch below): cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
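Two quick sketches for those steps. The sed one-liner is just a convenience – double-check the file afterwards, since cmdline.txt must stay a single line:

# Apply the sysctl changes without waiting for a reboot
sudo sysctl -p

# Append the cgroup flags to the end of the single line in /boot/cmdline.txt
# (note the leading space in the replacement, so the flags don't run into the last option)
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt

A reboot is still needed for the cmdline.txt change to take effect.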

Begin installing Apps

At this point, I end the multi-SSH session and log in only to my first Pi, which will be the master for the cluster. My mistake the first time around was using kubeadm; use K3s instead – see full instructions here.

On your master/controller, install K3s and set up the cluster:
$ curl -sfL https://get.k3s.io | sh -
Check that the systemd service started correctly:
$ sudo systemctl status k3s

Get the node token so you can join your workers to the master:
$ sudo cat /var/lib/rancher/k3s/server/node-token
K1089729d4ab5e51a44b1871768c7c04ad80bc6319d7bef5d94c7caaf9b0bd29efc::node:1fcdc14840494f3ebdcad635c7b7a9b7

Log out of the first Pi, and log into all your workers (I used ssh-multi again):
curl -sfL https://get.k3s.io | sh -
sudo k3s agent --server https://192.168.1.20:6443 --token K1089729d4ab5e51a44b1871768c7c04ad80bc6319d7bef5d94c7caaf9b0bd29efc::node:1fcdc14840494f3ebdcad635c7b7a9b7
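Alternatively, the K3s installer can install and join a worker in one step using its documented K3S_URL and K3S_TOKEN environment variables – paste your own node token in place of the placeholder:

# One-step worker install + join
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.20:6443 K3S_TOKEN=<node-token> sh -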

If you want to be able to use a short name for your server (i.e. --server https://kube1:6443), then you will need to edit your /etc/dhcpcd.conf file to support domain search:
static domain_search=zinger.org
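After a reboot (or restarting the dhcpcd service), you can check that the search domain took effect by looking at the resolver config – the nameserver shown here is just an example:

cat /etc/resolv.conf
# search zinger.org
# nameserver 192.168.1.1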

You should see log messages scroll across your screen as all the nodes join the master. If you’re getting an error about an invalid CA cert at 127.0.0.1, chances are your name resolution isn’t working. Try pinging the master server by name and review the settings above until it works.

Now you can log into the master directly and see the list of nodes:
sudo kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE     VERSION         INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
kube3   Ready    worker   5h59m   v1.15.4-k3s.1   192.168.1.22   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     containerd://1.2.8-k3s.1
kube2   Ready    worker   5h59m   v1.15.4-k3s.1   192.168.1.21   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     containerd://1.2.8-k3s.1
kube4   Ready    worker   5h59m   v1.15.4-k3s.1   192.168.1.23   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     containerd://1.2.8-k3s.1
kube1   Ready    master   6h14m   v1.15.4-k3s.1   192.168.1.20   <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7l+     containerd://1.2.8-k3s.1

And get your list of pods:
sudo kubectl get pods -A
NAMESPACE     NAME                         READY   STATUS      RESTARTS   AGE
kube-system   helm-install-traefik-fscgm   0/1     Completed   0          6h15m
kube-system   coredns-66f496764-fkzc5      1/1     Running     3          6h15m
kube-system   svclb-traefik-8ljd4          3/3     Running     9          6h14m
kube-system   traefik-d869575c8-7wgqk      1/1     Running     3          6h14m
kube-system   svclb-traefik-2r86x          3/3     Running     15         6h
kube-system   svclb-traefik-nmbh9          3/3     Running     15         6h
kube-system   svclb-traefik-jbwfs          3/3     Running     15         6h

Next step – remote admin.
