# DISCLAIMER

This is not really a guide; it's essentially my notes from when I set up k3s on my Odroid MC1 cluster.

# Setting up an Odroid MC1/N2 K3S cluster

I initially saw a video by NetworkChuck about setting up a Raspberry Pi k3s cluster; see his blog post [here](https://learn.networkchuck.com/courses/take/ad-free-youtube-videos/lessons/26093614-i-built-a-raspberry-pi-super-computer-ft-kubernetes-k3s-cluster-w-rancher). I first tried to set up k3s on [my Odroid cluster](https://www.inkletblot.com/hardkernel-odroid-kubernetes-cluster) using his method; however, as noted at the bottom of this post, I had some issues with it. So, after some time spent trying to fix the issues that were preventing his method from working, I went looking for another option.

## My personal notes

I have included these just in case they lead someone else in the right direction in the future.

> It seems NetworkChuck's setup does not work for me on my Odroids; it gets installed but is failing consistently for some reason.
>
> I will try with this/these soon:
> [option 1](https://medium.com/@amadmalik/installing-kubernetes-on-raspberry-pi-k3s-and-docker-on-ubuntu-20-04-ef51e5e56),
> [option 2](https://computingforgeeks.com/install-kubernetes-on-ubuntu-using-k3s/)

> It seems that this is my issue: [Kubernetes CGROUP PIDS](https://forum.odroid.com/viewtopic.php?p=321432&sid=cfa9f65dab7eaa4a56c67b0bafe6ff60#p321432)
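
You can check whether the running kernel actually exposes the `pids` cgroup controller before going any further. This check is my own addition (the forum post above explains the underlying issue); the `enabled` column should read `1`:

```shell
# /proc/cgroups lists each cgroup controller and whether it is enabled;
# if there is no "pids" row (or enabled is 0), k3s pods will fail
grep -E '^#subsys|^pids' /proc/cgroups
```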

## The Docker Method

### Initial Setup

First, set up the master and worker hosts.

My setup is an Odroid N2 as the master, with the IP address ending in .180, and five Odroid MC1s as workers with IPs ending in .181-.185.

Both the network config and the hostname can be set up by mounting the rootfs and manually editing/adding the required files.

Example netplan `10-config.yaml`:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [192.168.0.XXX/16]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1, 1.1.1.1]
        search: [mydomain]
```

Also set the timezone if you want:

`sudo timedatectl set-timezone Australia/Adelaide`

In my case the following was used (see the sketch after this list):

1. Flash the image to a micro SD card.
2. Mount the micro SD card's rootfs partition: `mount /dev/mmcblk... /mnt/tmp`
3. Edit `/etc/hostname` and add the netplan config above to `/etc/netplan`.
4. Unmount `/mnt/tmp`.
5. Put the SD card in the Odroid SBC and power on.
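
A minimal sketch of steps 2-4, assuming the rootfs is the second partition of a card at `/dev/mmcblk0` (check with `lsblk`) and that `odroid-mc1-1` is just an example hostname:

```shell
# mount the rootfs partition of the freshly flashed card
sudo mkdir -p /mnt/tmp
sudo mount /dev/mmcblk0p2 /mnt/tmp

# set the hostname (example name)
echo 'odroid-mc1-1' | sudo tee /mnt/tmp/etc/hostname

# drop the netplan config from above into place
sudo cp 10-config.yaml /mnt/tmp/etc/netplan/10-config.yaml

sudo umount /mnt/tmp
```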

### Kernel Patch for MC1s

We must rebuild the kernel with updated options so that cgroup_pids is enabled. Hardkernel has a guide [here](https://wiki.odroid.com/odroid-xu4/os_images/linux/ubuntu_5.4/ubuntu_5.4/kernel_build_guide) for rebuilding; only two edits are required after the `make odroidxu4_defconfig` step, and they are covered [here](https://forum.odroid.com/viewtopic.php?p=321432&sid=cfa9f65dab7eaa4a56c67b0bafe6ff60#p321432).

Note that the following tools are required for the build: bison, flex, libssl-dev, and bc.

`apt install bison flex libssl-dev bc -y`
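
For context, the overall build flow looks roughly like the sketch below. The branch name is an assumption on my part (check the wiki guide for the exact one), and I am assuming, per the forum post, that enabling `CONFIG_CGROUP_PIDS` is the key change:

```shell
# clone Hardkernel's kernel tree (branch name assumed; confirm against the wiki guide)
git clone --depth 1 https://github.com/hardkernel/linux -b odroid-5.4.y
cd linux

make odroidxu4_defconfig

# enable the pids cgroup controller; see the forum post for the exact edits
./scripts/config --enable CONFIG_CGROUP_PIDS

make -j"$(nproc)" zImage dtbs modules
sudo make modules_install
# then install zImage and the dtb to the boot partition as per the wiki guide
```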

### The K3S install

Run the following on all nodes:

```shell
# flush iptables and switch to the legacy backend, then reboot
iptables -F \
  && update-alternatives --set iptables /usr/sbin/iptables-legacy \
  && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
  && reboot

apt update; apt upgrade -y; apt autoremove -y; apt clean; apt install docker.io curl -y
reboot

systemctl start docker
systemctl enable docker

systemctl status docker

# Be sure that the firewall is disabled, for ease
ufw disable
```

Then run the following only on the master node:

```shell
# for master
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --docker

# check it's running
systemctl status k3s
kubectl get nodes

# Get the join token from the master; make sure to store it somewhere
cat /var/lib/rancher/k3s/server/node-token
```

Then run the following on the worker nodes, updating the command for each:

```shell
# for workers
# Fill this out ...
curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> \
  K3S_NODE_NAME="odroid-mc1-X" sh -s - --docker

systemctl status k3s-agent
```

And thus you should be done; check the master node to see:

```shell
# Check the node was added, on the master
kubectl get nodes
```

And all should be up and running correctly; it was for me at least.
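
Beyond `kubectl get nodes`, a quick optional smoke test (my own addition; the pod name and image are arbitrary) is to run a throwaway pod and confirm it gets scheduled onto a worker:

```shell
# run a short-lived busybox pod, see where it landed, then clean up
kubectl run smoke-test --image=busybox --restart=Never -- sleep 30
kubectl get pod smoke-test -o wide   # the NODE column shows which worker took it
kubectl delete pod smoke-test
```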

I have kept the following notes attached here for posterity. They actually came first in this effort, chronologically, but since they stop abruptly near the end I felt it better to lead with the successful solution.

-ink

---

## Networkchuck

This did not initially work for me and I gave up on it. I think the issue was actually the cgroup_pids thing covered above, but once I got my second attempt working I didn't want to come back to this.

Once the machines have been set up with IP addresses and hostnames (odroid-n2, odroid-mc1-1 to 5), you will want to set up SSH access to each machine; I have a couple of Ansible playbooks that I use for this.

Either the following, to set up users and access:

```yaml
- hosts: all
  become: yes
  tasks:
    - name: create the 'kuber' user
      user: name=kuber append=yes state=present createhome=yes shell=/bin/bash

    - name: allow 'kuber' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: "kuber ALL=(ALL) NOPASSWD: ALL"
        validate: "visudo -cf %s"

    - name: set up authorised keys for the 'kuber' user
      authorized_key: user=kuber key="{{ item }}"
      with_file:
        - ~/.ssh/id_rsa.pub
```

Or if you already set up users:

```yaml
- hosts: all
  become: yes
  tasks:
    - name: set up authorised keys for the 'root' user
      authorized_key: user=root key="{{ item }}"
      with_file:
        - ~/.ssh/id_rsa.pub
```

The above can be used with a hosts file such as the following:

```ini
[masters]
master ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>

[workers]
worker1 ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>
worker2...
...

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```
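
With the inventory saved as `hosts` and either playbook saved alongside it (`setup-access.yml` is just a placeholder name), the run looks something like this; note that password-based SSH from an inventory needs sshpass on the control machine:

```shell
# sshpass is required when ansible_ssh_pass is used in the inventory
sudo apt install sshpass -y

# setup-access.yml is a placeholder for whichever playbook above you saved
ansible-playbook -i hosts setup-access.yml
```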

Then the following commands:

```shell
sudo iptables -F \
  && sudo update-alternatives --set iptables /usr/sbin/iptables-legacy \
  && sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
  && sudo reboot
```

This useful command was formatted from step 2.2.1 of the reference material [here](https://learn.networkchuck.com/courses/take/ad-free-youtube-videos/lessons/26093614-i-built-a-raspberry-pi-super-computer-ft-kubernetes-k3s-cluster-w-rancher).

Then the following on the master node:

`curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -`

Then on the master node, grab its node token:

`sudo cat /var/lib/rancher/k3s/server/node-token`

Then run the following on each of the workers (note that in my case curl was not installed):

- `[your server]` = master node IP
- `YOURTOKEN` = token from above
- `servername` = unique name for the node (I use the hostname)

```shell
curl -sfL https://get.k3s.io | K3S_TOKEN="YOURTOKEN" \
  K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -

# I used
apt install curl -y && curl -sfL https://get.k3s.io |
  K3S_TOKEN="YOURTOKEN" K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -
```

Sadly, this is where my notes ended: although the install worked, all of the system pods were failing, and thus I moved on to the method listed above.