updated post

This commit is contained in:
Solomon Laing 2021-12-21 20:22:12 +10:30
parent 6f4b516bcf
commit e4e1283466


@@ -31,21 +31,21 @@ Both network and hostname can be setup by mounting rootfs and manually editing/a
Example netplan '10-config.yaml':
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [192.168.0.XXX/16]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1, 1.1.1.1]
        search: [mydomain]
```
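If you're editing this on a running system rather than on the mounted rootfs, you can apply and check the config straight away:
```shell
# apply the netplan config (use 'netplan try' for an automatic rollback if you lock yourself out)
sudo netplan apply
# confirm the address landed on the interface
ip a show eth0
```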
Also set the timezone if you want.
```shell
sudo timedatectl set-timezone Australia/Adelaide
```
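If you're unsure of the exact zone name, timedatectl can list the options and confirm the change:
```shell
# search the available zone names
timedatectl list-timezones | grep -i adelaide
# check the current setting took effect
timedatectl
```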
@@ -64,7 +64,7 @@ We must rebuild kernel with updated options so that cgroup_pids is enabled. Hard
Note that the following tools are required for the build: bison, flex, libssl-dev, and bc.
```shell
apt install bison flex libssl-dev bc -y
```
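As a rough sketch of the config change itself (assuming you're inside the kernel source tree with your board's config already in place), the kernel's bundled scripts/config helper can flip the option without a trip through menuconfig:
```shell
# enable the PIDs cgroup controller in the current .config
./scripts/config --enable CONFIG_CGROUP_PIDS
# fill in any newly-exposed options with their defaults
make olddefconfig
# sanity check before kicking off the build
grep CONFIG_CGROUP_PIDS .config
```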
@@ -72,54 +72,53 @@ apt install bison flex libssl-dev bc -y
Run the following on all nodes:
```shell
iptables -F \
&& update-alternatives --set iptables /usr/sbin/iptables-legacy \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
&& reboot

apt update; apt upgrade -y; apt autoremove -y; apt clean; apt install docker.io curl -y
reboot

systemctl start docker
systemctl enable docker
systemctl status docker

# Be sure that the firewall is disabled for ease
ufw disable
```
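After the reboot it's worth confirming the switch to the legacy backend actually took:
```shell
# should point at the legacy alternative
update-alternatives --display iptables | head -n 2
# should report "(legacy)" rather than "(nf_tables)"
iptables --version
```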
Then run the following only on the master node:
```shell
# for master
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --docker
# check it's running
systemctl status k3s
kubectl get nodes
# Get the token from the master; make sure to store it somewhere
cat /var/lib/rancher/k3s/server/node-token
```
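Since K3S_KUBECONFIG_MODE="644" leaves the kubeconfig at /etc/rancher/k3s/k3s.yaml readable, you can also drive the cluster from another machine. A sketch, assuming the master is reachable as kuber@odroid-mc1-1 (substitute your own user, host, and IP):
```shell
# copy the kubeconfig off the master (host and paths are examples)
scp kuber@odroid-mc1-1:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
# the file points at 127.0.0.1, so swap in the master's address
sed -i 's/127.0.0.1/192.168.0.XXX/' ~/.kube/k3s.yaml
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes
```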
Then run the following on the worker nodes, updating the command for each:
```shell
# for workers
# Fill this out ...
curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> K3S_NODE_NAME="odroid-mc1-X" sh -s - --docker
systemctl status k3s-agent
```
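If the agent doesn't come up, tailing its logs is the quickest way to spot a typo in the URL or token:
```shell
# follow the agent logs while it registers with the master
journalctl -u k3s-agent -f
```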
And with that you should be done; check from the master node:
```shell
# Check the node was added on the master
kubectl get nodes
```
Everything should be up and running correctly; it was for me, at least.
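It's also worth checking that the system pods themselves settle into Running, since that's exactly what failed for me with the method at the end of this post:
```shell
# all system pods should reach Running or Completed
kubectl get pods -A -o wide
```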
@@ -139,71 +138,71 @@ Once these have been set up with ip addresses and hostnames (odroid-n2, odroid-m
Either run the following to set up users and access:
```yml
- hosts: all
  become: yes
  tasks:
    - name: create the 'kuber' user
      user: name=kuber append=yes state=present createhome=yes shell=/bin/bash
    - name: allow 'kuber' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: "kuber ALL=(ALL) NOPASSWD: ALL"
        validate: "visudo -cf %s"
    - name: set up authorised keys for the 'kuber' user
      authorized_key: user=kuber key="{{item}}"
      with_file:
        - ~/.ssh/id_rsa.pub
```
Or, if you have already set up users:
```yml
- hosts: all
  become: yes
  tasks:
    - name: set up authorised keys for the 'root' user
      authorized_key: user=root key="{{item}}"
      with_file:
        - ~/.ssh/id_rsa.pub
```
The above can be used with a hosts file such as the following:
```
[masters]
master ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>
[workers]
worker1 ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>
worker2...
...
[all:vars]
ansible_python_interpreter=/usr/bin/python3
```
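With the playbook saved as, say, setup-users.yml (the name is arbitrary) and the inventory above saved as hosts, a quick connectivity test followed by the run looks like this; note that password auth via ansible_ssh_pass needs sshpass installed on the control machine:
```shell
# verify ansible can reach every node
ansible all -i hosts -m ping
# run the playbook against the whole inventory
ansible-playbook -i hosts setup-users.yml
```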
Then run the following commands:
```shell
sudo iptables -F \
&& sudo update-alternatives --set iptables /usr/sbin/iptables-legacy \
&& sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
&& sudo reboot
```
Useful command formatted from step 2.2.1 of the reference material [here](https://learn.networkchuck.com/courses/take/ad-free-youtube-videos/lessons/26093614-i-built-a-raspberry-pi-super-computer-ft-kubernetes-k3s-cluster-w-rancher).
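If you'd rather not SSH into each node by hand, the same commands can be pushed over the inventory with an ad-hoc Ansible call; a sketch using the hosts file above (the reboot will drop the connections, which is expected):
```shell
ansible all -i hosts -b -m shell -a \
  "iptables -F && update-alternatives --set iptables /usr/sbin/iptables-legacy && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy && reboot"
```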
Then the following on the master node:
```shell
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
```
Then on the master node grab its node token:
```shell
sudo cat /var/lib/rancher/k3s/server/node-token
```
@@ -216,11 +215,11 @@ YOURTOKEN = token from above
servername = unique name for the node (I use the hostname)
```shell
curl -sfL https://get.k3s.io | K3S_TOKEN="YOURTOKEN" K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -s -
# I used
apt install curl -y && curl -sfL https://get.k3s.io | K3S_TOKEN="YOURTOKEN" K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -s -
```
Sadly, this is where my notes ended: although the install worked, all of the system pods were failing, so I moved on to the method listed above.