Example netplan '10-config.yaml':

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [192.168.0.XXX/16]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1, 1.1.1.1]
        search: [mydomain]
```
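One way to get that file into place is with a heredoc, sketched below; writing to a local `10-config.yaml` is just for illustration, on the mounted rootfs it belongs under `etc/netplan/`, and `XXX`/`mydomain` are placeholders to fill in.

```shell
# Write the netplan config in one step. On the mounted rootfs this file
# belongs under etc/netplan/; XXX and mydomain are placeholders.
cat > 10-config.yaml <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses: [192.168.0.XXX/16]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.1, 1.1.1.1]
        search: [mydomain]
EOF

# Once the node has booted, the config can be applied with:
# sudo netplan apply
grep -c 'eth0:' 10-config.yaml
```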

Also set the timezone if you want:

```shell
sudo timedatectl set-timezone Australia/Adelaide
```
We must rebuild the kernel with updated options so that cgroup_pids is enabled.

Note that the following tools are required for the build: bison, flex, libssl-dev, and bc.

```shell
apt install bison flex libssl-dev bc -y
```
Run the following on all nodes:

```shell
iptables -F \
  && update-alternatives --set iptables /usr/sbin/iptables-legacy \
  && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
  && reboot

apt update; apt upgrade -y; apt autoremove -y; apt clean; apt install docker.io curl -y
reboot

systemctl start docker
systemctl enable docker
systemctl status docker

# Be sure that the firewall is disabled for ease
ufw disable
```

Then run the following only on the master node:

```shell
# for master
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --docker

# check it's running
systemctl status k3s
kubectl get nodes

# Get the token from the master; make sure to store it somewhere
cat /var/lib/rancher/k3s/server/node-token
```

Then run the following on the worker nodes, updating the command for each:

```shell
# for workers
# Fill this out ...
curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> K3S_NODE_NAME="odroid-mc1-X" sh -s - --docker

systemctl status k3s-agent
```
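Since that command has to be edited for every worker, a small loop can print the per-node variant ready to paste; `MASTER_IP` and `TOKEN` below are hypothetical placeholders standing in for the master's real address and join token.

```shell
# Print one join command per worker instead of hand-editing each time.
# MASTER_IP and TOKEN are placeholders: substitute the master's address
# and the token from /var/lib/rancher/k3s/server/node-token.
MASTER_IP="192.168.0.XXX"
TOKEN="<join_token>"
for i in 1 2 3 4; do
  echo "curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${TOKEN} K3S_NODE_NAME=\"odroid-mc1-${i}\" sh -s - --docker"
done
```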

And thus you should be done; check on the master node:

```shell
# Check the node was added, on the master
kubectl get nodes
```
And all should be up and running correctly; it was for me at least.
Either the following to set up users and access:

```yml
- hosts: all
  become: yes
  tasks:
    - name: create the 'kuber' user
      user: name=kuber append=yes state=present createhome=yes shell=/bin/bash

    - name: allow 'kuber' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: "kuber ALL=(ALL) NOPASSWD: ALL"
        validate: "visudo -cf %s"

    - name: set up authorised keys for the 'kuber' user
      authorized_key: user=kuber key="{{ item }}"
      with_file:
        - ~/.ssh/id_rsa.pub
```

Or, if you have already set up users:

```yml
- hosts: all
  become: yes
  tasks:
    - name: set up authorised keys for the 'root' user
      authorized_key: user=root key="{{ item }}"
      with_file:
        - ~/.ssh/id_rsa.pub
```

The above can be used with a hosts file such as the following:

```
[masters]
master ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>

[workers]
worker1 ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>
worker2...
...

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```
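To tie the inventory and playbook together, here is a sketch using hypothetical filenames: `hosts.ini` for the inventory above and `users.yml` for the playbook; neither name is anything Ansible requires.

```shell
# Save the inventory (the XXX, <user>, <password> placeholders must be
# filled in), then run the playbook against it.
cat > hosts.ini <<'EOF'
[masters]
master ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>

[workers]
worker1 ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>

[all:vars]
ansible_python_interpreter=/usr/bin/python3
EOF

# Then, with Ansible installed:
# ansible-playbook -i hosts.ini users.yml
grep -c 'ansible_host' hosts.ini
```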
Then run the following commands:

```shell
sudo iptables -F \
  && sudo update-alternatives --set iptables /usr/sbin/iptables-legacy \
  && sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
  && sudo reboot
```
This is a useful command, formatted from step 2.2.1 of the reference material [here](https://learn.networkchuck.com/courses/take/ad-free-youtube-videos/lessons/26093614-i-built-a-raspberry-pi-super-computer-ft-kubernetes-k3s-cluster-w-rancher).

Then the following on the master node:

```shell
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
```
Then, on the master node, grab its node token:

```shell
sudo cat /var/lib/rancher/k3s/server/node-token
```
YOURTOKEN = token from above

servername = unique name for node (I use hostname)

```shell
curl -sfL https://get.k3s.io | K3S_TOKEN="YOURTOKEN" K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -

# I used
apt install curl -y && curl -sfL https://get.k3s.io | K3S_TOKEN="YOURTOKEN" K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -
```
Sadly, this is where my notes end: although the install worked, all of the system pods were failing, so I moved on to the method listed above.