updates to theme and code syntax highlighting

Solomon Laing · 2022-11-20 16:45:28 +10:30 · commit d1e7403d19 (parent fc1ac99e68)
5 changed files with 180 additions and 108 deletions
## Intro
I have had a little Kubernetes cluster up and running for some time on a collection of Odroid MC1s and an N2. However, a while back (not so recently now, but for the sake of the story) I attempted to update it and it all went wrong. I got Rancher set up on it, but nothing, and I mean NOTHING, is compiled for armv7. So I gave up. I've since put Linux Mint on the N2 and use it as a home theater PC, which works quite well.
After some time, and lots of forgetting, I came across an auction (for the locals: [Mason Grey Strange](https://mgs.net.au)) that runs once a month selling old and new IT equipment. In it I found a set of HP t620s being sold in lots of 5 for a pittance. Seeing them reminded me of my original cluster, so I bought them to replace it.
## The Setup
Unfortunately, I didn't record the process of setting up Rancher. However, given that it was being set up on standard x86_64 architecture, one could pick any of the myriad guides for setting up Rancher (which I run in a VM on my server) with a k3s cluster (the t620s), which is what I did.
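I didn't keep the exact steps, but the guides I mean generally boil down to running Rancher as a single container in the VM; roughly something like this (the version tag and ports are illustrative, not necessarily what I used):

```shell
# Single-node Rancher in a Docker container on the VM.
# Tag and ports are illustrative; match them to whichever guide you follow.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.6.9
```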
![Current Rancher Dashboard](https://gitlab.inkletblot.com/inkletblot/simple-blog-api/-/raw/master/assets/rancher.png)
## Its Use
For a period of time I didn't set up anything on it. I did integrate it with my GitLab server, but integration was as far as I took it; I didn't bother setting up anything CI/CD related.
However, I quickly realised that using it to replace my existing CI/CD solution for the couple of projects I've got set up would be a fun little undertaking and would probably simplify things for me.
> As an aside, I'm yet to put much thought into security for my server or the applications I run on it. I follow basic best practice; however, my old CI/CD solution broke most of it, and more. So using the k3s cluster was definitely an upgrade there.
As such, that is what I did.
Not only is the API for this website running on it, all my GitLab runners run on it now. The setup was very simple, with most of it being standard k3s/k8s and GitLab integration configuration. All in all it greatly simplified my GitLab setup and I'm very happy with it.
When I started writing this I had a much grander vision for the end result; however, it really isn't that grand. The final solution for the CI/CD can be found [here](). Using helm simplified things greatly.
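I didn't record the exact helm invocation, but the deploy side of a setup like this typically reduces to a single idempotent command. A sketch, assuming a chart directory and an `image.tag` value (both placeholders, not my actual layout):

```shell
# Installs on the first run, upgrades on every run after;
# the image tag comes straight from the pipeline commit SHA.
helm upgrade --install simple-blog-api ./chart \
  --namespace blog --create-namespace \
  --set image.tag="$CI_COMMIT_SHA"
```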
## Extras
I don't have a comprehensive list of the useful things I did while implementing all of this. I did it some time ago and didn't note much down. However, I will try to list some of the bits and pieces I have.
### Deployment.yaml
This is used by helm as part of the CI/CD pipeline for deploying the app (I believe; it's been a while and I could be wrong).
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-blog-api
  labels:
    app: python
spec:
  replicas: 3
  selector:
    matchLabels:
      app: python
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
  template:
    metadata:
      labels:
        app: python
    spec:
      containers:
        - name: python
          image: inkletblot/simple-blog-api:<VERSION>
          ports:
            - containerPort: 5000
          livenessProbe:
            httpGet:
              path: /posts
              port: 5000
            initialDelaySeconds: 2
            periodSeconds: 2
          readinessProbe:
            httpGet:
              path: /posts
              port: 5000
            initialDelaySeconds: 2
            periodSeconds: 2
```
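If you want to poke at the manifest outside the pipeline, it can be applied and checked by hand (substitute a real tag for `<VERSION>` first):

```shell
kubectl apply -f deployment.yaml
kubectl rollout status deployment/simple-blog-api   # waits for the 3 replicas
kubectl get pods -l app=python
```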
### gitlab-ci.yml
This is the gitlab-ci configuration I'm using. It integrates with Docker (as I couldn't get a local registry working with my current setup). I'll be entirely honest here: I don't exactly know how the helm integration works; I believe it works magically in the background using the variables supplied to the pipeline through GitLab.
```yaml
image: docker:20.10.5

stages:

# ...

# name: Production
# url: "$LIVE_SERVER_FQDN"
# before_script:
#   - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y)'
#   - eval $(ssh-agent -s)
#   - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
#   - mkdir -p ~/.ssh
#   - chmod 700 ~/.ssh
#   - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" >> ~/.ssh/config'
# script:
#   - echo 'sed 's/"$CI_REGISTRY_IMAGE".*/ "$CI_REGISTRY_IMAGE":"$CI_COMMIT_SHA"''
#   - ssh -J "$PROD_SERVER_USER"@"$LIVE_SERVER_FQDN" "$PROD_SERVER_USER"@"$PROD_SERVER_LOCAL_HOST_NAME" "cd simple-blog-api && sed -i 's/simple-blog-api.*/ simple-blog-api:"$CI_COMMIT_SHA"\x27/' docker-compose.yml && docker-compose up -d --remove-orphans --force-recreate"

# Helm Deploy
# Currently in use, works with *Magic*

# ...

    devops/simple-blog-api
  environment:
    name: production
```
That is about all I can come up with at the moment. I hope it is useful for someone.
-ink

---
title: "Example"
date: 2021-06-02
draft: false
---
## Test Post
This is an example of rendering a page of mostly styled HTML from markdown.
I'm *hoping* that this works.
```python
# Some python script would look like this
print("hello world.")
```
You can also have code inline like `this`.
[Here](https://www.example.com) is a link for good measure.

Say hello to my little odroid kubernetes cluster project.
Details are: 5 x Odroid MC1s as worker nodes (10GB RAM and 40 cores all up), and 1 x Odroid N2 as master node, plus a 5V 20A power supply and a Netgear GS108.
Got the parts as a Christmas present to myself some years ago with no plan as to what I was going to do with them. While putting it together I thought I could use it to learn Kubernetes, as I've seen others do similar things.
I followed [this](https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04) tutorial loosely and it's all working wonderfully. It was cool to be introduced to Ansible, which I had heard about but not used before. I'm currently developing some apps that I'd like to run on it.
![My odroid MC1 and N2 cluster](https://gitlab.inkletblot.com/inkletblot/simple-blog-api/-/raw/master/assets/odroid-cluster-2.jpg)
## Update Mid 2021
I decided to totally rebuild this, having never been able to get Rancher set up on it. See the details of that [here](https://www.inkletblot.com/projects/k3s-on-odroid-mc1s-a-guide).
-ink
## Update 2021-12-21
I have sadly now decommissioned this little cluster. The five MC1 units are currently unused and the N2 is now the media PC in my bedroom. Not all is sad though: I decommissioned it because I replaced it with a bigger and better setup using old HP t620 thin clients, of which I got 5 for a ridiculously good price at a local IT liquidation auction. See more [here](https://www.inkletblot.com/projects/an-old-new-upgrade-to-kube).
-ink

## Icons, SVGs, Fonts, and Icomoon.
Early in the year, for a project at work, I was tasked with coming up with a solution to deal with icons. We had decided to use the FluentUI icons from Microsoft, but the 2MB font file they provide was a bit much to include in the app, so a solution was required. We wanted our solution to be easily versioned and source controlled.
After some time looking at different methods of processing font files, to either parse out unwanted glyphs or transpose wanted ones to a new file, I realised it was a pointless endeavour and called it quits. At least from what I saw, fonts suck and I don't want anything to do with them. After feeling sorry for myself for a bit, I decided to have a look at how PrimeTek deal with their icons in prime-icons, which we had considered for use but then abandoned.
> An ironic note is that I ended up at the solution PrimeTek use - as far as I can see - but not right away.
### A potential solution
I noticed the use of some clever CSS/font work: creating a tag and prepending it with a specific character of a font, which just happens to be an icon.
The basics of the solution are below.
```scss
@charset "UTF-8";
@font-face {
  font-family: "icomoon";
  /* ... */
}

/* ... */

.my-icons-icon-2:before {
  content: "";
}
```
Then to insert the icon where it is wanted one can simply do the following:
```html
<i class="my-icons-icon-1"></i>
```
And the icon specified as content in the scss above will sit in the place of the `<i>` tag.
### Icomoon
This led me to looking at solutions for converting a collection of SVGs to a singular font file. Microsoft provides SVG copies of all of the Fluent icons on GitHub, so we had easy access to all the icons we wanted in that format. It turns out this is not an easy task to do without a large amount of work. Originally we had wanted an in-house solution to complete this conversion. But alas.
Realising that completing the conversion (on a short timescale, without a lot of work) was totally unrealistic, I started looking at online solutions and came across Icomoon. This was after quite some searching, throughout which I never heard it mentioned. I'm not sure which search term found it, but I know it was a direct search that did, and not an article or forum post that led me to it.
Upon uploading some test icons and downloading the font file, I realised that I had stumbled upon the very same solution that PrimeTek use for prime-icons, give or take some small changes they have made to make things better/easier for their users.
### The end
And with that, I had found our solution.
Icomoon is really a magical black box, but what it produces is brilliant.
Source controlling the processed files and building the font and CSS into an npm package that we store in a private registry worked great, and we have been using the same solution since (approx. 8 months, I think). If ever you need a simple way of producing your own set of easily usable icons for a project, I suggest Icomoon.
-ink.

Both network and hostname can be set up by mounting rootfs and manually editing/a…
Example netplan '10-config.yaml':
```yaml
network:
  version: 2
  renderer: networkd
  # ...
      nameservers:
        addresses: [192.168.0.1, 1.1.1.1]
        search: [mydomain]
```
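Once that file is in place on the node, the config still has to be applied; `netplan try` is the safer option since it rolls back if you lose connectivity:

```shell
sudo netplan try    # applies, then reverts unless you confirm
sudo netplan apply  # applies permanently
```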
Also set the timezone if you want.
```shell
sudo timedatectl set-timezone Australia/Adelaide
```
In my case the following was used:
We must rebuild the kernel with updated options so that cgroup_pids is enabled. Hard…
Note that the following tools are required for the build: bison, flex, libssl-dev, and bc.
```shell
apt install bison flex libssl-dev bc -y
```
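My notes don't cover the build itself, so this is only a rough sketch of the usual flow for flipping on cgroup_pids; the defconfig name and job count here are assumptions, check them against your kernel tree:

```shell
# In the kernel source tree
make odroidxu4_defconfig                      # board defconfig name is a guess
./scripts/config --enable CONFIG_CGROUP_PIDS  # enable the missing option
make -j8 zImage modules dtbs
sudo make modules_install
```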
### The K3S install
Run the following on all nodes:
```shell
iptables -F \
  && update-alternatives --set iptables /usr/sbin/iptables-legacy \
  && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
  # ...

# Be sure that the firewall is disabled for ease
ufw disable
```
Then run the following only on the master node:
```shell
# for master
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --docker

# ...

# Get token from master, make sure to store it somewhere
cat /var/lib/rancher/k3s/server/node-token
```
Then run the following on the worker nodes, updating the command for each:
```shell
# for workers
# Fill this out ...
curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> \
  K3S_NODE_NAME="odroid-mc1-X" sh -s - --docker

systemctl status k3s-agent
```
And thus you should be done; check the master node to see:
```shell
# Check node was added on master
kubectl get nodes
```
And all should be up and running correctly; it was for me at least.
Once these have been set up with IP addresses and hostnames (odroid-n2, odroid-m…
Either the following, to set up users and access:
```yaml
- hosts: all
  become: yes
  tasks:
    # ...
      authorized_key: user=kuber key="{{item}}"
      with_file:
        - ~/.ssh/id_rsa.pub
```
Or, if you already set up users:
```yaml
- hosts: all
  become: yes
  tasks:
    # ...
      authorized_key: user=root key="{{item}}"
      with_file:
        - ~/.ssh/id_rsa.pub
```
The above can be used with a hosts file such as the following:
```
[masters]
master ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>

# ...

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```
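With the playbook and inventory saved, running it is the usual one-liner (the filenames here are placeholders):

```shell
# -i points at the inventory above; the playbook is either of the two snippets
ansible-playbook -i hosts setup-users.yml
```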
Then the following commands:
```shell
sudo iptables -F \
  && sudo update-alternatives --set iptables /usr/sbin/iptables-legacy \
  && sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
  && sudo reboot
```
Useful command, formatted from step 2.2.1 of the reference material [here](https://learn.networkchuck.com/courses/take/ad-free-youtube-videos/lessons/26093614-i-built-a-raspberry-pi-super-computer-ft-kubernetes-k3s-cluster-w-rancher).
Then on the master node grab its node token:
`sudo cat /var/lib/rancher/k3s/server/node-token`
Then run the following on each of the workers (note: in my case curl was not installed):

- [your server] = master node IP
- YOURTOKEN = token from above
- servername = unique name for node (I use the hostname)
```shell
curl -sfL https://get.k3s.io | K3S_TOKEN="YOURTOKEN" \
  K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -

# I used
apt install curl -y && curl -sfL https://get.k3s.io | \
  K3S_TOKEN="YOURTOKEN" K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -
```
Sadly this is where my notes ended as, although the install worked, all of the system pods were failing and thus I moved on to the method listed above.