gitignore should have caught that
Some checks failed: continuous-integration/drone/push (build is failing)
This commit is contained in:
parent c51773bc46
commit a6aef4af05

.gitignore (vendored), 2 changed lines
@@ -5,7 +5,7 @@
 hugo_stats.json

 # Temporary lock file while building
-/.hugo_build.lock
+.hugo_build.lock

 # Files for testing pagination
 **/example copy*
content/posts/podman-quadlets-and-really-tall-trees.md (new file, 291 lines)
@@ -0,0 +1,291 @@

---
title: "Podman, quadlets, and really tall trees"
date: 2026-01-04T16:32:07+10:30
lastmod:
draft: false
---

Over this last Christmas break I had nothing but time, as I was struck down
with covid. All my Christmas and New Year's plans were abruptly cancelled or
moved, and I was bed-ridden for a week. So, I decided to make good on a promise
I made myself some time ago to refresh my homelab. It's something I've been
meaning to do for a while: I had some 13 different VMs, several running only
a single service, many of which could simply have been containers instead. And
now I finally had the time to commit to such an endeavour.

One of the virtual machines in my homelab (I named it bristlecone for its age
and gnarled nature) has been around as long as I've been playing with Linux. At
some point in 2016 I took the old family desktop and installed Ubuntu Server on
it. Originally I only used it as a web server, but as time went on I installed
ownCloud, which I later upgraded to Nextcloud, then I added this service, then
I added that. Before long I wanted more than just a single logical machine to
play with. My solution was to get a new HDD, install Proxmox on it, and then
image the original Ubuntu installation onto a virtual disk to run as a VM. This
was probably a year or so after the initial install, for context. So, for the
past 9 years (8 as a virtual machine) this Ubuntu Server installation has
trudged along. A port forward pointing ports 80 and 443 at this machine has
existed in at least 5 different routers as I've moved around.

Yesterday, I shut it down.

I've managed to pull all but a couple of services in my homelab onto a single
machine, run as [podman
quadlets](https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html),
an ingenious little piece of technology I learnt about recently from a lovely
French Canadian colleague of mine. I had used docker in the past to run
services like Bitwarden (vaultwarden) and Gitlab but, although I'd heard of it,
I'd never used podman, and I had certainly never come across quadlets. So
I dove in, starting with the previously linked documentation (which is
a wondrous resource if you're planning on doing anything like what I've done),
and then with some good old fashioned googling. I figured that I could, as
I pleased, set up quadlets for the services I wanted, and then migrate my old
running instances across to the new containers. And that's basically how it
went down.

I'd like to outline the basic premise, how I went about setting everything up,
and some of the tips and tricks I learnt along the way. I also want to be clear
that my exposure to podman has been almost solely through quadlets. I have had
to interact with the podman CLI somewhat throughout this, but primarily I've
been defining configs for quadlets in files and then using `systemd` to
orchestrate them.

I'm running podman version 5.4.1, which is the version available in Ubuntu
25.04. I have 24.04 installed and will until 26.04 is released, as I want to
tie myself only to LTS releases. However, 24.04 ships podman 4.x.x, which is
missing some substantial feature development, as quadlets are still under
active development. To solve this problem I found and reviewed a script, see
`plucky-pinning.sh` below, which pins podman and a collection of dependencies
to the 25.04 releases.

One of the features that I was particularly interested in was podman quadlets,
as I'm sure is clear. A quadlet is a `systemd` unit which defines a container
or some other piece of podman infrastructure (network, pod, volume, etc).
These unit files can then be symlinked into a set of predefined locations, and
`systemd` can then be used to start, stop, and inspect them. You can even use
`journalctl` to view logs. The primary ways I set up services were through
standalone containers and pods. Pods, which I was under the impression
I needed podman v5+ for, are a way to logically group containers and make it
easier to manage the group of containers supporting the same service. The
general idea, as far as I engaged with it, is that you create
a `my-service.pod` file in which you can specify your network and published
ports, among other things. Again, I only pushed this as far as networking, as
that is all I needed; I'm sure there is more to learn here.

For example, the following is the definition for my
[Planka](https://planka.app/) service that I'm running.

`planka.pod`:
```
[Pod]
PodName=planka
Network=planka.network
PublishPort=1337:1337
```

`planka-server.container`:
```
[Unit]
Description=Planka - Server
Requires=planka-db.service
After=planka-db.service

[Container]
Pod=planka.pod
ContainerName=planka-server
Image=ghcr.io/plankanban/planka:2.0.0-rc.4

Environment=BASE_URL=planka.example.com
# planka-db below is the host name of the db container
Environment=DATABASE_URL=postgresql://<planka-db-user>:<planka-db-user-password>@planka-db:5432/<planka-db-name>
Environment=SECRET_KEY=<64-character-secret-key>

Volume=/<data-location>/favicons:/app/public/favicons
Volume=/<data-location>/user-avatars:/app/public/user-avatars
Volume=/<data-location>/background-images:/app/public/background-images
Volume=/<data-location>/attachments:/app/private/attachments

[Service]
Restart=always
TimeoutStartSec=300

[Install]
WantedBy=default.target
```

`planka-db.container`:
```
[Unit]
Description=Planka - DB

[Container]
Pod=planka.pod
ContainerName=planka-db
Image=docker.io/postgres:16-alpine

Environment=POSTGRES_PASSWORD=<planka-db-user-password>
Environment=POSTGRES_USER=<planka-db-user>
Environment=POSTGRES_DB=<planka-db-name>

Volume=/<data-location>/postgresql:/var/lib/postgresql/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro

...
<homepage-configuration>
...

[Service]
Restart=always
TimeoutStartSec=300

[Install]
WantedBy=default.target
```

`planka.network`:
```
[Unit]
Description=Planka network
After=network-online.target

[Network]
NetworkName=planka-network
Subnet=10.1.0.0/24
Gateway=10.1.0.1
DNS=

[Install]
WantedBy=default.target
```

These files were symlinked with `ln -s <original> <soft-link>` into the
`$HOME/.config/containers/systemd` directory, after which they could be
started with `systemd` commands.
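
For concreteness, here's roughly what that flow looked like for Planka. The
repository path here is hypothetical, and note that a `planka.pod` quadlet
generates a `planka-pod.service` unit:

```bash
# Hypothetical location of the quadlet files in my config repository
QUADLETS="$HOME/homelab/quadlets/planka"

mkdir -p "$HOME/.config/containers/systemd"
ln -s "$QUADLETS"/planka.pod "$HOME/.config/containers/systemd/"
ln -s "$QUADLETS"/planka-server.container "$HOME/.config/containers/systemd/"
ln -s "$QUADLETS"/planka-db.container "$HOME/.config/containers/systemd/"
ln -s "$QUADLETS"/planka.network "$HOME/.config/containers/systemd/"

# Regenerate units from the quadlet files, then start the pod
systemctl --user daemon-reload
systemctl --user start planka-pod.service

# Tail the logs of a single container
journalctl --user -u planka-server.service -f
```

One gotcha worth mentioning: for rootless user services to start at boot
without anyone logged in, the user needs lingering enabled, via
`loginctl enable-linger <user>`.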

While setting all this up I ended up using some commands a lot, for instance
`systemctl --user restart <unit-name>`, so I created a list of aliases to help
with this. These are for `bash`:

```bash
alias \
    usta="systemctl --user start" \
    usto="systemctl --user stop" \
    ures="systemctl --user restart" \
    ustatus="systemctl --user status" \
    ureload="systemctl --user daemon-reload" \
    usvc="systemctl --user --type=service"
```
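
With those loaded, the edit cycle after changing a quadlet file becomes
something like `ureload && ures planka-pod.service`, followed by
`ustatus planka-pod.service` to check that everything came back up.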

One of the services I'm running is [homepage](https://gethomepage.dev/), which
integrates with podman to get container information directly. You can specify
homepage-related data as labels, not dissimilar to how one would do the same
with docker, I believe. For instance, the following is what I had for Planka:

```
Label=homepage.group=Productivity
Label=homepage.name=Planka
Label=homepage.icon=planka.png
Label=homepage.href=planka.example.com
Label=homepage.description="Kanban Board"
```

This makes it very easy to keep track of your services, all in one place, with
each service defining its important data itself.
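
For reference, here's a sketch of how the homepage container itself could be
defined as a quadlet. The image tag, port, and paths are my assumptions rather
than a tested config, and the socket mount is explained in the next section:

```
[Container]
ContainerName=homepage
Image=ghcr.io/gethomepage/homepage:latest
PublishPort=3000:3000
Volume=/<data-location>/homepage:/app/config
# homepage talks to a docker-compatible socket; here the rootless podman
# socket (covered below) is mounted where its docker integration can find it
Volume=/run/user/1000/podman/podman.sock:/var/run/docker.sock:ro
```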

In my setup I had all my containers running in userspace as rootless
containers. The only time this became an issue was when a container needed
access to the podman (or docker) socket, which usually requires root
privileges. Fortunately, podman comes with a solution to this problem.

You can make a userspace version of the socket available with the following
commands:

```bash
systemctl --user enable --now podman.socket

# check with
systemctl --user status podman.socket
```

The socket should then be available to be mounted as in the following:

```
Volume=/run/user/1000/podman/podman.sock:/run/podman/podman.sock
```

In my case my user has the uid 1000.
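
As an aside, I believe quadlet files support systemd specifiers, so the
hard-coded uid can probably be avoided with `%t`, which expands to the user's
runtime directory (`/run/user/<uid>`):

```
Volume=%t/podman/podman.sock:/run/podman/podman.sock
```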

All in all, I am very happy with how this turned out. I have all my original
services set up as quadlets and their data migrated across (with basically no
issues at all; Nextcloud was the worst but nothing broke), and I have a host
of new services to try out. It's going to be very simple for me to add
services in the future, and to manage those I have set up, and I'm very happy
about it.

There are, however, a couple of caveats to this setup. The first is that every
container, and thus every service, is tied to the same machine. Admittedly,
that machine has a low upgrade surface area, so updates and reboots should be
infrequent, but regardless, when the machine goes down, so do ALL of the
services. I see this as an acceptable price to pay for the ease of setup and
administration though. The second is backups. I have set up
a [`restic`](https://github.com/restic/restic) profile for the data
directories of the containers, which I run as a cronjob every week, but, for
instance, an image of a running database is not a backup. I'm going to need to
investigate how to properly automate backups of this data using a method that
doesn't leave room for error. That being said, all of the configuration for
the containers (to get them set up and running) is stored in a git repository,
and most of the data is contained in files one way or another, which is all
being backed up as well. So, I'm not overly worried right now.
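
For the databases specifically, a dump taken just before the weekly `restic`
run is probably the right shape for that automation. A minimal sketch for the
Planka database, with names as per the quadlet above and an assumed output
path:

```bash
# Dump the Planka database to a plain SQL file that restic will pick up,
# rather than relying on a copy of the raw data directory
podman exec planka-db pg_dump -U <planka-db-user> <planka-db-name> \
    > /<data-location>/backups/planka-$(date +%F).sql
```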

The machine that all of my new services ended up on has been named rimu, as an
ode to my kiwi heritage and the fact that, instead of multiple small VMs
hosting separate services, I now have one towering VM to host all my services.

> P.S. In the future, I will migrate the containers to using environment files
> and then publish a sanitised copy of the repository for public perusal.
- plucky-pinning.sh

```bash
#!/bin/bash

# Must be run as root
if [ "$EUID" -ne 0 ]; then
    echo "Please run as root (e.g., sudo $0)"
    exit 1
fi

# Define file paths
PINNING_FILE="/etc/apt/preferences.d/podman-plucky.pref"
SOURCE_LIST="/etc/apt/sources.list.d/plucky.list"

# Write Plucky APT source list
echo "Adding Plucky repo to $SOURCE_LIST..."
echo "deb http://archive.ubuntu.com/ubuntu plucky main universe" > "$SOURCE_LIST"

# Write APT pinning rules
echo "Writing APT pinning rules to $PINNING_FILE..."
cat <<EOF > "$PINNING_FILE"
Package: podman buildah golang-github-containers-common crun libgpgme11t64 libgpg-error0 golang-github-containers-image catatonit conmon containers-storage
Pin: release n=plucky
Pin-Priority: 991

Package: libsubid4 netavark passt aardvark-dns containernetworking-plugins libslirp0 slirp4netns
Pin: release n=plucky
Pin-Priority: 991

Package: *
Pin: release n=plucky
Pin-Priority: 400
EOF

# Update APT cache
echo "Updating APT package list..."
apt update

echo "Plucky pinning setup complete."
```
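
After running it, the pin can be verified with `apt policy` before actually
upgrading; something like:

```bash
# Confirm the plucky candidate wins for podman, then install it
apt policy podman
apt install podman
podman --version   # should now report 5.x
```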