initial commit

This commit is contained in:
Solomon Laing 2026-01-30 06:50:32 +00:00
commit 89eff52d1b
123 changed files with 2848 additions and 0 deletions

README.md
# The Quad
This is my current setup for managing a good portion of my cloud infrastructure.
I tried to do this with ansible in the past but couldn't be bothered finishing
it. This I like better: it's more accessible and straightforward.
More info on [podman
quadlets](https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html).
## Basics
This repo contains all of the container definitions that I use. More can be
added; they just need to be symlinked into the user `systemd` directory,
followed by running `systemctl --user daemon-reload`, at which point the
container should be picked up and started.
```
cd ~/.config/containers/systemd/
ln -s ~/repos/thequad/openhab/openhab.container .
systemctl --user daemon-reload
```
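If a container doesn't come up, the generated service and its logs are the first place to look; a quadlet file named `openhab.container` produces a service named `openhab.service` (the unit name here is just the example from above):

```
# inspect the service systemd generated from the quadlet file
systemctl --user status openhab.service
# and tail its logs
journalctl --user -u openhab.service -e
```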
## Backups
I haven't set this up yet but I'm going to use `restic` to back up the data
directories to both my NAS and to Backblaze B2.
> More to come here...
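As a placeholder, a sketch of what this might look like with `restic` (the repo locations, bucket name, and password file are assumptions, not decisions):

```
#!/bin/sh
# back up container data to the NAS (sftp) and Backblaze B2 -- hypothetical repos
set -eu
export RESTIC_PASSWORD_FILE="$HOME/.config/restic/password"
for repo in "sftp:nas:/backups/containers" "b2:my-bucket:containers"; do
    restic -r "$repo" backup /mnt/data/containers/
    # keep a week of dailies and a month of weeklies, drop the rest
    restic -r "$repo" forget --keep-daily 7 --keep-weekly 4 --prune
done
```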
## Notes
I've decided to store all of the container data in `/mnt/data/containers/`, in a
directory with the container's name. This seemed most straightforward to me.
`plucky-pinning.sh` provides a way to automatically pin podman to the version
released in Ubuntu 25.04 (Plucky). This is useful because the podman v4 -> v5
transition introduces a lot of nice-to-haves (such as Pods), and podman
quadlets are still in active development.
### Root containers
`./drone-agent.container` is not rootless as it needs access to the
podman/docker socket to run containers for actions/pipelines.
```
sudo ln -s ~/repos/thequad/drone/* /etc/containers/systemd/
```
### Rootless containers
If you need the socket in a rootless container, see
[here](https://github.com/gethomepage/homepage/discussions/4013#discussioncomment-12135538).
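In short, the user-level podman socket needs to be enabled, after which it's available at the path the `Volume=` lines in the unit files mount in (this assumes uid 1000):

```
systemctl --user enable --now podman.socket
ls /run/user/1000/podman/podman.sock
```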
If you have issues binding ports: podman cannot create rootless containers that
bind to ports <= 1024 (see
[here](https://github.com/containers/podman/blob/main/rootless.md)). You can run
the following to update the system's settings to lower the limit to whatever you
want (80 here):
```
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```
__NOTE__: this changes your host's settings, not podman's. BE CAREFUL!
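Also note that a `sysctl` set this way doesn't survive a reboot; to persist it, drop the setting into `/etc/sysctl.d/` (the file name is arbitrary):

```
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf
sudo sysctl --system
```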
### Homepage
This requires some additional configuration, namely updating the file in the
container's data directory referenced
[here](https://gethomepage.dev/configs/docker/#using-socket-directly) and
setting up the podman socket for rootless access (see above).
### ARR Suite
Current thoughts for getting media automation set up:
Using: sonarr, radarr, prowlarr, profilarr, qbittorrent, and gluetun (connected
to protonvpn via a wireguard config).
Things to work out:
- networking: I'll want to be able to connect locally; can I do this with the
  vpn handling container network traffic? How should I set up the pod? Will it
  even need a network file, or will the gluetun container be the network?
Some references:
- [podman protonvpn and gluetun](https://beerstra.org/2024/07/12/vpn-enabled-podman-containers/)
- [docs](https://github.com/qdm12/gluetun-wiki/blob/main/setup/providers/protonvpn.md)
UPDATES:
I've set up usenet: I have sabnzbd for downloading and have switched off
qbittorrent, gluetun, and flaresolverr.
### Unifi Network Application
Although it's easiest to migrate with a config export and import, I did need to
factory reset the devices, SSH into them with
`ssh ubnt@<device id>`
(password `ubnt`)
and run
```
set-inform http://192.168.2.61:8080/inform
```
to resolve an issue where the devices were stuck in an infinite 'adopting'
state.
### Photoprism
UUUUUUUUHHHG, have to use CLI to add users, ffs.
```
photoprism users add -p your_password -r guest your_username
```
It does look like PP is the best option for self-hosting though.
[Managing users reference
guide/docs](https://docs.photoprism.app/user-guide/users/cli/#managing-user-accounts)
I'm going to look into Immich, which also looks quite nice.
### Authelia
I have set this up to provide auth if and when I need it. It's pretty simple to
integrate into Caddy; look at the drone config in Caddy for an example. More
complicated services like nextcloud might require more in-depth configuration,
but we'll cross that bridge when we get to it.
I'm not going to include any references here as there's extensive documentation
out there and LLMs have a pretty good handle on it. One thing I will say: this
was quite hard to get set up. I'm not convinced my configuration file is 100%
correct, but it does work. I have lldap running as the identity provider
(I guess), which is where you add users, and postgres as the database for both
lldap and authelia.
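For reference, the Caddy side of that integration is roughly a `forward_auth` block; this is a sketch with assumed hostnames, not a copy of my actual config:

```
# Caddyfile fragment: gate a service behind Authelia
drone.example.com {
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy drone-server:80
}
```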
### Defguard
Mostly working; still need to work out the SSL for the gRPC endpoints.
Also, the gateway needs to run as root for reasons I can't quite work out, but
it's got something to do with creating the network devices for the VPN.
This is the first thing I've set up with proper use of env files, which is nice
though.
Ended up deciding to move this to a VM instead.
I've completely fucked this off; I'm keeping the files for posterity, but even
the one-click install didn't work, and given I have openvpn up and running just
fine I CBF.
### wg-easy
I'm trying to get a VPN set up and chose wg-easy, which is turning out not to
be easy. It's running, but I'm having trouble with the wg part of the whole
thing.
references:
- [docker-compose.yml](https://github.com/wg-easy/wg-easy/blob/master/docker-compose.yml)
- [wg-easy Caddy
docs](https://wg-easy.github.io/wg-easy/Pre-release/examples/tutorials/caddy/)
- [wireguard in podman blog
post](https://www.procustodibus.com/blog/2022/10/wireguard-in-podman/)
I find it annoying that I have to make host changes to make this work, but it
does sort of make sense given how tied to the network stack VPNs must be.
I found my solution [here in the wg-easy
faq](https://wg-easy.github.io/wg-easy/v15.1/faq/); it was a kernel modules
issue. Also some good info and content
[here](https://wg-easy.github.io/wg-easy/v15.0/examples/tutorials/podman-nft/).
I'm hoping I can have the container rootless, but we'll see; it might need to
be rootful given their docs.
for reference:
```
# POST UP
nft add table inet wg_table
nft add chain inet wg_table prerouting { type nat hook prerouting priority 100 \; }
nft add chain inet wg_table postrouting { type nat hook postrouting priority 100 \; }
nft add rule inet wg_table postrouting ip saddr {{ipv4Cidr}} oifname {{device}} masquerade
nft add rule inet wg_table postrouting ip6 saddr {{ipv6Cidr}} oifname {{device}} masquerade
nft add chain inet wg_table input { type filter hook input priority 0 \; policy accept \; }
nft add rule inet wg_table input udp dport {{port}} accept
nft add rule inet wg_table input tcp dport {{uiPort}} accept
nft add chain inet wg_table forward { type filter hook forward priority 0 \; policy accept \; }
nft add rule inet wg_table forward iifname "wg0" accept
nft add rule inet wg_table forward oifname "wg0" accept
# POST DOWN
nft delete table inet wg_table
```
seems to have done the trick, along with the kernel modules :D

[Unit]
Description=Authelia - DB
[Container]
Pod=authelia.pod
ContainerName=authelia-db
Image=docker.io/library/postgres:17.2-bookworm
# Environment=POSTGRES_PASSWORD=
# Environment=POSTGRES_USER=
# Environment=POSTGRES_DB=
EnvironmentFile=/mnt/data/containers/authelia/.env.db
Volume=/mnt/data/containers/authelia/postgresql:/var/lib/postgresql/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
# health check
HealthCmd=pg_isready -U pguser -d general
HealthInterval=5s
HealthRetries=3
HealthStartPeriod=15s
HealthTimeout=30s
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Authelia - Server
After=network-online.target
[Container]
Pod=authelia.pod
ContainerName=authelia-server
Image=docker.io/authelia/authelia:latest
AutoUpdate=registry
# all secrets and config need to be added to configuration.yml
Volume=/mnt/data/containers/authelia/config:/config
Label=homepage.group=Tech
Label=homepage.name=Authelia
Label=homepage.icon=authelia.png
Label=homepage.href=https://auth.inkletblot.com
Label=homepage.description="Auth Provider"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

authelia/authelia.network
[Unit]
Description=Authelia network
After=network-online.target
[Network]
NetworkName=authelia-network
Subnet=10.8.0.0/24
Gateway=10.8.0.1
DNS=
[Install]
WantedBy=default.target

authelia/authelia.pod
[Pod]
Network=authelia.network
PodName=authelia
# Authelia frontend
PublishPort=9091:9091
# LLDAP frontend
PublishPort=17170:17170

[Unit]
Description=LLDAP - Server
[Container]
Pod=authelia.pod
ContainerName=lldap-server
Image=docker.io/lldap/lldap:stable
# Environment=GID=
# Environment=UID=
# Environment=TZ=
# Environment=LLDAP_LDAP_BASE_DN=
# Environment=LLDAP_DATABASE_URL=
# Environment=LLDAP_LDAP_USER_EMAIL=
# Environment=LLDAP_LDAP_USER_PASS=
# Environment=LLDAP_JWT_SECRET=
# Environment=LLDAP_KEY_SEED=
EnvironmentFile=/mnt/data/containers/authelia/.env.lldap
# health check
HealthCmd=/app/lldap healthcheck
HealthInterval=30s
HealthRetries=3
HealthStartPeriod=15s
HealthTimeout=30s
Label=homepage.group=Tech
Label=homepage.name=LLDAP
Label=homepage.icon=lldap.png
Label=homepage.href=http://lldap.forest:17170
Label=homepage.description="Authelia's IDP"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

caddy/caddy.container
[Unit]
Description=Caddy
After=network-online.target
[Container]
Pod=caddy.pod
ContainerName=caddy
Image=docker.io/caddy:2.11
AutoUpdate=registry
Volume=/mnt/data/containers/caddy/config:/config
Volume=/mnt/data/containers/caddy/data:/data
Volume=/mnt/data/containers/caddy/conf:/etc/caddy
# for static site files
Volume=/mnt/data/containers/caddy/srv:/srv
Label=homepage.group=Misc.
Label=homepage.name=Caddy
Label=homepage.icon=caddy.png
Label=homepage.description="Web Server"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

caddy/caddy.network
[Unit]
Description=Caddy network
After=network-online.target
[Network]
NetworkName=caddy-network
Subnet=10.36.0.0/24
Gateway=10.36.0.1
DNS=
[Install]
WantedBy=default.target

caddy/caddy.pod
[Pod]
PodName=caddy
Network=host
# Network=caddy.network
# PublishPort=80:80
# PublishPort=443:443
# PublishPort=443:443/udp
# PublishPort=2019:2019

calibre/calibre.container
[Unit]
Description=Calibre - Server
[Container]
Pod=calibre.pod
ContainerName=calibre
Image=lscr.io/linuxserver/calibre-web:latest
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
EnvironmentFile=/mnt/data/containers/calibre/.env.calibre
Volume=/mnt/data/containers/calibre/config:/config
Volume=/mnt/data/containers/calibre/library:/books
Label=homepage.group=Life
Label=homepage.name=Calibre
Label=homepage.icon=calibre.png
Label=homepage.href=https://books.inkletblot.com
Label=homepage.description="Books"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

calibre/calibre.network
[Unit]
Description=Calibre network
After=network-online.target
[Network]
NetworkName=calibre-network
[Install]
WantedBy=default.target

calibre/calibre.pod
[Pod]
Network=calibre.network
PodName=calibre
PublishPort=8338:8083

[Unit]
Description=Cryptgeon - Redis
[Container]
Pod=cryptgeon.pod
ContainerName=cryptgeon-redis
Image=docker.io/library/redis:latest
AutoUpdate=registry
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=Cryptgeon - Server
After=cryptgeon-redis.service
Wants=cryptgeon-redis.service
[Container]
Pod=cryptgeon.pod
ContainerName=cryptgeon
Image=docker.io/cupcakearmy/cryptgeon:latest
AutoUpdate=registry
# Environment=SIZE_LIMIT=
# Environment=REDIS=
EnvironmentFile=/mnt/data/containers/cryptgeon/.env.cryptgeon
Label=homepage.group=Tech
Label=homepage.name=Cryptgeon
Label=homepage.href=https://ots.inkletblot.com
Label=homepage.description="Like Privnote"
[Service]
Restart=always
RestartSec=5
StartLimitBurst=5
[Install]
WantedBy=default.target

[Unit]
Description=Cryptgeon network
After=network-online.target
[Network]
NetworkName=cryptgeon-network
Subnet=10.43.0.0/24
Gateway=10.43.0.1
[Install]
WantedBy=default.target

cryptgeon/cryptgeon.pod
[Pod]
Network=cryptgeon.network
PodName=cryptgeon
PublishPort=3080:8000

defguard/UNUSED
I had some issues with this that I couldn't overcome. I'm going to move to a VM instead and use the one-and-done script to set it up.

[Unit]
Description=Defguard - Core
After=network-online.target
[Container]
Pod=defguard.pod
ContainerName=defguard-core
Image=ghcr.io/defguard/defguard:latest
AutoUpdate=registry
EnvironmentFile=/mnt/data/containers/defguard/.env
Volume=/mnt/data/containers/defguard/rsakey.pem:/keys/rsakey.pem
Volume=/mnt/data/containers/defguard/ca.crt:/keys/ca.crt
Volume=/mnt/data/containers/defguard/core.crt:/keys/core.crt
Volume=/mnt/data/containers/defguard/core.key:/keys/core.key
Label=homepage.group=Tech
Label=homepage.name="Defguard Core"
Label=homepage.icon=defguard.png
Label=homepage.href=https://guard.inkletblot.com
Label=homepage.description="VPN"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

[Unit]
Description=Defguard - DB
[Container]
Pod=defguard.pod
ContainerName=defguard-db
Image=docker.io/postgres:17-alpine
EnvironmentFile=/mnt/data/containers/defguard/.env
Volume=/mnt/data/containers/defguard/postgresql:/var/lib/postgresql/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
# health check
HealthCmd=pg_isready -U defguarduser -d defguard
HealthInterval=5s
HealthRetries=3
HealthStartPeriod=15s
HealthTimeout=30s
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

# THIS IS A ROOT CONTAINER
# Must be configured AFTER core.
[Unit]
Description=Defguard - Gateway
After=network-online.target
[Container]
ContainerName=defguard-gateway
Image=ghcr.io/defguard/gateway:latest
AutoUpdate=registry
Network=host
AddCapability=NET_ADMIN
EnvironmentFile=/mnt/data/containers/defguard/.env
Environment=DEFGUARD_LOG_LEVEL=debug
Volume=/mnt/data/containers/defguard/ca.crt:/ca.crt
Label=homepage.group=Misc.
Label=homepage.name="Defguard Gateway"
Label=homepage.icon=defguard.png
Label=homepage.description="VPN Gateway"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

[Unit]
Description=Defguard - Proxy
After=network-online.target
[Container]
Pod=defguard.pod
ContainerName=defguard-proxy
Image=ghcr.io/defguard/defguard-proxy:latest
AutoUpdate=registry
Environment=DEFGUARD_PROXY_GRPC_CERT=/ca/proxy.crt
Environment=DEFGUARD_PROXY_GRPC_KEY=/ca/proxy.key
Volume=/mnt/data/containers/defguard/proxy.crt:/ca/proxy.crt
Volume=/mnt/data/containers/defguard/proxy.key:/ca/proxy.key
Label=homepage.group=Tech
Label=homepage.name="Defguard Proxy"
Label=homepage.icon=defguard.png
Label=homepage.href=https://enroll.inkletblot.com
Label=homepage.description="VPN Enrollment"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

defguard/defguard.network
[Unit]
Description=Defguard network
After=network-online.target
[Network]
NetworkName=defguard-network
Subnet=10.98.0.0/24
Gateway=10.98.0.1
DNS=
[Install]
WantedBy=default.target

defguard/defguard.pod
[Pod]
Network=defguard.network
PodName=defguard
# core
# frontend (administration interface)
PublishPort=9876:8000
# gRPC
PublishPort=50055:50055
# proxy
# frontend (enrollment service)
PublishPort=8765:8080
# gRPC
PublishPort=50051:50051

[Unit]
Description=Drone Docker Agent
Requires=drone-server.container
After=drone-server.container
[Container]
Pod=drone.pod
ContainerName=drone-docker-agent
Image=docker.io/drone/drone-runner-docker:1
# Environment=DRONE_RPC_PROTO=
# Environment=DRONE_RPC_HOST=
# Environment=DRONE_RPC_SECRET=
# Environment=DRONE_RUNNER_CAPACITY=
# Environment=DRONE_RUNNER_NAME=
# Environment=DRONE_UI_USERNAME=
# Environment=DRONE_UI_PASSWORD=
# Environment=DRONE_HTTP_BIND=
EnvironmentFile=/mnt/data/containers/drone/.env.drone-docker-agent
Volume=/run/user/1000/podman/podman.sock:/var/run/docker.sock
Label=homepage.group=Misc.
Label=homepage.name=Drone Docker Agent
Label=homepage.icon=drone.png
Label=homepage.href=http://192.168.2.61:3010
Label=homepage.description="CI/CD docker agent"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Drone Server
[Container]
Pod=drone.pod
ContainerName=drone-server
Image=docker.io/drone/drone:2
# Environment=DRONE_SERVER_HOST=
# Environment=DRONE_RPC_SECRET=
# Environment=DRONE_SERVER_PROTO=
# Environment=DRONE_GITEA_SERVER=
# Environment=DRONE_GITEA_CLIENT_ID=
# Environment=DRONE_GITEA_CLIENT_SECRET=
EnvironmentFile=/mnt/data/containers/drone/.env.drone-server
Volume=/mnt/data/containers/drone/server:/data
HealthCmd=nc -z 127.0.0.1 80
Label=homepage.group=Tech
Label=homepage.name=Drone
Label=homepage.icon=drone.png
Label=homepage.href=https://drone.inkletblot.com
Label=homepage.description="CI/CD"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Drone SSH Agent
Requires=drone-server.container
After=drone-server.container
[Container]
Pod=drone.pod
ContainerName=drone-ssh-agent
Image=docker.io/drone/drone-runner-ssh:1
# Environment=DRONE_RPC_PROTO=
# Environment=DRONE_RPC_HOST=
# Environment=DRONE_RPC_SECRET=
# Environment=DRONE_RUNNER_CAPACITY=
# Environment=DRONE_RUNNER_NAME=
# Environment=DRONE_UI_USERNAME=
# Environment=DRONE_UI_PASSWORD=
EnvironmentFile=/mnt/data/containers/drone/.env.drone-ssh-agent
Label=homepage.group=Misc.
Label=homepage.name=Drone SSH Agent
Label=homepage.icon=drone.png
Label=homepage.href=http://192.168.2.61:3000
Label=homepage.description="CI/CD SSH agent"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

drone/drone.network
[Unit]
Description=Drone network
After=network-online.target
[Network]
NetworkName=drone-network
Subnet=10.3.0.0/24
Gateway=10.3.0.1
DNS=
[Install]
WantedBy=default.target

drone/drone.pod
[Pod]
Network=drone.network
PodName=drone
PublishPort=8980:80
PublishPort=3010:3010
PublishPort=3020:3000

[Unit]
Description=Firefly - DB
[Container]
Pod=firefly.pod
ContainerName=firefly-db
Image=docker.io/mariadb:latest
AutoUpdate=registry
# Persistent volumes
Volume=/mnt/data/containers/firefly/mariadb:/var/lib/mysql
# Environment variables
# Environment=MARIADB_USER=
# Environment=MARIADB_DATABASE=
# Environment=MARIADB_PASSWORD=
# Environment=MARIADB_ROOT_PASSWORD=
EnvironmentFile=/mnt/data/containers/firefly/.env.firefly-db
# Health monitoring
HealthCmd=healthcheck.sh --connect
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s
# Other
UserNS=keep-id:uid=999,gid=999
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Firefly - Server
Requires=firefly-db.service
After=firefly-db.service
[Container]
Pod=firefly.pod
ContainerName=firefly-server
Image=docker.io/fireflyiii/core:latest
# Environment=APP_KEY=
# Environment=DB_HOST=
# Environment=DB_PORT=
# Environment=DB_CONNECTION=
# Environment=DB_DATABASE=
# Environment=DB_PASSWORD=
# Environment=DB_USERNAME=
# Environment=FORCE_HTTPS=
# Environment=TRUSTED_PROXIES=
# Environment=DEFAULT_LOCALE=
EnvironmentFile=/mnt/data/containers/firefly/.env.firefly-server
Volume=/mnt/data/containers/firefly/data:/var/www/html/storage/upload
Volume=/usr/lib/locale/locale-archive:/usr/lib/locale/locale-archive:Z
Label=homepage.group=Life
Label=homepage.name="Firefly"
Label=homepage.icon=firefly.png
Label=homepage.href=http://firefly.forest
Label=homepage.description="Budgeting"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

firefly/firefly.network
[Unit]
Description=Firefly network
After=network-online.target
[Network]
NetworkName=firefly-network
Subnet=10.13.0.0/24
Gateway=10.13.0.1
DNS=
[Install]
WantedBy=default.target

firefly/firefly.pod
[Pod]
Network=firefly.network
PodName=firefly
PublishPort=8342:8080

gitea/gitea-db.container
[Unit]
Description=Gitea DB Server
[Container]
Pod=gitea.pod
ContainerName=gitea-db
Image=docker.io/library/postgres:17.2-bookworm
# Environment=POSTGRES_PASSWORD=
# Environment=POSTGRES_USER=
# Environment=POSTGRES_DB=
EnvironmentFile=/mnt/data/containers/gitea/.env.gitea-db
Volume=/mnt/data/containers/gitea/postgresql:/var/lib/postgresql/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Gitea Server
Requires=gitea-db.service
After=gitea-db.service
[Container]
Pod=gitea.pod
ContainerName=gitea-server
Image=docker.io/gitea/gitea:1.22.4
# Environment=USER_ID=
# Environment=USER_GID=
# Environment=DB_TYPE=
# Environment=DB_HOST=
# Environment=DB_NAME=
# Environment=DB_PASSWD=
# Environment=DB_USER=
EnvironmentFile=/mnt/data/containers/gitea/.env.gitea-server
Volume=/mnt/data/containers/gitea/data:/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
HealthCmd=nc -z 127.0.0.1 5432
Label=homepage.group=Tech
Label=homepage.name=Gitea
Label=homepage.icon=gitea.png
Label=homepage.href=https://git.inkletblot.com
Label=homepage.description="Version Control"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

gitea/gitea.network
[Unit]
Description=Gitea network
After=network-online.target
[Network]
NetworkName=gitea-network
Subnet=10.1.0.0/24
Gateway=10.1.0.1
DNS=
[Install]
WantedBy=default.target

gitea/gitea.pod
[Pod]
Network=gitea.network
PodName=gitea
PublishPort=3000:4000
PublishPort=6122:22

glance/glance.container
[Unit]
Description=Glance Dashboard
[Container]
ContainerName=glance
Pod=glance.pod
Image=docker.io/glanceapp/glance:latest
AutoUpdate=registry
Volume=/mnt/data/containers/glance/config:/app/config:Z
Volume=/mnt/data/containers/glance/assets:/app/assets:Z
Volume=/etc/localtime:/etc/localtime:ro
Volume=/run/user/1000/podman/podman.sock:/run/podman/podman.sock
Label=homepage.group=Productivity
Label=homepage.name=Glance
Label=homepage.icon=glance.png
Label=homepage.href=https://dashboard.inkletblot.com
Label=homepage.description="Glance Dashboard"
[Service]
Restart=always
TimeoutStartSec=900
[Install]
WantedBy=default.target

glance/glance.network
[Unit]
Description=Glance network
After=network-online.target
[Network]
NetworkName=glance-network
[Install]
WantedBy=default.target

glance/glance.pod
[Pod]
PodName=glance
Network=glance.network
PublishPort=8195:8080

[Unit]
Description=Homepage Dashboard
Requires=podman.socket
After=podman.socket
[Container]
ContainerName=homepage
Pod=homepage.pod
Image=ghcr.io/gethomepage/homepage:latest
AutoUpdate=registry
# Can't be bothered with env file for this...
Environment=HOMEPAGE_ALLOWED_HOSTS=*
Volume=/mnt/data/containers/homepage/data:/app/config:Z
Volume=/mnt/data/containers/homepage/data/images:/app/public/images:Z
Volume=/run/user/1000/podman/podman.sock:/run/podman/podman.sock
# for resource usage
Volume=/mnt/audio:/mnt/audio:ro
Volume=/mnt/video:/mnt/video:ro
Volume=/mnt/photo:/mnt/photo:ro
Volume=/mnt/data:/mnt/data:ro
Volume=/mnt/backup:/mnt/backup:ro
SecurityLabelDisable=true
[Service]
Restart=always
TimeoutStartSec=900
[Install]
WantedBy=default.target

homepage/homepage.network
[Unit]
Description=Homepage network
After=network-online.target
[Network]
NetworkName=homepage-network
Subnet=10.26.0.0/24
Gateway=10.26.0.1
DNS=
[Install]
WantedBy=default.target

homepage/homepage.pod
[Pod]
PodName=homepage
Network=homepage.network
PublishPort=8030:3000

[Unit]
Description=Immich - DB
Wants=network-online.target
After=network-online.target
[Container]
Pod=immich.pod
ContainerName=immich-db
Image=ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:bcf63357191b76a916ae5eb93464d65c07511da41e3bf7a8416db519b40b1c23
ShmSize=128mb
# Environment=POSTGRES_PASSWORD=
# Environment=POSTGRES_USER=
# Environment=POSTGRES_DB=
# Environment=DB_STORAGE_TYPE=
EnvironmentFile=/mnt/data/containers/immich/.env.immich-db
Volume=/mnt/data/containers/immich/postgresql:/var/lib/postgresql/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
HealthCmd=pg_isready -U immichuser -d immich
HealthInterval=5s
HealthRetries=3
HealthStartPeriod=15s
HealthTimeout=30s
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=Immich - Machine Learning
[Container]
Pod=immich.pod
Image=ghcr.io/immich-app/immich-machine-learning:v2.4.1
ContainerName=immich-ml
Volume=/mnt/data/containers/immich/modelcache:/cache
[Service]
Restart=always
TimeoutStartSec=900
SuccessExitStatus=0 143
[Install]
WantedBy=default.target

[Unit]
Description=Immich - Redis
[Container]
Pod=immich.pod
ContainerName=immich-redis
Image=docker.io/valkey/valkey:9@sha256:fb8d272e529ea567b9bf1302245796f21a2672b8368ca3fcb938ac334e613c8f
AutoUpdate=registry
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=Immich - Server
Wants=immich-db.service
After=immich-db.service
After=immich-redis.service
After=immich-ml.service
[Container]
Pod=immich.pod
Image=ghcr.io/immich-app/immich-server:v2.4.1
ContainerName=immich-server
# Environment=TZ=
# Environment=DB_USERNAME=
# Environment=DB_PASSWORD=
# Environment=DB_DATABASE_NAME=
# Environment=DB_HOSTNAME=
# Environment=DB_PORT=
# Environment=REDIS_HOSTNAME=
# Environment=REDIS_PORT=
EnvironmentFile=/mnt/data/containers/immich/.env.immich-server
Volume=/mnt/photo/Upload:/data
Volume=/mnt/photo/Library/:/mnt/Library:ro
Volume=/etc/localtime:/etc/localtime:ro
Label=homepage.group=Documents/Backup
Label=homepage.name=Immich
Label=homepage.icon=immich.png
Label=homepage.href=https://immich.inkletblot.com
Label=homepage.description="Photo Library"
[Service]
Restart=always
TimeoutStartSec=900
SuccessExitStatus=0 143
[Install]
WantedBy=default.target

immich/immich.network
[Unit]
Description=Immich network
After=network-online.target
[Network]
NetworkName=immich-network
Subnet=10.16.0.0/24
Gateway=10.16.0.1
DNS=
[Install]
WantedBy=default.target

immich/immich.pod
[Pod]
Network=immich.network
PodName=immich
PublishPort=2283:2283

[Unit]
Description=Jellyfin
Wants=network-online.target
After=network-online.target
[Container]
Image=docker.io/jellyfin/jellyfin:latest
AutoUpdate=registry
ContainerName=jellyfin
EnvironmentFile=/mnt/data/containers/jellyfin/.env.jellyfin
# due to migrating an existing installation the following is required
# see https://jellyfin.org/docs/general/administration/migrate/
Volume=/mnt/data/containers/jellyfin/cache:/var/cache/jellyfin
Volume=/mnt/data/containers/jellyfin/config:/etc/jellyfin
Volume=/mnt/data/containers/jellyfin/data:/var/lib/jellyfin
Volume=/mnt/data/containers/jellyfin/log:/var/log/jellyfin
# these need to match the source system, from the fstab:
# <nas ip or hostname>:/video /mnt/media nfs defaults 0 1
# <nas ip or hostname>:/audio /mnt/music nfs defaults 0 1
# <nas ip or hostname>:/photo /mnt/camera nfs defaults 0 1
Volume=/mnt/video:/mnt/media
Volume=/mnt/audio:/mnt/music
Volume=/mnt/photo:/mnt/camera
PublishPort=8096:8096
Label=homepage.group=Media
Label=homepage.name=Jellyfin
Label=homepage.icon=jellyfin.png
Label=homepage.href=https://jellyfin.inkletblot.com
Label=homepage.description="Stream Media"
[Service]
Restart=always
TimeoutStartSec=900
SuccessExitStatus=0 143
[Install]
WantedBy=default.target

koel/UNUSED
This looked really good, and is very popular, but also seemingly totally unknown; I can't find much about it on Google.
Also, although the UI is nice, the lack of feedback about syncing etc. is a no-go for me.

koel/koel-db.container
[Unit]
Description=Koel - DB
[Container]
Pod=koel.pod
ContainerName=koel-db
Image=docker.io/mariadb:latest
AutoUpdate=registry
# Persistent volumes
Volume=/mnt/data/containers/koel/mariadb:/var/lib/mysql
# Environment variables
# Environment=MARIADB_USER=
# Environment=MARIADB_DATABASE=
# Environment=MARIADB_PASSWORD=
# Environment=MARIADB_ROOT_PASSWORD=
EnvironmentFile=/mnt/data/containers/koel/.env.koel-db
# Health monitoring
HealthCmd=healthcheck.sh --connect
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s
# Other
UserNS=keep-id:uid=999,gid=999
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Koel - Server
Requires=koel-db.service
After=koel-db.service
[Container]
Pod=koel.pod
ContainerName=koel-server
Image=docker.io/phanan/koel
# Environment=APP_KEY=
# Environment=DB_HOST=
# Environment=DB_DATABASE=
# Environment=DB_PASSWORD=
# Environment=DB_USERNAME=
# Environment=FORCE_HTTPS=
EnvironmentFile=/mnt/data/containers/koel/.env.koel-server
Volume=/mnt/data/containers/koel/image_storage:/var/www/html/public/img/storage
Volume=/mnt/data/containers/koel/search_index:/var/www/html/storage/search-indexes
Volume=/mnt/audio/Sorted:/music
Label=homepage.group=Media
Label=homepage.name=Koel
Label=homepage.icon=koel.png
Label=homepage.href=https://koel.inkletblot.com
Label=homepage.description="Music Streaming"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

koel/koel.network
[Unit]
Description=Koel network
After=network-online.target
[Network]
NetworkName=koel-network
Subnet=10.12.0.0/24
Gateway=10.12.0.1
DNS=
[Install]
WantedBy=default.target

koel/koel.pod
[Pod]
Network=koel.network
PodName=koel
PublishPort=8332:80

[Unit]
Description=Mealie - DB
[Container]
Pod=mealie.pod
ContainerName=mealie-db
Image=docker.io/postgres:17
# Environment=POSTGRES_PASSWORD=
# Environment=POSTGRES_USER=
# Environment=POSTGRES_DB=
EnvironmentFile=/mnt/data/containers/mealie/.env.mealie-db
Volume=/mnt/data/containers/mealie/postgresql:/var/lib/postgresql/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
HealthCmd=pg_isready -U mealieuser -d mealie
HealthInterval=5s
HealthRetries=3
HealthStartPeriod=15s
HealthTimeout=30s
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Mealie - Server
[Container]
Pod=mealie.pod
ContainerName=mealie-server
Image=ghcr.io/mealie-recipes/mealie:v3.9.1
# Environment=ALLOW_SIGNUP=
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
# Environment=BASE_URL=
# Database
# Environment=DB_ENGINE=
# Environment=POSTGRES_USER=
# Environment=POSTGRES_PASSWORD=
# Environment=POSTGRES_SERVER=
# Environment=POSTGRES_PORT=
# Environment=POSTGRES_DB=
# SMTP
# Environment=SMTP_HOST=
# Environment=SMTP_PORT=
# Environment=SMTP_FROM_NAME=
# Environment=SMTP_AUTH_STRATEGY=
# Environment=SMTP_FROM_EMAIL=
# Environment=SMTP_USER=
# Environment=SMTP_PASSWORD=
EnvironmentFile=/mnt/data/containers/mealie/.env.mealie-server
Volume=/mnt/data/containers/mealie/data:/app/data
Label=homepage.group=Life
Label=homepage.name=Mealie
Label=homepage.icon=mealie.png
Label=homepage.href=https://mealie.inkletblot.com
Label=homepage.description="Food, Glorious Food!"
[Service]
Restart=always
TimeoutStartSec=900
[Install]
WantedBy=default.target

mealie/mealie.network
[Unit]
Description=Mealie network
After=network-online.target
[Network]
NetworkName=mealie-network
Subnet=10.15.0.0/24
Gateway=10.15.0.1
DNS=
[Install]
WantedBy=default.target

mealie/mealie.pod
[Pod]
Network=mealie.network
PodName=mealie
PublishPort=9925:9000

memos/memos.container
[Unit]
Description=Memos
[Container]
ContainerName=memos
Image=docker.io/neosmemo/memos:stable
PublishPort=5230:5230
# Environment=MEMOS_MODE=
# Environment=MEMOS_ADDR=
# Environment=MEMOS_PORT=
# Environment=MEMOS_DATA=
EnvironmentFile=/mnt/data/containers/memos/.env.memos
Volume=/mnt/data/containers/memos/data:/var/opt/memos
Label=homepage.group=Productivity
Label=homepage.name=Memos
Label=homepage.icon=memos.png
Label=homepage.href=https://memos.inkletblot.com
Label=homepage.description="Note Taking"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Monica - DB
[Container]
Pod=monica.pod
ContainerName=monica-db
Image=docker.io/mariadb:11.8
AutoUpdate=registry
# Persistent volumes
Volume=/mnt/data/containers/monica/mariadb:/var/lib/mysql
# Environment variables
# Environment=MARIADB_USER=
# Environment=MARIADB_DATABASE=
# Environment=MARIADB_PASSWORD=
# Environment=MARIADB_ROOT_PASSWORD=
EnvironmentFile=/mnt/data/containers/monica/.env.monica-db
# Health monitoring
HealthCmd=healthcheck.sh --connect
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s
# Other
UserNS=keep-id:uid=999,gid=999
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Monica - Server
Requires=monica-db.service
After=monica-db.service
[Container]
Pod=monica.pod
ContainerName=monica-server
Image=docker.io/monica
# Environment=APP_ENV=
# Environment=APP_KEY=
# Environment=DB_HOST=
# Environment=DB_DATABASE=
# Environment=DB_PASSWORD=
# Environment=DB_USERNAME=
# Environment=LOG_CHANNEL=
# Environment=CACHE_DRIVER=
# Environment=SESSION_DRIVER=
# Environment=QUEUE_DRIVER=
EnvironmentFile=/mnt/data/containers/monica/.env.monica-server
Volume=/mnt/data/containers/monica/data:/var/www/html/storage
Label=homepage.group=Life
Label=homepage.name=Monica
Label=homepage.icon=monica.png
Label=homepage.href=http://monica.forest
Label=homepage.description="CRM your social life"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

monica/monica.network
[Unit]
Description=Monica network
After=network-online.target
[Network]
NetworkName=monica-network
Subnet=10.11.0.0/24
Gateway=10.11.0.1
DNS=
[Install]
WantedBy=default.target

monica/monica.pod
[Pod]
Network=monica.network
PodName=monica
PublishPort=8232:80

[Unit]
Description=Navidrome
[Container]
ContainerName=navidrome
Image=docker.io/deluan/navidrome:latest
AutoUpdate=registry
# Environment=ND_LOGLEVEL=
# Environment=ND_ENABLEINSIGHTSCOLLECTOR=
# Environment=ND_RECENTLYADDEDBYMODTIME=
# Environment=ND_LASTFM_ENABLED=
# Environment=ND_AUTOIMPORTPLAYLISTS=
# Environment=ND_ENABLESHARING=
EnvironmentFile=/mnt/data/containers/navidrome/.env.navidrome
PublishPort=4533:4533
Volume=/mnt/data/containers/navidrome/data:/data
Volume=/mnt/audio/Sorted:/music:ro
Volume=/mnt/audio/Playlists:/playlists:ro
Label=homepage.group=Media
Label=homepage.name=Navidrome
Label=homepage.icon=navidrome.png
Label=homepage.href=https://navidrome.inkletblot.com
Label=homepage.description="Music Streaming"
[Service]
Restart=always
TimeoutStartSec=900
[Install]
WantedBy=default.target

[Unit]
Description=Nextcloud - DB
Wants=network-online.target
After=network-online.target
[Container]
Pod=nextcloud.pod
ContainerName=nextcloud-db
Image=docker.io/mariadb:11.8
AutoUpdate=registry
# Persistent volumes
Volume=/mnt/data/containers/nextcloud/db:/var/lib/mysql
# Environment variables
# Environment=MARIADB_USER=
# Environment=MARIADB_DATABASE=
# Environment=MARIADB_PASSWORD=
# Environment=MARIADB_ROOT_PASSWORD=
EnvironmentFile=/mnt/data/containers/nextcloud/.env.nextcloud-db
# Health monitoring
HealthCmd=healthcheck.sh --connect
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s
# Other
UserNS=keep-id:uid=999,gid=999
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=Nextcloud - Redis
[Container]
Pod=nextcloud.pod
ContainerName=nextcloud-redis
Image=docker.io/library/redis:latest
AutoUpdate=registry
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=Nextcloud - Server
Requires=nextcloud-db.service
After=nextcloud-db.service
[Container]
Pod=nextcloud.pod
ContainerName=nextcloud-server
Image=docker.io/library/nextcloud:latest
AutoUpdate=registry
# Volumes
Volume=/mnt/data/containers/nextcloud/nextcloud:/var/www/html
Volume=/mnt/data/containers/nextcloud/custom_apps/:/var/www/html/custom_apps
Volume=/mnt/data/containers/nextcloud/config:/var/www/html/config
Volume=/mnt/data/containers/nextcloud/data:/var/www/html/data
# Environment variables
# Environment=APACHE_DISABLE_REWRITE_IP=
# Environment=TRUSTED_PROXIES=
# Database variables
# Environment=MYSQL_USER=
# Environment=MYSQL_DATABASE=
# Environment=MYSQL_HOST=
# Environment=MYSQL_PASSWORD=
# Default admin user and password
# Environment=NEXTCLOUD_ADMIN_USER=
# Environment=NEXTCLOUD_ADMIN_PASSWORD=
# Redis variables
# Environment=REDIS_HOST=
# Environment=REDIS_PORT=
EnvironmentFile=/mnt/data/containers/nextcloud/.env.nextcloud-server
Label=homepage.group=Documents/Backup
Label=homepage.name=Nextcloud
Label=homepage.icon=nextcloud.png
Label=homepage.href=https://cloud.inkletblot.com
Label=homepage.description="Files"
[Service]
Restart=always
RestartSec=5
StartLimitBurst=5
[Install]
WantedBy=default.target

[Unit]
Description=Nextcloud network
After=network-online.target
[Network]
NetworkName=nextcloud-network
Subnet=10.4.0.0/24
Gateway=10.4.0.1
DNS=
[Install]
WantedBy=default.target

nextcloud/nextcloud.pod
[Pod]
Network=nextcloud.network
PodName=nextcloud
PublishPort=4080:80
# for access to database
# PublishPort=13306:3306

onetimesecret/UNUSED
It's called One-Time Secret, but it doesn't seem to have a way to limit a secret to a single view...

[Unit]
Description=OneTimeSecret - Redis
[Container]
Pod=onetimesecret.pod
ContainerName=onetimesecret-redis
Image=docker.io/library/redis:latest
AutoUpdate=registry
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=OneTimeSecret - Server
[Container]
Pod=onetimesecret.pod
ContainerName=onetimesecret
Image=docker.io/onetimesecret/onetimesecret:latest
AutoUpdate=registry
# Environment=SSL=
# Environment=SECRET=
# Environment=HOST=
# Environment=REDIS_URL=
# Auth
# Disabled because you can't persist accounts between container restarts.
# Environment=AUTH_REQUIRED=
# Environment=AUTH_SIGNUP=
# Environment=AUTH_SIGNIN=
# Environment=AUTH_AUTOVERIFY=
# Environment=COLONEL=
# SMTP
# Environment=SMTP_HOST=
# Environment=SMTP_PORT=
# Environment=FROM_EMAIL=
# Environment=FROMNAME=
# Environment=SMTP_USERNAME=
# Environment=SMTP_PASSWORD=
# Environment=SMTP_TLS=
# Environment=SMTP_AUTH=
# Environment=VERIFIER_EMAIL=
EnvironmentFile=/mnt/data/containers/onetimesecret/.env.onetimesecret
Label=homepage.group=Life
Label=homepage.name=OneTimeSecret
Label=homepage.icon=onetimesecret.png
Label=homepage.href=https://ots.inkletblot.com
Label=homepage.description="Like Privnote"
[Service]
Restart=always
RestartSec=5
StartLimitBurst=5
[Install]
WantedBy=default.target

[Unit]
Description=OneTimeSecret network
After=network-online.target
[Network]
NetworkName=onetimesecret-network
Subnet=10.41.0.0/24
Gateway=10.41.0.1
[Install]
WantedBy=default.target

[Pod]
Network=onetimesecret.network
PodName=onetimesecret
PublishPort=3080:3000

openhab/openhab.container
[Unit]
Description=OpenHAB
Wants=network-online.target
After=network-online.target
[Container]
ContainerName=openhab
Image=docker.io/openhab/openhab:5.0.3
AutoUpdate=registry
Network=host
# Environment=CRYPTO_POLICY=
# Environment=EXTRA_JAVA_OPTS=
# Environment=OPENHAB_HTTP_PORT=
# Environment=OPENHAB_HTTPS_PORT=
EnvironmentFile=/mnt/data/containers/openhab/.env.openhab
Volume=/etc/localtime:/etc/localtime:ro
Volume=/etc/timezone:/etc/timezone:ro
Volume=/mnt/data/containers/openhab/conf:/openhab/conf
Volume=/mnt/data/containers/openhab/userdata:/openhab/userdata
Volume=/mnt/data/containers/openhab/addons:/openhab/addons
Volume=/mnt/data/containers/openhab/.java:/openhab/.java
Label=homepage.group=Life
Label=homepage.name=OpenHAB
Label=homepage.icon=openhab.png
Label=homepage.href=https://hab.inkletblot.com
Label=homepage.description="Home Automation"
[Service]
Restart=always
TimeoutStartSec=900
[Install]
WantedBy=default.target

[Unit]
Description=Photoprism - DB
Wants=network-online.target
After=network-online.target
[Container]
ContainerName=photoprism-db
Pod=photoprism.pod
Image=docker.io/mariadb:11.8
AutoUpdate=registry
# Persistent volumes
Volume=/mnt/data/containers/photoprism/db:/var/lib/mysql
# Environment variables
# Environment=MARIADB_USER=
# Environment=MARIADB_DATABASE=
# Environment=MARIADB_PASSWORD=
# Environment=MARIADB_ROOT_PASSWORD=
EnvironmentFile=/mnt/data/containers/photoprism/.env.photoprism-db
# Health monitoring
HealthCmd=healthcheck.sh --connect
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s
# Other
UserNS=keep-id:uid=999,gid=999
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=Photoprism
Wants=photoprism-db.service
After=photoprism-db.service
[Container]
ContainerName=photoprism
Pod=photoprism.pod
Image=docker.io/photoprism/photoprism:latest
AutoUpdate=registry
# Environment=PHOTOPRISM_UID=
# Environment=PHOTOPRISM_GID=
# Environment=PHOTOPRISM_UPLOAD_NSFW=
# Environment=PHOTOPRISM_ADMIN_PASSWORD=
# Environment=PHOTOPRISM_DATABASE_DRIVER=
# Environment=PHOTOPRISM_DATABASE_USER=
# Environment=PHOTOPRISM_DATABASE_NAME=
# Environment=PHOTOPRISM_DATABASE_SERVER=
# Environment=PHOTOPRISM_DATABASE_PASSWORD=
EnvironmentFile=/mnt/data/containers/photoprism/.env.photoprism-server
Volume=/mnt/data/containers/photoprism/storage:/photoprism/storage
Volume=/mnt/photo/Library:/photoprism/originals
Volume=/mnt/photo/Import:/photoprism/import
Label=homepage.group=Media
Label=homepage.name=Photoprism
Label=homepage.icon=photoprism.png
Label=homepage.href=http://photoprism.forest:2342
Label=homepage.description="Photo Library"
[Service]
Restart=always
TimeoutStartSec=900
SuccessExitStatus=0 143
[Install]
WantedBy=default.target

[Unit]
Description=Photoprism network
After=network-online.target
[Network]
NetworkName=photoprism-network
Subnet=10.6.0.0/24
Gateway=10.6.0.1
DNS=
[Install]
WantedBy=default.target

[Pod]
Network=photoprism.network
PodName=photoprism
PublishPort=2342:2342

[Unit]
Description=Planka - DB
[Container]
Pod=planka.pod
ContainerName=planka-db
Image=docker.io/postgres:16-alpine
# Environment=POSTGRES_PASSWORD=
# Environment=POSTGRES_USER=
# Environment=POSTGRES_DB=
EnvironmentFile=/mnt/data/containers/planka/.env.planka-db
Volume=/mnt/data/containers/planka/postgresql:/var/lib/postgresql/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Planka - Server
Requires=planka-db.service
After=planka-db.service
[Container]
Pod=planka.pod
ContainerName=planka-server
Image=ghcr.io/plankanban/planka:2.0.0-rc.4
# Environment=BASE_URL=
# Environment=DATABASE_URL=
# Environment=SECRET_KEY=
EnvironmentFile=/mnt/data/containers/planka/.env.planka-server
Volume=/mnt/data/containers/planka/favicons:/app/public/favicons
Volume=/mnt/data/containers/planka/user-avatars:/app/public/user-avatars
Volume=/mnt/data/containers/planka/background-images:/app/public/background-images
Volume=/mnt/data/containers/planka/attachments:/app/private/attachments
Label=homepage.group=Productivity
Label=homepage.name=Planka
Label=homepage.icon=planka.png
Label=homepage.href=https://planka.inkletblot.com
Label=homepage.description="Kanban Board"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

planka/planka.network
[Unit]
Description=Planka network
After=network-online.target
[Network]
NetworkName=planka-network
Subnet=10.14.0.0/24
Gateway=10.14.0.1
DNS=
[Install]
WantedBy=default.target

planka/planka.pod
[Pod]
Network=planka.network
PodName=planka
PublishPort=1337:1337

plex/plex.container
[Unit]
Description=Plex Media Server
Wants=network-online.target
After=network-online.target
After=local-fs.target
[Container]
Pod=plex.pod
ContainerName=plex
Image=docker.io/plexinc/pms-docker:latest
AutoUpdate=registry
# Environment=PLEX_CLAIM=
# Environment=PLEX_UID=
# Environment=PLEX_GID=
# Environment=ADVERTISE_IP=
# Environment=ALLOWED_NETWORKS=
EnvironmentFile=/mnt/data/containers/plex/.env.plex
Volume=/mnt/data/containers/plex/config:/config:Z
Volume=/mnt/data/containers/plex/trans:/transcode
Volume=/mnt/video/movies:/movies
Volume=/mnt/video/tv:/tv
Volume=/mnt/video/anime:/anime
Volume=/mnt/audio/Sorted:/music
Volume=/mnt/audio/Audio Books:/books
# Hardware transcoding disabled: the needed CPU/GPU features
# aren't accessible here.
# AddDevice=/dev/dri
Label=homepage.group=Media
Label=homepage.name=Plex
Label=homepage.icon=plex.png
Label=homepage.href=http://plex.forest:32400/web/
Label=homepage.description="Stream Media"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

plex/plex.network
[Unit]
Description=Plex network
After=network-online.target
[Network]
NetworkName=plex-network
Subnet=10.38.0.0/24
Gateway=10.38.0.1
DNS=
[Install]
WantedBy=default.target

plex/plex.pod
[Pod]
PodName=plex
Network=plex.network
PublishPort=32400:32400/tcp
# These are all optional and left disabled.
#PublishPort=1900:1900/udp
#PublishPort=3005:3005/tcp
#PublishPort=5353:5353/udp
#PublishPort=8324:8324/tcp
#PublishPort=32410:32410/udp
#PublishPort=32412:32412/udp
#PublishPort=32413:32413/udp
#PublishPort=32414:32414/udp
#PublishPort=32469:32469/tcp

plucky-pinning.sh
#!/bin/bash
# Must be run as root
if [ "$EUID" -ne 0 ]; then
echo "Please run as root (e.g., sudo $0)"
exit 1
fi
# Define file paths
PINNING_FILE="/etc/apt/preferences.d/podman-plucky.pref"
SOURCE_LIST="/etc/apt/sources.list.d/plucky.list"
# Write Plucky APT source list
echo "Adding Plucky repo to $SOURCE_LIST..."
echo "deb http://archive.ubuntu.com/ubuntu plucky main universe" > "$SOURCE_LIST"
# Write APT pinning rules
echo "Writing APT pinning rules to $PINNING_FILE..."
cat <<EOF > "$PINNING_FILE"
Package: podman buildah golang-github-containers-common crun libgpgme11t64 libgpg-error0 golang-github-containers-image catatonit conmon containers-storage
Pin: release n=plucky
Pin-Priority: 991

Package: libsubid4 netavark passt aardvark-dns containernetworking-plugins libslirp0 slirp4netns
Pin: release n=plucky
Pin-Priority: 991

Package: *
Pin: release n=plucky
Pin-Priority: 400
EOF
# Update APT cache
echo "Updating APT package list..."
apt update
echo "Plucky pinning setup complete."
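After the script runs, it's worth confirming the pin actually took effect before installing anything. A hypothetical sanity check (assumes an apt-based system; the package name matches the pin list above, and the snippet skips cleanly where apt isn't present):

```shell
#!/bin/bash
# Confirm the plucky pin is visible to apt (a sketch, not part of the script).
if command -v apt-cache >/dev/null 2>&1; then
    # A correctly pinned system shows a plucky line at priority 991
    # in the version table for podman.
    apt-cache policy podman | grep 'plucky' || echo "podman not pinned to plucky"
else
    echo "apt-cache not available; skipping check"
fi
```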

[Unit]
Description=Roundcube - DB
Wants=network-online.target
After=network-online.target
[Container]
Pod=roundcube.pod
ContainerName=roundcube-db
Image=docker.io/mariadb:11.8
AutoUpdate=registry
# Persistent volumes
Volume=/mnt/data/containers/roundcube/mariadb:/var/lib/mysql
# Environment variables
# Environment=MARIADB_USER=
# Environment=MARIADB_DATABASE=
# Environment=MARIADB_PASSWORD=
# Environment=MARIADB_ROOT_PASSWORD=
EnvironmentFile=/mnt/data/containers/roundcube/.env.roundcube-db
# Health monitoring
HealthCmd=healthcheck.sh --connect
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s
# Other
UserNS=keep-id:uid=999,gid=999
[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

[Unit]
Description=Roundcube - Server
Requires=roundcube-db.service
After=roundcube-db.service
[Container]
Pod=roundcube.pod
ContainerName=roundcube-server
Image=docker.io/roundcube/roundcubemail:latest
# Environment=ROUNDCUBEMAIL_DEFAULT_HOST=
# Environment=ROUNDCUBEMAIL_DEFAULT_PORT=
# Environment=ROUNDCUBEMAIL_SMTP_SERVER=
# Environment=ROUNDCUBEMAIL_SMTP_PORT=
# Environment=ROUNDCUBEMAIL_USERNAME_DOMAIN=
# Environment=ROUNDCUBEMAIL_DB_TYPE=
# Environment=ROUNDCUBEMAIL_DB_HOST=
# Environment=ROUNDCUBEMAIL_DB_PORT=
# Environment=ROUNDCUBEMAIL_DB_USER=
# Environment=ROUNDCUBEMAIL_DB_PASSWORD=
# Environment=ROUNDCUBEMAIL_DB_NAME=
EnvironmentFile=/mnt/data/containers/roundcube/.env.roundcube-server
Volume=/mnt/data/containers/roundcube/config:/var/roundcube/config
Label=homepage.group=Productivity
Label=homepage.name=Roundcube
Label=homepage.icon=roundcube.png
Label=homepage.href=https://mail.inkletblot.com
Label=homepage.description="Mail Client"
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=default.target

[Unit]
Description=Roundcube network
After=network-online.target
[Network]
NetworkName=roundcube-network
Subnet=10.17.0.0/24
Gateway=10.17.0.1
DNS=
[Install]
WantedBy=default.target

roundcube/roundcube.pod
[Pod]
Network=roundcube.network
PodName=roundcube
PublishPort=8567:80

[Unit]
Description=Solve Cloudflare Challenges
Wants=network-online.target
# Wants=servarr-gluetun.service
After=network-online.target
After=local-fs.target
# After=servarr-gluetun.service
[Container]
Pod=servarr.pod
ContainerName=flaresolverr
Image=ghcr.io/flaresolverr/flaresolverr:latest
AutoUpdate=registry
# Network=container:servarr-gluetun
Label=homepage.group=Arr
Label=homepage.name=Flaresolverr
Label=homepage.icon=flaresolverr.png
Label=homepage.description="Solve Cloudflare Challenges"
[Service]
Restart=on-failure
TimeoutStartSec=90
[Install]
WantedBy=default.target

servarr/gluetun.container
[Unit]
Description=VPN Network Tunnel
Wants=network-online.target
After=network-online.target
After=local-fs.target
[Container]
ContainerName=gluetun
Image=docker.io/qmcgaw/gluetun
AutoUpdate=registry
AddDevice=/dev/net/tun
AddCapability=NET_ADMIN
AddCapability=NET_RAW
# qbittorrent
PublishPort=9191:9191
# gluetun
PublishPort=8888:8888
# Environment=VPN_SERVICE_PROVIDER=
# Environment=VPN_TYPE=
# Environment=WIREGUARD_PRIVATE_KEY=
# Environment=SERVER_COUNTRIES=
# Environment=VPN_PORT_FORWARDING=
# Environment=HTTP_CONTROL_SERVER_AUTH_DEFAULT_ROLE=
EnvironmentFile=/mnt/data/containers/servarr/gluetun/.env.gluetun
Volume=/mnt/data/containers/servarr/gluetun/config:/gluetun:Z
Label=homepage.group=Arr
Label=homepage.name=Gluetun
Label=homepage.href=http://gluetun:8888
Label=homepage.icon=gluetun.png
Label=homepage.description="VPN Tunnel"
[Service]
Restart=on-failure
TimeoutStartSec=90
[Install]
WantedBy=default.target

servarr/lidarr.container
[Unit]
Description=Automate Music
Wants=network-online.target
# Wants=servarr-gluetun.service
After=network-online.target
After=local-fs.target
# After=servarr-gluetun.service
[Container]
Pod=servarr.pod
ContainerName=lidarr
Image=ghcr.io/hotio/lidarr
AutoUpdate=registry
# Network=container:servarr-gluetun
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
EnvironmentFile=/mnt/data/containers/servarr/lidarr/.env.lidarr
Volume=/mnt/data/containers/servarr/lidarr/config:/config:Z
Volume=/mnt/audio/Sorted:/data/music
Volume=/mnt/data/downloads:/downloads
Label=homepage.group=Arr
Label=homepage.name=Lidarr
Label=homepage.icon=lidarr.png
Label=homepage.href=http://192.168.2.61:8686
Label=homepage.description="Automate Music"
[Service]
Restart=on-failure
TimeoutStartSec=90
[Install]
WantedBy=default.target

[Unit]
Description=Automate Media Management
Wants=network-online.target
# Wants=servarr-gluetun.service
After=network-online.target
After=local-fs.target
# After=servarr-gluetun.service
[Container]
Pod=servarr.pod
ContainerName=overseerr
Image=docker.io/sctx/overseerr
AutoUpdate=registry
# Network=container:servarr-gluetun
# Environment=LOG_LEVEL=
# Environment=TZ=
# Environment=PORT=
EnvironmentFile=/mnt/data/containers/servarr/overseerr/.env.overseerr
Volume=/mnt/data/containers/servarr/overseerr/config:/app/config:Z
Label=homepage.group=Arr
Label=homepage.name=Overseerr
Label=homepage.icon=overseerr.png
Label=homepage.href=https://seer.inkletblot.com
Label=homepage.description="Request Media"
[Service]
Restart=on-failure
TimeoutStartSec=90
[Install]
WantedBy=default.target

[Unit]
Description=Auto profiles for sonarr/radarr
[Container]
Pod=servarr.pod
ContainerName=profilarr
Image=docker.io/santiagosayshey/profilarr:latest
AutoUpdate=registry
# Network=container:gluetun
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
EnvironmentFile=/mnt/data/containers/servarr/profilarr/.env.profilarr
Volume=/mnt/data/containers/servarr/profilarr/config:/config:Z
Label=homepage.group=Arr
Label=homepage.name=Profilarr
Label=homepage.icon=profilarr.png
Label=homepage.href=http://profilerr.forest
Label=homepage.description="Media profiles for arrs"
[Service]
Restart=on-failure
TimeoutStartSec=90
[Install]
WantedBy=default.target

[Unit]
Description=Manage indexers
Wants=network-online.target
# Wants=servarr-gluetun.service
After=network-online.target
After=local-fs.target
# After=servarr-gluetun.service
[Container]
Pod=servarr.pod
ContainerName=prowlarr
Image=ghcr.io/hotio/prowlarr:release-2.0.5.5160
AutoUpdate=registry
# Network=container:servarr-gluetun
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
EnvironmentFile=/mnt/data/containers/servarr/prowlarr/.env.prowlarr
Volume=/mnt/data/containers/servarr/prowlarr/config:/config:Z
Label=homepage.group=Arr
Label=homepage.name=Prowlarr
Label=homepage.icon=prowlarr.png
Label=homepage.href=http://prowlarr.forest
Label=homepage.description="Manage indexers"
[Service]
Restart=on-failure
TimeoutStartSec=90
[Install]
WantedBy=default.target

[Unit]
Description=Torrent Client
Wants=network-online.target
Wants=gluetun.service
After=network-online.target
After=local-fs.target
After=gluetun.service
[Container]
ContainerName=qbittorrent
Image=lscr.io/linuxserver/qbittorrent:latest
AutoUpdate=registry
Network=container:gluetun
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
# Environment=WEBUI_PORT=
# Environment=TORRENTING_PORT=
EnvironmentFile=/mnt/data/containers/servarr/qbittorrent/.env.qbittorrent
Volume=/mnt/data/containers/servarr/qbittorrent/config:/config
Volume=/mnt/data/downloads:/downloads
Label=homepage.group=Arr
Label=homepage.name=qBittorrent
Label=homepage.icon=qbittorrent.png
Label=homepage.href=http://qbittorrent:9191
Label=homepage.description="Automate Downloads"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

servarr/radarr.container
[Unit]
Description=Automate Movies
Wants=network-online.target
# Wants=servarr-gluetun.service
After=network-online.target
After=local-fs.target
# After=servarr-gluetun.service
[Container]
Pod=servarr.pod
ContainerName=radarr
Image=ghcr.io/hotio/radarr
AutoUpdate=registry
# Network=container:servarr-gluetun
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
EnvironmentFile=/mnt/data/containers/servarr/radarr/.env.radarr
Volume=/mnt/data/containers/servarr/radarr/config:/config:Z
Volume=/mnt/video/movies:/data/movies
Volume=/mnt/data/downloads:/downloads
Label=homepage.group=Arr
Label=homepage.name=Radarr
Label=homepage.icon=radarr.png
Label=homepage.href=http://radarr.forest
Label=homepage.description="Automate Movies"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

servarr/sabnzbd.container
[Unit]
Description=NZB Client
Wants=network-online.target
# Wants=servarr-gluetun.service
After=network-online.target
After=local-fs.target
# After=servarr-gluetun.service
[Container]
Pod=servarr.pod
ContainerName=sabnzbd
Image=lscr.io/linuxserver/sabnzbd:latest
AutoUpdate=registry
# Environment=PUID=
# Environment=PGID=
# Environment=TZ=
EnvironmentFile=/mnt/data/containers/servarr/sabnzbd/.env.sabnzbd
Volume=/mnt/data/containers/servarr/sabnzbd/config:/config
Volume=/mnt/data/downloads:/downloads
Volume=/mnt/data/incomplete-downloads:/incomplete-downloads
Label=homepage.group=Arr
Label=homepage.name=SABnzbd
Label=homepage.icon=sabnzbd.png
Label=homepage.href=http://sabnzbd.forest
Label=homepage.description="Automate NZB Downloads"
[Service]
Restart=always
TimeoutStartSec=90
[Install]
WantedBy=default.target

Some files were not shown because too many files have changed in this diff.