The last post I could find on the subject was a year ago, so I thought I would ask again. I have Debian 12 up on a mini PC and I have my NAS mounted. My intention is to use Jellyfin and some of the *arr stuff. I know only a little about systemd (I just google what I need to know). I have some container knowledge, but mostly in k8s, and the Docker parts aren’t really my problem; I have a vague understanding of Docker. What are the latest pros and cons of containers vs service installation?
My very limited takes:
- I run everything inside containers + traefik (*arr + jelly)
- I have no issues with dependencies. Ever
- Linuxserver.io is the shit. Real bros!
Having recently switched Jellyfin from bare-metal Ubuntu to Docker: use Docker. With Docker I know exactly where Jellyfin is storing my data, because I tell it where. This means I can move it and spin it up on any machine the same way every time. Moving my files over from Ubuntu was painful because the package stores them in odd locations spread across the file system, plus my media mount locations are different. None of that would have been a problem with Docker.
It’s not the ‘one click’ solution some people claim, but the up-front trouble is worth it for easier management, in my opinion.
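To make that concrete, here’s a minimal sketch of the idea; the host paths are examples I made up, not anything Jellyfin defaults to:

```sh
# Pin Jellyfin's state to host paths you choose (paths are illustrative)
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /mnt/nas/media:/media:ro \
  jellyfin/jellyfin:latest
```

Back up /srv/jellyfin/config and you can recreate the container on any machine with the same command.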
For the arr stack, I run Jellyfin server, radarr, prowlarr, jellyseerr and sonarr in containers using docker compose. For updates, I just crontab a script once a week that runs `docker compose down && docker compose pull && docker compose up -d` in each of the compose directories.
Bit of faff setting everything up, but once it’s done, it’s very solid.
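Roughly, the cron script is just a loop like this (the stack directories are examples, adjust to your own layout):

```sh
#!/bin/sh
# Weekly update: pull fresh images and recreate each stack
for dir in /opt/stacks/jellyfin /opt/stacks/arr; do
    cd "$dir" || continue
    docker compose down
    docker compose pull
    docker compose up -d
done
```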
There are different ways to run containers. I run them via podman-systemd services (Quadlet); there’s a minimal example after this list. For me, the main benefits of running a container over an executable on the host system are the following:
- not everything I want to self-host is packaged for my distro, but they all have container images available
- operating system updates are independent from application updates, application updates are independent from each other. One broken dependency won’t kill my entire stack
- all containers are running without root privileges and with restricted access to the host system. One vulnerable application won’t give access to my entire system
- I can have all my config in one directory (`~/.config/containers/systemd/`), instead of having it spread across multiple `/etc/*` directories
- volume bind mounts make it easy to declaratively mount any folder anywhere, so I can keep my directory structure how I like it
- cockpit offers a great UI to visualize my hosted applications
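As promised, a minimal sketch of a Quadlet unit; the image, port, and paths are examples, adjust them to your own layout:

```ini
# ~/.config/containers/systemd/jellyfin.container
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
# %h expands to your home directory
Volume=%h/jellyfin/config:/config
Volume=/mnt/nas/media:/media:ro
# let podman-auto-update pull new images
AutoUpdate=registry

[Service]
Restart=on-failure

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it behaves like any other service: `systemctl --user start jellyfin`.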
Cannot recommend the container approach enough. The learning curve isn’t too bad; it can be daunting initially, but the best way is to jump straight in and try it.
A few things I recommend:
- Portainer, a very nice container management webapp
- Use compose/stacks from day 1, or at least try it before you get carried away with too many containers. Take a copy of each compose file, save it somewhere, and build up your catalogue of containers/configs.
- Volumes: make sure your data is persistent (named volumes or bind mounts), as in the sketch after this list.
- Backup your docker config folders, especially if using development branches.
- Spend a day/weekend playing with the setup, expecting to throw it away and start again. Sounds bad, but it’s not: if you use compose/stacks you can spin everything up again in seconds.
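For instance, a minimal compose sketch with persistent storage; the service, image, and paths are examples:

```yaml
# docker-compose.yml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config          # bind mount: survives container recreation
      - /mnt/nas/media:/media:ro  # media on the NAS, read-only
```

Back up the stack directory (the compose file plus ./config) and `docker compose up -d` rebuilds it anywhere.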
Please use Dockge instead of Portainer.
Dockge makes it much easier to actually see what’s happening in the deployment process and debug any issues, instead of presenting the error on a small popup that vanishes after 0.3 seconds, and it gives you much better feedback when you misconfigure something in your compose file. It also makes it much easier to interact with your setup from the command line once you feel comfortable doing that. And the built-in docker run to docker compose conversion is really handy.
Newbies will find Dockge much friendlier, and experienced users will find that it respects their processes and gets out of the way when you want it out of the way.
I am by no means an expert on this, but I find containerization/docker advantageous for two reasons:
- It’s (relatively) easy to configure and spin up a container to try something out and/or put it into production. I prefer docker compose, but you’ve got straight CLI options, GUI options like Portainer, or OS deployments like YunoHost or Proxmox.
- The isolation and dependency management. Everything you need is in the container. No dependency conflicts with other things running on the system. And removing a container leaves the system nice and clean: just prune your images and volumes and it’s like it was never there (see the commands below).

Edit: grammar
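In practice that cleanup is a couple of commands; note that prune is destructive and removes anything unused, so check before running it:

```sh
docker compose down     # stop and remove the stack's containers
docker image prune -a   # delete images not used by any container
docker volume prune     # delete unused volumes (their data is gone for good)
```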
Personally, I use containers for the ease and simplicity of updates across all my various server apps. You could use k8s to run your containers, but since it’s all on one PC, I use docker compose for everything.
The pro/con has more to do with how you want to run your system and manage changes.
Containerization is primarily about repeatability and declarative configuration management. If you want to repeat the same configuration with every deployment and/or upgrade, containers are the way to go.
If you want to tweak and manage the software the way you want it and aren’t concerned with configuration drift, then install it as a service.