I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works: after I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • originalucifer@moist.catsweat.com · 11 months ago

    dude, im kinda you. i just jumped into docker over the summer… feel stupid not doing it sooner. there is just so much pre-created content, tutorials, you name it. its very mature.

    i spent a weekend containerizing all my home services… totally worth it and easy as pi[hole] in a container!

  • buedi@feddit.de · 11 months ago

    I would absolutely look into it. Many years ago when Docker emerged, I did not understand it and called it “Hipster shit”. But a lot of people around me who used Docker at the time did not understand it either. Some lost data, some had services that stopped working and they had no idea how to fix it.

    Years passed and containers stayed, so I started to take a closer look and tried to understand what you can do with it and what you cannot. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don’t just copy a new binary or library into a container to try to fix something.

    Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me: every application I run dockerized is predictable and isolated from the others (on the binary side; the network side is another story). The problem I had earlier, when everything ran directly on the Linux box, was that one application might need PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depend on a specific library, and after updating it one app works but the other doesn’t anymore because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one container, it does not affect the others.
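
    For illustration, roughly what that isolation looks like in a compose file (image tags, paths and ports here are just examples):

    ```yaml
    # Two apps with conflicting PHP requirements, each isolated in its own container
    services:
      legacy-app:
        image: php:7.4-apache      # the older app stays on PHP 7.x
        volumes:
          - ./legacy:/var/www/html
        ports:
          - "8081:80"
      modern-app:
        image: php:8.2-apache      # the newer app gets PHP 8.x
        volumes:
          - ./modern:/var/www/html
        ports:
          - "8082:80"
    ```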

    Another big plus is backups. I back up the docker-compose file plus data for each container with Kopia. Since barely anything is installed directly on the Linux host, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again after a hardware failure.
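
    A rough sketch of that per-service flow (the directory layout and service name are hypothetical):

    ```sh
    # One directory per service: compose file and bind-mounted data live together
    cd /srv/docker/nextcloud
    docker compose stop        # quiesce the app so files are consistent
    kopia snapshot create .    # snapshot the compose file plus data directory
    docker compose start
    ```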

    I really started to love Docker, especially in my Homelab.

    Oh, and you would think everything being containerized means big resource usage? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.
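
    If you want to check this on your own machine, a point-in-time summary per container:

    ```sh
    # One-shot memory/CPU usage for every running container
    docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"
    ```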

    • MaximilianKohler@lemmy.world · 7 months ago

      It seems like docker would be heavy on resources since it installs & runs everything (mysql, nginx, etc.) numerous times (once for each container), instead of once globally. Is that wrong?

      • buedi@feddit.de · 6 months ago

        You would think so, yes. But to my surprise, my well over 60 containers so far consume less than 7 GB of RAM, according to htop. Also, containers can of course network with each other and share services: for external access, for example, I run only one instance of Traefik, and one COTURN for both Nextcloud and Synapse.
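
        Sketch of that shared-service pattern with one reverse proxy in front of everything (version, hostname and the stand-in app are illustrative):

        ```yaml
        # A single Traefik instance routing to any number of app containers
        services:
          traefik:
            image: traefik:v2.10
            command:
              - --providers.docker=true
              - --entrypoints.web.address=:80
            ports:
              - "80:80"
            volumes:
              - /var/run/docker.sock:/var/run/docker.sock:ro
          whoami:
            image: traefik/whoami   # stand-in app; real services attach the same way
            labels:
              - traefik.http.routers.whoami.rule=Host(`whoami.example.test`)
        ```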

  • P1r4nha@feddit.de · 11 months ago

    Definitely not a fad. It’s used all over the industry. It gives you a lot more control over the environment where your hosted apps run. There may be some overhead, but it’s worth it.

    • Dyskolos@lemmy.zip · 11 months ago

      Not OP, but seriously asking: why should I? I currently use VMs for every app I need. Much more work, I assume, but besides saving time (and some overhead, and maybe performance) what would I gain from Docker or other containers?

      • DefederateLemmyMl@feddit.nl · 11 months ago

        what would I gain from docker or other containers?

        Reproducibility.

        Once you’ve built the Dockerfile or compose file for your container, it’s trivial to spin it up on another machine later. It’s no longer bound to the specific VM and OS configuration you’ve built your service on top of and you can easily migrate containers or move them around.
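
        Something like this, assuming bind mounts under a hypothetical /srv/docker layout (named volumes would have to be exported separately):

        ```sh
        # Migrate a service: copy its directory to the new machine, then start it
        scp -r /srv/docker/myapp newhost:/srv/docker/
        ssh newhost "cd /srv/docker/myapp && docker compose up -d"
        ```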

          • twei@feddit.de · 11 months ago

            If you update your OS, it could happen that a changed dependency breaks your app. This wouldn’t happen with docker, as every dependency is shipped with the application in the container.
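
            And you decide when those dependencies change, for example by pinning the image tag in the compose file (name and tag made up):

            ```yaml
            services:
              app:
                image: ghcr.io/example/app:1.4.2   # pinned: a host OS upgrade can't touch these deps
            ```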

  • irotsoma@lemmy.world · 11 months ago

    Docker is nice for things that have complex installations and I want a very specific implementation that I don’t plan to tweak very much. Otherwise, it’s more hassle than it’s worth. There are lots of networking issues like limited/experimental support for IPv6, and too much is hidden and preconfigured, making it difficult to make adjustments that would otherwise just be a config file change.

    So it is good for products like a mail server where you want to use the exact software stack they ship, let’s say postfix + dovecot + roundcube + nginx + acme + MySQL + SpamAssassin + amavisd, etc. But if you want to use an existing reverse proxy and certificate setup, or a different spam filter or database, it becomes a huge hassle.
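
    On the IPv6 point: daemon-wide support can be switched on in /etc/docker/daemon.json followed by a daemon restart (the address prefix below is illustrative), but it has long been rougher than the IPv4 path:

    ```json
    {
      "ipv6": true,
      "fixed-cidr-v6": "fd00:dead:beef::/64"
    }
    ```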

  • Outcide@lemmy.world · 11 months ago

    Another old school sysadmin that “retired” in the early 2010s.

    Yes, use docker-compose. It’s utterly worth it.

    I was intensely irritated at first that all of my old troubleshooting tools were harder to use, and I just generally didn’t trust it for ages, but after 5 years I wouldn’t be without it.
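
    For what it’s worth, these are the container-side equivalents that eventually replaced my old habits (service name is hypothetical):

    ```sh
    docker logs -f myapp       # tail a service's output (vs. journalctl / tail -f)
    docker exec -it myapp sh   # get a shell inside the container (vs. ssh to the box)
    docker inspect myapp       # full config: mounts, env vars, networks
    ```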

    • DasGurke@feddit.de · 11 months ago

      I’m a little younger but in the same boat. There is some friction having filesystems, ports and processes “hidden” from the host programs you typically rely on. But I need them sooooo much less now that all my services are in Docker with exactly matching dependencies, instead of rolling my eyes about running two PostgreSQL servers in different versions or juggling Python / Node / Ruby versions with asdf.
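
      Roughly what those two side-by-side PostgreSQL versions look like in compose (versions, paths and the password are placeholders):

      ```yaml
      services:
        db12:
          image: postgres:12
          environment:
            POSTGRES_PASSWORD: example   # placeholder only
          volumes:
            - ./pg12-data:/var/lib/postgresql/data
        db16:
          image: postgres:16
          environment:
            POSTGRES_PASSWORD: example
          volumes:
            - ./pg16-data:/var/lib/postgresql/data
      ```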

  • lefaucet@slrpnk.net · 11 months ago

    i use it for gitea, nextcloud, redis, postgres, and a few REST servers and love it! super easy.

    it can suck for things like homelab Stable Diffusion and other things that require a GPU or other hardware.

    • Aiyub@feddit.de · 11 months ago

      As someone who does AI for a living: GPU+docker is easy and reliable. Especially if you compare it to VMs.
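
      With the NVIDIA Container Toolkit installed on the host, passing the GPU through is a single flag (the CUDA image tag is just an example):

      ```sh
      # Should print the host GPU's status from inside the container
      docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
      ```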