Being a noob and all, I was wondering: what’s the real benefit of having a monolithic, let’s say Proxmox, instance with router, DNS, and VPN, but also Home Assistant and NAS functionality, all on one server? I always thought dedicated devices are simpler to maintain or replace, and some services are also more critical than others, I guess?

  • shnizmuffin@lemmy.inbutts.lol · +22 · 4 months ago (edited)

    Use containers. Start with one device. Check your utilization after you’re sure you’ve hit the min and max for each of your services, then figure out whether your single device can handle all your services gunning at once. If not, take your biggest service and migrate it to its own device.
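
    For instance, a minimal docker-compose sketch of that idea (the images and limits are illustrative placeholders, not recommendations), capping each service so the utilization numbers you collect map to known ceilings:

    ```yaml
    # docker-compose.yml: hypothetical services with explicit caps,
    # so you can tell whether one box survives everything gunning at once
    services:
      jellyfin:
        image: jellyfin/jellyfin
        mem_limit: 4g        # set from the max usage you observed, plus headroom
        cpus: 2.0
        restart: unless-stopped
      homeassistant:
        image: ghcr.io/home-assistant/home-assistant:stable
        mem_limit: 1g
        cpus: 1.0
        restart: unless-stopped
    ```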

    Eventually, you might find yourself googling “Kubernetes vs Docker Swarm.” When you do that, take a deep breath and decide if upgrading one device is easier than trying to horizontally scale many.

    Edit: Words bad. Verbs hard.

    • Justin@lemmy.jlh.name · +7 · 4 months ago (edited)

      Yeah, definitely go with a single machine for containers if you haven’t seen a need for disaggregation. Even a cheap AliExpress N100 box is super capable.

      Regarding the jump to Kubernetes, I will point out that Kubernetes is a tool for container orchestration and automation, not necessarily a container cluster. I have found many benefits from using Kubernetes on a single node, so I wouldn’t consider container clustering a prerequisite for Kubernetes.

  • Justin@lemmy.jlh.name · +9/-1 · 4 months ago (edited)

    Pretty much the tradeoff you said. It’s harder to maintain an all-in-one box, since things conflict with each other. That said, it’s also harder to maintain 10 devices than 2. Usually, you want to segregate your services by maintenance schedule: something you reboot once a year, like your router, probably shouldn’t be on the same device as something you might reboot every day, like Home Assistant, if you value your sanity.

    Also, virtualization is pretty much a dead end now and will just make your life harder.

    In terms of the easiest software available for self-hosting, I would use a dedicated router and a dedicated NAS, as those are fairly standalone and can be purchased as appliances. Then I would use a single machine with Debian or NixOS as a Kubernetes or Docker host. (Kubernetes is super easy with k3s and easier to maintain than Docker, but there’s a higher barrier to entry, as you’d have to write your services as Kubernetes manifests instead of docker-compose files.)
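
    To give a feel for that barrier to entry, here’s roughly what a single service looks like as Kubernetes manifests (a sketch only; the app name and image are placeholders):

    ```yaml
    # roughly the Kubernetes equivalent of one docker-compose service entry
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami                 # placeholder app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: whoami
      template:
        metadata:
          labels:
            app: whoami
        spec:
          containers:
            - name: whoami
              image: traefik/whoami
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service                  # gives the pod a stable in-cluster address
    metadata:
      name: whoami
    spec:
      selector:
        app: whoami
      ports:
        - port: 80
    ```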

    I wouldn’t recommend something that tries to do everything, like Unraid, TrueNAS, or Proxmox, as they honestly obfuscate things and make things harder to maintain. Though they can be nice for DIY NASes.

    If you’re interested in high availability and clustering for a DIY NAS, you could even look into ceph/rook, which is what I’m using for my NAS, but it’s like 20x the effort of just having a standard NFS appliance.

    • thirdBreakfast@lemmy.world · +3 · 4 months ago

      Yep, I think there are sound arguments for separating out your storage (NAS) and network (router/DNS/PiHole) infrastructure. After that, whatever suits your purpose. I virtualise all my serious services on one machine under Proxmox (mostly for ease of snapshots), then have another machine for things I’m fiddling with, usually again under Proxmox so they’re easy to move to production when I’m happy with them.

      • Justin@lemmy.jlh.name · +2 · 4 months ago (edited)

        Makes sense. I would probably recommend infrastructure-as-code workflows like ArgoCD or docker-compose over snapshots, as git commits are simpler than VM snapshots. But both ways work.

    • atzanteol@sh.itjust.works · +4/-2 · 4 months ago

      Kubernetes is super easy with k3s and easier to maintain than Docker

      I don’t think I’ve ever heard anyone say this… Kubernetes is a massive pain in the ass to learn, maintain, and troubleshoot. If you find it easy, that’s great, but it’s not for everyone.

      • Justin@lemmy.jlh.name · +1 · 4 months ago (edited)

        I mean that with k3s you can get a Kubernetes cluster running with zero effort on a single machine. It’s easier to maintain because it handles restarting containers, updating containers, managing ports, provisioning storage, creating databases, etc. for you. I’ve found the logs and events system super useful for troubleshooting compared to dockerd, but maybe it can be tricky if it does something you don’t expect.

        Obviously you need to learn how to use that automation to take advantage of it, and stuff like networking and persistent volumes can be confusing if you don’t have a good guide. The fact that there are different drivers for networking, storage, database management, etc. can also take a bit of time. That said, networking and storage can be confusing on Docker too if you don’t have a good guide, and docker-compose has a learning curve of its own, so I honestly don’t think Kubernetes is that much more effort. The main thing is that most guides are written for Docker, but the Kubernetes documentation is really good too.
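
        On the storage side, the usual abstraction is a PersistentVolumeClaim. A minimal sketch (the name and size are placeholders; k3s ships a local-path provisioner out of the box):

        ```yaml
        # pvc.yaml: ask for storage without caring which disk backs it
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: media                    # placeholder name
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: local-path   # the k3s default provisioner
          resources:
            requests:
              storage: 100Gi
        ```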

        If you just want to run containers for Jellyfin and Home Assistant, docker-compose will be good enough. But if you want databases, a reverse proxy, certificates, DNS, self-healing, etc. for running bigger stuff like Nextcloud and Lemmy, then I would spend the extra 50% effort and do it on Kubernetes; it’ll save you time and headaches in the long run.

        Asking an LLM like Llama or ChatGPT might be a good way to learn the basics of Kubernetes, but things move fast once you start getting into the newest operators like CNPG and Gateway API.

        • lemmyvore@feddit.nl · +1/-1 · 4 months ago

          I do all that with docker… I fail to see what Kubernetes adds to that on a single machine.

          • Justin@lemmy.jlh.name · +2 · 4 months ago (edited)

            Kubernetes does it a lot better. No more messing with Caddy config files or Docker sockets; you get the real deal, production stuff.

            Containers automatically take themselves off the built-in loadbalancer and/or restart when they fail a health check.
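
            That behavior comes from probes on the container spec; a sketch of the idea (the image and intervals are placeholders):

            ```yaml
            apiVersion: v1
            kind: Pod
            metadata:
              name: app
            spec:
              containers:
                - name: app
                  image: traefik/whoami   # placeholder app that answers on :80
                  readinessProbe:         # failing this takes the pod off the loadbalancer
                    httpGet:
                      path: /
                      port: 80
                    periodSeconds: 5
                  livenessProbe:          # failing this restarts the container
                    httpGet:
                      path: /
                      port: 80
                    periodSeconds: 10
            ```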

            A new high-availability Postgres cluster with automatic backups is just a Cluster, a firewall rule is just a NetworkPolicy, a new subdomain is just an HTTPRoute, a new proxy container is just a Gateway, a new auto-renewed Let’s Encrypt certificate is just a Certificate, and DNS is set up automatically from the HTTPRoute’s domain name without me touching anything. Everything is high-availability and self-healing; I’ve never had anything go down or crash.
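
            For a flavor of what those objects look like (a sketch; the names, host, and sizes are placeholders, and it assumes the CNPG, Gateway API, and cert-manager operators are installed):

            ```yaml
            # CNPG: a 2-instance HA Postgres cluster in a few lines
            apiVersion: postgresql.cnpg.io/v1
            kind: Cluster
            metadata:
              name: app-db
            spec:
              instances: 2
              storage:
                size: 10Gi
            ---
            # Gateway API: expose a service on a new subdomain
            apiVersion: gateway.networking.k8s.io/v1
            kind: HTTPRoute
            metadata:
              name: app
            spec:
              parentRefs:
                - name: main-gateway     # placeholder Gateway
              hostnames:
                - app.example.com
              rules:
                - backendRefs:
                    - name: app
                      port: 80
            ---
            # cert-manager: an auto-renewed Let's Encrypt certificate
            apiVersion: cert-manager.io/v1
            kind: Certificate
            metadata:
              name: app-tls
            spec:
              secretName: app-tls
              issuerRef:
                name: letsencrypt        # placeholder ClusterIssuer
                kind: ClusterIssuer
              dnsNames:
                - app.example.com
            ```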

            The other thing is ArgoCD, which automatically syncs your cluster with git. If I edit any of my config files in git, the change is instantly applied to the cluster itself.
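
            The glue is an Application object pointing at your repo; a minimal sketch (the repo URL and path are placeholders):

            ```yaml
            apiVersion: argoproj.io/v1alpha1
            kind: Application
            metadata:
              name: homelab
              namespace: argocd
            spec:
              project: default
              source:
                repoURL: https://codeberg.org/example/homelab.git   # placeholder repo
                targetRevision: main
                path: manifests
              destination:
                server: https://kubernetes.default.svc
                namespace: default
              syncPolicy:
                automated:      # a git commit is all it takes; the cluster converges on its own
                  prune: true
                  selfHeal: true
            ```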

            Here is my configuration for my 200+ containers, even my Lemmy instance is running here: https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications

            Docker and the Docker ecosystem copy a lot of features from Kubernetes, because they’re essentially the same thing, but Kubernetes does it in a production-ready, maintainable way. Kubernetes is an automation tool that lets 1 engineer do the work of 10.

            • lemmyvore@feddit.nl · +1/-3 · 4 months ago

              Right, right, you just have to reinvent a dozen wheels, use only software that Kubernetes knows how to work with, and learn a bunch of new names for everything.

              • keyez@lemmy.world · +1 · 4 months ago

                Once you learn it, it isn’t super crazy, but it obviously takes a lot of effort. I think most people who use k3s and k8s at home are people who use them for work, so they already know how and where things should work and be. That said, I work with Kubernetes every day, managing a handful of giant production clusters, and at home I use Unraid to keep it simple.

    • shnizmuffin@lemmy.inbutts.lol · +3/-1 · 4 months ago

      I’m running an Unraid server. You can pop in and manage everything with the CLI like you would on a traditional server OS, and it’ll show your containers, images, orphans, etc. in the GUI and throw alerts out of the box for utilization thresholds and power events. It’s quite nice at a glance and gets the fuck out of the way the moment it’s time to be a sysadmin.

      Unraid brings some good things to the table, I wouldn’t discount it completely.

      • Justin@lemmy.jlh.name · +6 · 4 months ago (edited)

        I’m well aware.

        I was #8 on this list: https://web.archive.org/web/20240221094039/unraid.net/about

        The way that Unraid manages Docker containers is really dumb, and it gets in your way SO MUCH. Orphans are not a normal Docker concept; they’re something invented by Unraid. It actively makes managing containers harder, as there is no documented way to restore orphans, if I recall correctly. Creating new containers is confusing and uses non-standard terminology, when docker-compose files have been the standard for half a decade now. Unraid is a really bad container orchestrator with bad abstractions and no ability to do Infrastructure as Code. The only good thing is the GUI for monitoring containers.

        The monitoring GUI is nice, and I guess it makes sense if you’re doing everything with the CLI and just using the GUI for monitoring. But the CLI is not a supported workflow with Unraid, and what are you paying $3/month for if you’re just going to use the CLI? I personally wouldn’t recommend the overhead, setup, and upgrade headaches over just using the CLI with Debian. There are free dashboards for Kubernetes that are just as nice.

        For what it’s worth, this is my homelab: https://codeberg.org/jlh/h5b

        I run nearly 300 containers in a 4-node cluster, with a separate router and IoT server. Every single piece is implemented in code, because that’s easier to maintain and document. I used Proxmox for VMs/LXC for a while, and I used FreeNAS for ZFS+NFS for a while, but now I use purely NixOS and Kubernetes. In the past 8 years I have never seen Unraid as something valuable to add to my homelab.

        • shnizmuffin@lemmy.inbutts.lol · +3 · 4 months ago (edited)

          Orphans are just dangling objects, are they not?

          I’m only using the Unraid Docker GUI to send me utilization alerts and notify me when my images are egregiously out of date. I saw someone trying to author a compose file using the GUI once and I closed the window before the headache started.

          I’m not paying $3/mo. Where’d you get that idea? I think I paid $20 for a license like 6 years ago.

          I picked Unraid because I had a bunch of disparate HDDs sitting around and their filesystem intrigued me. (0 data loss after 3 drive failures so far.)

          • Justin@lemmy.jlh.name · +2/-1 · 4 months ago (edited)

            Fair enough. I think it’s bad to invent new words for “stopped container”, though. And there should be a way to restart them.

            Yeah, the container creation GUI is a mess. The $3/month thing is new; they started it for new customers this year: https://unraid.net/pricing

            Not a big deal for grandfathered users, but I think it’s important to consider as a new customer, as you won’t even get security updates without paying the subscription fee. Even for vulnerabilities like CVE-2024-21626 (Leaky Vessels).

            The raid is nice, but adding/removing drives can be kinda clunky, and I’ve managed to accidentally destroy an array while playing with it. I think you can get identical features using LVM, but obviously it’s nice how Unraid does it all for you in a GUI.

            • keyez@lemmy.world · +1/-1 · 4 months ago

              The cheapest option is the monthly one with no security updates; there are still the regular Pro and higher plans, which are one-and-done, no grandfathering needed.

          • Justin@lemmy.jlh.name · +1 · 4 months ago (edited)

            Yeah, I’ve been wanting to start using it. I have a colleague who uses it for platform engineering, and it’s supposed to be amazing. I was going to use it for creating offsite backup buckets on OVH, but I ended up setting up a Hetzner storage box manually instead because that was cheaper. Since everything I have is self-hosted, really the only external infrastructure I have is Cloudflare, and all the records there are handled by external-dns, so I haven’t really seen a need to GitOps it.

            One thing I do want to look at is the custom CRD feature they were talking about at KubeCon this year; it sounded like they might have finally fixed the platform-engineering abstraction problem that people have been trying to use Helm for (badly). Many companies have actually been resorting to operators for this problem, which is super overkill. I did try cdk8s for abstraction last year, and I was even planning to create and support a production-ready Lemmy deployment option using cdk8s, but it honestly was quite clunky on the developer side and committed the sin of reimplementing an API without even properly documenting the new one.

            I’m probably just going to create a Lemmy Helm chart at some point using the Cloud-Native Postgres operator and Gateway API when I have time. But Helm has glaring issues, both for developers and for users.

  • SidewaysHighways@lemmy.world · +6 · 4 months ago

    I’m no expert; I’ve only been dipping my toes in the self-hosted water for a few years.

    But my thought process would be: all the main stuff on your main server, and redundant instances on a little backup box.

  • dan@upvote.au · +5 · 4 months ago (edited)

    This is what I do:

    • Stuff that’s critical runs on VPSes running Debian stable: things like my websites, email, authoritative DNS, etc. The VPS providers I use have nicer hardware than I do (modern AMD EPYC servers, enterprise NVMe drives in RAID10 with warm spares, 40Gbps networking, etc.).
    • Other stuff is on a home server running Unraid. It has a Core i5-13500 with a W680 motherboard, 2 x 2TB NVMe drives in ZFS mirror, 2 x 20TB Seagate Exos drives in ZFS mirror for data storage, and 1 x 14TB WD Purple Pro for security camera recordings.
    • I have a Raspberry Pi with a few things on it, like a second copy of my recursive DNS server, AdGuard Home (so the internet doesn’t break if I need to shut down “the main server”).

    I was thinking of running several servers at home, but right now I’m just running one main one. I don’t have much space and it’s running fine for me for now. Power is expensive here. I’ve got solar power, but I get 1:1 credits for excess solar power, so I’d rather save it for other things.

  • MNByChoice@midwest.social · +5 · 4 months ago

    “Easier” and “simpler” are in the eye of the beholder.

    A different way to approach it is to limit the failure domains: if this breaks, how sad are you?

    I would separate storage from the rest. Networking stuff together may be fine. Home Assistant depends on how dependent your household is on it.

    • SayCyberOnceMore@feddit.uk · +2 · 4 months ago

      This is the way.

      There’s nothing worse than finding your DNS/DHCP has gone down because it’s a VM or container inside a server that can’t start, because the server doesn’t have an IP address and you can’t resolve names to get the thing started.

      Break things down into chunks that make sense - to you.

      I have dedicated (low power) hardware for the interweb firewall / DHCP / core network stuff.

      I have a NAS for storage with all the backups / reinstall images on it (so I can rebuild the firewall if there’s no internet, for example).

      Then I have everything else in a single server.

      Sources: a house fire, a water leak, and many hardware failures and borked upgrades over many decades.

  • Ashley@lemmy.ca · +4 · 4 months ago

    Services that can utilize the full power of a single machine are quite rare. I have about 15 Docker containers in total taking up about 800 MB of RAM on one of my servers. In reality, having multiple machines can be more complex and harder to maintain, not to mention the power efficiency and cost.

  • Haui@discuss.tchncs.de · +4 · 4 months ago

    Having as much on one machine as possible has efficiency and maintenance benefits, since you have fewer machines to configure. The drawback is that multiple services’ peak demands can add up and run the machine out of memory, which you can solve either by leaving extra headroom or by making the services redundant, imo.

    Someone with more experience than me might have other ideas to add.

  • WhyJiffie@sh.itjust.works · +2 · 4 months ago

    It could be a good idea to move more critical things to a different machine. It’s often said that you shouldn’t run your router and/or firewall on your main server, but I think there are also security reasons for that.

    Or move those to a low-power machine with cheaper hardware: things that are either resource-friendly, or very heavy but fine to finish their task over a longer time.

    Also, think about how things could go wrong. Have a second DNS and DHCP server (it’s difficult to run a secondary DHCP server alongside the primary, so maybe you don’t need that), and some way to reach the internet if the router or the firewall gets borked. That “way” does not need to be accessible at all times, but you should be able to switch it on when needed.
    Don’t forget to test that these actually work after you have set them up.
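
    For example, a standby resolver can be as small as this (a sketch; the Pi-hole image is just one common choice):

    ```yaml
    # docker-compose.yml: a second DNS server on a spare machine or Pi
    services:
      dns2:
        image: pihole/pihole
        ports:
          - "53:53/udp"
          - "53:53/tcp"
        environment:
          TZ: "Etc/UTC"
        restart: unless-stopped
    ```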

    Whatever you decide on, don’t forget that you don’t have to do everything at once. Don’t let it overload you. Learning new tech takes time.

  • Bakkoda@sh.itjust.works · +1 · 4 months ago

    I split my setup into storage vs. processing. Can one physical box handle both? If the answer is yes, then go for it. If all you’re running is low-IO stuff and it’s sipping CPU, then one general-purpose whitebox is a great start.

  • Decronym@lemmy.decronym.xyz [bot] · +1 · 4 months ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
    DNS: Domain Name Service/System
    Git: Popular version control system, primarily for code
    IP: Internet Protocol
    LVM: (Linux) Logical Volume Manager for filesystem mapping
    LXC: Linux Containers
    NAS: Network-Attached Storage
    NFS: Network File System, a Unix-based file-sharing protocol known for performance and efficiency
    NVMe: Non-Volatile Memory Express interface for mass storage
    PiHole: Network-wide ad-blocker (DNS sinkhole)
    VPS: Virtual Private Server (opposed to shared hosting)
    ZFS: Solaris/Linux filesystem focusing on data integrity
    k8s: Kubernetes container management package

    [Thread #867 for this sub, first seen 12th Jul 2024, 23:05]

    • lud@lemm.ee · +3 · 4 months ago

      Having everything on just a few VM hosts is so much easier, cheaper, and more efficient. It’s eventually a bigger investment, though. The days of bare metal are long gone!

      • just_another_person@lemmy.world · +1 · 4 months ago

        Sorry, I think you’re misunderstanding what I’m saying. You can surely do that, but if the host goes down, everything goes down. Single point of failure.