I see the rise in popularity of Linux laptops, where hardware compatibility is ready out of the box. However, I wonder how I would build a PC right now with a budget to high-end specification. For now I’m thinking:

  • Case: does not matter
  • Fans: does not matter
  • PSU: does not matter
  • RAM: does not matter I guess?
  • Disks: does not matter I guess?
  • CPU: AMD / Intel - does not matter but I would prefer AMD
  • GPU: AMD / Intel / Nvidia - for gaming and Wayland - AMD; for AI, ML, CUDA and other technologies with first-class support - Nvidia.

And now the most confusing part for me - motherboard… Is there even some coreboot or libreboot motherboard for PC that supports “high end” hardware?

Let me also state the purpose of this Linux PC. Choose any of these:

  1. Blender 3D Animation rendering
  2. Gaming
  3. Local LLM running

If you have some good resources on this also let me know.

  • jrgd@lemm.ee · 1 month ago

    A key list of compatible/incompatible components to look for:

    • GPU
    • Network Interfaces (Ethernet and Wi-Fi)
    • Audio Interfaces (not that much of an issue anymore)
    • Disks
    • Motherboards
    • CPU (excluding x86 ecosystem)
    • Peripherals

    The explanations for this are pretty long, but are meant to be fairly exhaustive in order to catch most, if not all, pitfalls one could possibly encounter.

    GPU:

    A big one is the choice between AMD, Intel, and NVidia. I am going to leave out Intel for compute as I know little about the state it is in. For desktop and gaming usage, go with AMD or Intel. NVidia is better than it used to be, but still lags behind in proper Wayland support, and the lack of in-tree kernel drivers still makes it more cumbersome to install and update on many distros, whereas using an AMD or Intel GPU is fairly effortless.

    For compute, NVidia is still the optimal choice for Blender, Resolve, and LLMs. That isn’t to say that modern AMD cards don’t work for these tasks. For Blender and DaVinci Resolve, you can get them to use RDNA+ AMD cards through ROCm + HIP, without requiring the proprietary AMD drivers. For Resolve especially, there is some serious setup involved, but it is made easier through this flatpak for Resolve and this flatpak for the ROCm runtime. ML tasks depend on the software used. For instance, PyTorch has alternate builds that can make use of ROCm instead of CUDA. Tools depending on PyTorch will often have you change the PyTorch source, or you may have to manually patch in the ROCm PyTorch for the tool to work correctly on an AMD card.
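    A minimal sketch of that distinction (the helper is hypothetical; to the best of my knowledge, ROCm builds of PyTorch set `torch.version.hip` while CUDA builds set `torch.version.cuda`):

```python
# Sketch: telling a CUDA build of PyTorch apart from a ROCm one.
# Pass in torch.version.cuda and torch.version.hip from an installed torch;
# tools hard-coded to look for CUDA often just need this distinction patched in.
def torch_backend(cuda_version, hip_version):
    if hip_version is not None:
        return "rocm"
    if cuda_version is not None:
        return "cuda"
    return "cpu"
```

    On a correctly patched-in ROCm PyTorch, `torch_backend(torch.version.cuda, torch.version.hip)` should come back as "rocm" even though the rest of the tool keeps calling the familiar `torch.cuda` API.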

    Additionally, I don’t have performance benchmarks, but I would have to guess all of these tasks aren’t as performant if compared to closely equivalent NVidia hardware currently.

    Network Interfaces:

    One section of hardware I don’t see brought up much is NICs (including the ones on the motherboard). Not all NICs play as nicely as others. Typically I will recommend getting Ethernet and wireless network interfaces from Intel or Qualcomm over others like Realtek, Broadcom, and Ralink/MediaTek. Many Realtek and MediaTek NICs are hit-or-miss, and a majority of Broadcom NICs I have seen are just garbage. I have not tested AMD+MediaTek’s collaboration Wi-Fi cards, so I can’t say how well they work.

    Bluetooth generally sits in this category as well. Bluetooth provided by a reputable PCIe/M.2 wireless card is often much more reliable than most of the Realtek, Broadcom, and MediaTek USB dongles.
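    As a rough illustration (the helper is hypothetical, but the four-digit hex IDs are the standard PCI vendor IDs printed by `lspci -nn`), you can check which NIC vendor a board actually ships with before or after buying:

```python
import re

# PCI vendor IDs as shown in `lspci -nn` output, mapped to the vendors above.
NIC_VENDORS = {
    "8086": "Intel",             # recommended
    "168c": "Qualcomm Atheros",  # recommended
    "10ec": "Realtek",           # hit-or-miss
    "14c3": "MediaTek",          # hit-or-miss
    "14e4": "Broadcom",          # mostly garbage in my experience
}

def nic_vendor(lspci_line):
    """Pull the vendor out of a line like
    'Network controller: Intel Wi-Fi 6 AX200 [8086:2723]'."""
    m = re.search(r"\[([0-9a-f]{4}):[0-9a-f]{4}\]", lspci_line)
    return NIC_VENDORS.get(m.group(1), "unknown") if m else "unknown"
```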

    Audio Interfaces:

    This one isn’t as much of a problem as it used to be. A lot of cards that worked but had many quirks under PulseAudio (mainly a wide variety of Realtek on-board chipsets) tend to work just fine with Pipewire. For external audio interfaces: if it is compliant to spec, it likely works just fine. Avoid those that require proprietary drivers to function.

    Disks:

    Hard drives and SSDs are mostly fine. I would personally avoid generic cheap-quality SSDs and those manufactured by Samsung. A lot of SATA drives have various issues, though I haven’t seen many new products from reputable companies actually releasing with broken behavior as documented by the kernel. If you wish to take a detailed look at the devices the kernel has restricted broken functionality on, here is the list.

    Additionally, drives may be one component besides the motherboard where you might actually see firmware updates for the product. Many vendors only release EXE files for Windows to update device firmware, but many nicer vendors actually publish to the LVFS. You can search whether a vendor/device has firmware supplied here.

    Motherboards:

    In particular, motherboards are included mainly because they have audio chipsets and network interfaces soldered and/or socketed to them. Like disks, motherboards may or may not have firmware updates available in LVFS. However, most motherboard manufacturers allow for updating the BIOS via USB stick (though for some laptops I have seen, vendors only publish EXE files to do so). For most desktop boards, however, one should always be able to update the motherboard BIOS fine from a Linux PC.

    Some motherboards have quirky Secure Boot behavior that prevents Secure Boot from working on a Linux machine. Additionally, some boards (mostly on laptops again) have either broken or adjustable power-state modes. Those with adjustable modes allow switching between Windows-specific and standards-compliant behavior.

    Besides getting a Framework laptop ‘Chromebook edition’, I don’t think there is much you will find for modern boards supporting coreboot or libreboot.

    CPUs:

    For your use case, this doesn’t really matter. Pretty much every modern x86 CPU will work fine on Linux. One only has to hunt for device support when running on ARM or RISC-V: not every kernel supports every ARM or RISC-V CPU or SoC.

    Peripherals:

    Obviously one of the biggest factors for many new users switching to Linux is their existing peripherals: those that require proprietary software on Windows often have missing functionality, or don’t work at all, on Linux. Some peripherals have been reverse engineered to work on Linux (see Piper, ckb-next, OpenRazer, StreamController, OpenRGB).

    Some peripherals like printers may just not work on Linux or may even work better than they ever did on Windows. For problematic printers, there is a helpful megalist on ArchWiki.

    For any other peripherals, it’s best to just do a quick search to see if anyone else has used it and if problems have occurred.

    • Telorand@reddthat.com · 1 month ago

      I am going to leave out Intel for compute as I know little about the state it is in.

      I forget which community it was posted in, but iirc, Intel just lost a bunch of their Linux devs (Fired? Quit? I forget). Arc had some dedicated dev time put towards it, but unless something has changed, it’s likely still a hanging question as to what the future of Arc driver updates will be on Linux.

      So you are probably safe to recommend people avoid Intel GPUs for now.

      • jrgd@lemm.ee · 1 month ago

        I am under the presumption that the current state of the Intel Arc Alchemist GPUs will likely remain about the same under Mesa even if support is dropped today by Intel. Am I mistaken in the amount of continued driver effort Intel has been putting in for the Mesa GPU drivers?

        Obviously if this is true, one should probably remain wary of upcoming Battlemage GPUs.

    • Possibly linux@lemmy.zip · 1 month ago

      I would be surprised if any modern network interface didn’t work well with Linux. The only problems I have had have been on hardware over 7 years of age. Also, SSDs are all pretty good; as long as it has a good warranty you are fine. Just avoid the cheap Chinese brands.

      You can totally get a random machine and then install Linux. It just works these days.

    • Heavybell@lemmy.world · 1 month ago

      I think the audio interface thing needs a big asterisk: if you are only interested in stereo, then it’s not much of an issue. But getting 5.1 to work has been a huge hassle for me.

      • jrgd@lemm.ee · 1 month ago

        What hardware, audio interface, and sound server is in use for your 5.1 Surround setup?

        • Heavybell@lemmy.world · 1 month ago

          Using Pipewire, and I’ve tried both the SB X4 USB DAC and an SBX AE-5 PCIe card. Obviously, being Creative products, that’s the cause of my issues, but I have found it very, very hard to find alternatives. Every recommended option seems to just support stereo.

  • Kidplayer_666@lemm.ee · 1 month ago

    While the PSU doesn’t matter for Linux compatibility, please, please buy a good one from a reputable brand. If you’re going high end, get at least an 80 Plus Gold PSU.

  • brucethemoose@lemmy.world · 1 month ago (edited)

    Basically the only thing that matters for LLM hosting is VRAM capacity. Hence AMD GPUs can be OK for LLM running, especially if a used 3090/P40 isn’t an option for you. It works fine, and the 7900/6700 are like the only sanely priced 24GB/16GB cards out there.
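    A back-of-envelope sketch of why VRAM capacity dominates (the helper and the 20% overhead factor are rough assumptions, not measured numbers):

```python
def vram_needed_gb(params_billions, bits_per_weight=4, overhead=1.2):
    """Weights at the given quantization, plus ~20% for KV cache and
    runtime overhead (rough rule of thumb, not exact)."""
    return params_billions * bits_per_weight / 8 * overhead

# A 4-bit 70B model wants roughly 42 GB (multi-GPU or heavy offload
# territory), while a 4-bit 22B model fits comfortably on a 16 GB card.
```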

    I have a 3090, and it’s still a giant pain with Wayland, so much so that I use my AMD iGPU for display output and Nvidia still somehow breaks things. Hence I just do all my gaming in Windows TBH.

    CPU doesn’t matter for LLM running; cheap out with a 12600K, 5600, 5700X3D or whatever. And the single-CCD X3D chips are still king for gaming AFAIK.

    • GenderNeutralBro@lemmy.sdf.org · 1 month ago

      Basically the only thing that matters for LLM hosting is VRAM capacity

      I’ll also add that some frameworks and backends still require CUDA. This is improving but before you go and buy an AMD card, make sure the things you want to run will actually run on it.

      For example, bitsandbytes support for non-CUDA backends is still in alpha stage. https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
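      A hypothetical pre-purchase check along those lines (the library sets and helper name are illustrative; that bitsandbytes' non-CUDA backends are still alpha is from the link above):

```python
# List what your workflow imports, and flag anything still CUDA-only
# before committing to an AMD card.
CUDA_ONLY = {"bitsandbytes"}              # multi-backend support still alpha
ROCM_OK = {"torch", "llama_cpp", "vllm"}  # have working ROCm paths

def rocm_blockers(required):
    """Return the required libraries that would block a move to an AMD card."""
    return sorted(set(required) & CUDA_ONLY)
```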

      • brucethemoose@lemmy.world · 1 month ago

        For local LLM hosting, basically you want exllama, llama.cpp (and derivatives), and vllm, and ROCm support for all of them is just fine. It’s absolutely worth having a 24GB AMD card over a 16GB Nvidia one, if that’s the choice.

        The big sticking point I’m not sure about is flash attention for exllama/vllm, but I believe the triton branch of flash attention works fine with AMD GPUs now.
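        As a sketch of the knob that matters most for fitting a model (`n_gpu_layers` is the real llama.cpp option; the helper, per-layer size, and reserve values here are illustrative):

```python
def pick_n_gpu_layers(total_layers, gib_per_layer, vram_gib, reserve_gib=1.0):
    """How many transformer layers fit in VRAM, keeping some reserve for
    context/KV cache; the rest spill to CPU and slow things down."""
    fit = int((vram_gib - reserve_gib) // gib_per_layer)
    return max(0, min(total_layers, fit))

# e.g. an 80-layer 70B model at ~0.9 GiB/layer on a 24 GB card:
# pick_n_gpu_layers(80, 0.9, 24) -> 25 layers on GPU, the rest on CPU,
# whereas a 13 GiB 22B model fits entirely on the same card.
```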

    • Possibly linux@lemmy.zip · 1 month ago

      This only matters if you are running large models. If you stick with Mistral sized models you don’t need nearly as much hardware.

      • brucethemoose@lemmy.world · 1 month ago (edited)

        These days, there are amazing “middle sized” models like Qwen 14B, InternLM 20B and Mistral/Codestral 22B that are such a massive step over 7B-9B ones you can kinda run on CPU. And there are even 7Bs that support a really long context now.

        IMO it’s worth reaching for >6GB of VRAM if LLM running is a consideration at all.

    • Psyhackological@lemmy.ml (OP) · 1 month ago

      VRAM and RAM, I think. Still, AMD always seems slower than Nvidia for this purpose, for some reason. Same for Blender benchmarks.

      Ah I use my AMD GPU with Bazzite and it is wonderful.

      CPU does not matter when the GPU matters. Otherwise, small models will do fine on a CPU, especially with more recent instructions for running LLMs.

      • GenderNeutralBro@lemmy.sdf.org · 1 month ago

        Yeah, AMD is lagging behind Nvidia in machine learning performance by like a full generation, maybe more. Similar with raytracing.

        If you want absolute top-tier performance, then the RTX 4090 is the best consumer card out there, period. Considering the price and power consumption, this is not surprising. It’s hardly fair to compare AMD’s top-end to Nvidia’s top-end when Nvidia’s is over twice the price in the real world.

        If your budget for a GPU is <$1600, the 7900 XTX is probably your best bet if you don’t absolutely need CUDA. Any performance advantage Nvidia has goes right out the window if you can’t fit your whole model in VRAM. I’d take a 24GB AMD card over a 16GB Nvidia card any day.

        You could also look at an RTX 3090 (which also has 24GB), but then you’d take a big hit to gaming/raster performance and it’d still probably cost you more than a 7900XTX. Not really sure how a 3090 compares to a 7900XTX in Blender. Anyway, that’s probably a more fair comparison if you care about VRAM and price.

      • brucethemoose@lemmy.world · 1 month ago

        I am not a fan of CPU offloading because I like long context, 32K+. And that absolutely chugs if you even offload a layer or two.
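        A rough sketch of why long context chugs: the KV cache grows linearly with context length, on top of the weights. (The formula is the standard K+V accounting; the model dimensions below are for a hypothetical 7B-class model without grouped-query attention.)

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, context, bytes_per_elem=2):
    """Two cached tensors (K and V) per layer, each of shape
    [n_kv_heads, context, head_dim], at fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * context * bytes_per_elem / 2**30

# Hypothetical 7B-class model: 32 layers, 32 KV heads, head dim 128.
# At 32K context the cache alone is kv_cache_gib(32, 32, 128, 32768)
# = 16 GiB, which is why offloading even a layer or two hurts so much.
```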

  • Possibly linux@lemmy.zip · 1 month ago

    That website is not really useful, as the information is wrong or out of date.

    Pretty much all modern hardware works with Linux. There are a few exceptions, but they’re very rare.

  • grue@lemmy.world · 1 month ago

    Is there even some coreboot or libreboot motherboard for PC that supports “high end” hardware?

    As far as I know, the highest-end motherboard that supports Libreboot is an Opteron – not Epyc, Opteron – dual-socket server board from about a decade ago.

  • Nanook@friendica.eskimo.com · 1 month ago

    PSU definitely matters. I went initially with a Thermaltake; it failed after a couple of months. Then a Gigabyte, same thing. Now I’m running a Seasonic and finally appear to be stable.

      • Nanook@friendica.eskimo.com · 1 month ago

        @possiblylinux127 It was a very high-powered CPU, an i9-10980XE, overclocked for all I could get out of it. At max load, it drew around 540 watts. The supplies were rated at 1kW, but both were short lived; the Seasonic I replaced them with is 1200 watts. Even the cables are better quality: the previous supplies’ cables were 16 gauge, but those that came with the Seasonic are 12 gauge.

  • bloodfart@lemmy.ml · 1 month ago

    You need to use intel/nvidia.

    You might be able to get away with AMD instead of Intel, but NVENC and CUDA support is non-negotiable for your use case.

    You will not encounter any problems as long as you don’t run Wayland.

    Any motherboard is fine. You don’t need coreboot support to run Linux.

    • Matty_r@programming.dev · 1 month ago

      Just a point on Wayland: I have an Nvidia GPU and have been on Wayland for a couple of months now (KDE Plasma), and it’s been entirely problem free; I actually forgot I switched from X11 to Wayland.

      Blender has support for Wayland now too.

      I do a lot of gaming and development. Ever since Nvidia made those changes for Wayland support and KDE added that explicit sync stuff, it’s been great. Before all of that, though, I had heaps of issues with flickering and just general usability.

      Wayland actually fixed a number of issues for me, like stuttering when notifications appear, and jankiness when resizing windows.

    • Possibly linux@lemmy.zip · 1 month ago

      You can absolutely use an AMD card for LLMs. You can even use the CPU if you don’t mind it being slower.

      If this person were an AI researcher doing lots of LLM work, it might be different, but somehow I think they are just a casual user who asks questions.

      • bloodfart@lemmy.ml · 1 month ago

        Both Blender and every LLM library I’m aware of work better and have broader support with Nvidia hardware.

        That’s two out of three of the OP’s use cases.

        Gaming, the third use case, is perfectly fine using an nvidia card.

        There’s nothing wrong with amd video cards, but for this user, in this case, they’re not the choice I would recommend.

        Especially if they’re just a normal person who asks questions, because it’s much, much more likely that someone who uses Blender or LLMs will be able to answer their questions and address any hardware-related issues, since people using Blender and LLMs are broadly using Nvidia cards.

        • Possibly linux@lemmy.zip · 1 month ago

          The problem is that Nvidia cards also suck under Linux. Sure, it may work in some configurations, but with an Intel or AMD GPU it works without fiddling around. As long as you have a new enough kernel, it is a good experience.

          • bloodfart@lemmy.ml · 1 month ago

            I don’t think that’s relevant.

            To employ a car metaphor, I own a small Japanese sedan. I’ve installed an aftermarket tow hitch and have used it to haul small trailers. I have a pair of toolboxes in the trunk and I live up a road that after recent events would be considered a technical driving course. I’m able to get home just fine in my small, low clearance car with a four cylinder engine and touring tires.

            If a person asked me: “what vehicle should I get for towing, working in trades and off-roading on the weekend?”, I’d absolutely never suggest a Honda Accord.

            While the experience of owning a diesel truck is more complex and requires some fiddling around (remembering to use the green pump, understanding when to use the fuel cutoff switch, using a block heater when it’s cold outside, saving up more money for repairs, and generally operating the vehicle differently under almost any comparable conditions), it’s the right tool for the job at hand. Dealing with those differences is part and parcel not just of handling the tool, but of completing the job.

      • bloodfart@lemmy.ml · 1 month ago

        I don’t know of any MSI or ASUS boards with problems. Of course, I rejected coreboot as a requirement, so that plays into it.

        My personal experience is: don’t overclock and everything will run fine for at least ten years.

        Blender works faster with Nvidia, and it’s been the optimal hardware for maybe two decades now. There’s just so much support and knowledge out there for getting every feature in the tool working with it that I couldn’t in good faith recommend a person use AMD cards to have a slightly nicer Wayland experience or a slightly better deal.

        If you’re only doing LLM text work, then a case could be made for a non-CUDA (non-Nvidia) accelerator. Of course, at that point you’d be better served by one of those Coral doodads.

        Were you only doing text-based ML work, or was there image recognition/diffusion/whatever in there too?