• 40 Posts
  • 84 Comments
Joined 1 year ago
Cake day: June 10th, 2023


  • The problem is that games don’t run at all or require major effort to run without issues.

    A major cause of that is the distro - when it comes to gaming, the distro makes a huge difference, as I outlined previously. The second major cause is the flavor of Wine you chose (Proton-GE is the best; not sure what you used). The third factor is whether the games are even compatible in the first place - check that (via ProtonDB, Reddit etc) BEFORE you recommend Linux to a gamer.

    In saying all that, I’ve no idea about pirated stuff though, you’re on your own on that one - Valve and the Wine developers obviously don’t test against pirated copies, and you won’t get much support from the community either.
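    On the compatibility check: ProtonDB also has an unofficial summary endpoint you can query by Steam app ID, which is handy for scripting. A quick sketch - the URL and field name are an assumption based on what the site currently serves, so they may change without notice (271590 is GTA V's app ID):

    # query ProtonDB's unofficial per-game summary (unsupported API, may change)
    $ curl -s "https://www.protondb.com/api/v1/reports/summaries/271590.json" | jq -r .tier
    gold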


  • Unfortunately you chose the wrong distro for your friend - Linux Mint isn’t good for gaming. It uses an outdated kernel/drivers/other packages, which means you’ll be missing out on all the performance improvements (and fixes) found in more up-to-date distros. Gaming on Linux is a fast-moving target; the landscape is changing at a rapid pace thanks to the development efforts of Valve and the community. So for gaming, you’d generally want to be on the latest kernel+mesa+wine stack.

    Also, as you’ve experienced, on Mint you’d have to manually install things like Waydroid and other gaming software, which can be a PITA for newbies.

    So instead, I’d highly recommend a gaming-oriented distro such as Nobara or Bazzite. Personally, I’m a big fan of Bazzite - it has everything you’d need for gaming out-of-the-box, and you can even get a console/Steam Deck-like experience if you install the -deck variant. Also, because it’s an immutable distro with atomic updates, it has a very low chance of breaking, and on the rare occasion that an update has some issues, you can just select the previous image from the boot menu. So this would be pretty ideal for someone who’s new to Linux, likes to game, and just wants stuff to work.

    In saying that, getting games to run on Linux can be tricky sometimes, depending on the game. The general rule of thumb is: try running the game using Proton-GE, and if that fails, check ProtonDB for any fixes/tweaks needed for that game - with this, you’ll rarely have to spend hours troubleshooting, unless you’re playing some niche game that no one has tested before.
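    And if Steam doesn’t list Proton-GE, installing it manually is just a download-and-extract affair. A rough sketch - the release filename is illustrative, so grab the current one from the GloriousEggroll/proton-ge-custom releases page on GitHub:

    # create Steam's directory for custom compatibility tools
    $ mkdir -p ~/.steam/root/compatibilitytools.d
    # extract the downloaded release tarball into it (version is illustrative)
    $ tar -xf GE-Proton9-5.tar.gz -C ~/.steam/root/compatibilitytools.d
    # restart Steam, then select it per-game under Properties > Compatibility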



  • Bazzite. Here’s why:

    • Optimised for gaming (gaming-optimised kernel, common tweaks pre-applied, all common gaming apps pre-installed, like Steam, MangoHud etc)
    • All necessary drivers pre-installed (game controllers, RGB, and even proprietary NVIDIA)
    • A Steam Deck-like gaming experience, if you want (the -deck variant boots directly into Steam)
    • Immutable and atomic (image-based OS updates, so an update either applies fully or not at all - there’s no chance of a half-applied, broken state)
    • Easy rollbacks (just select the previous image in the GRUB menu)

    But since you said:

    how to squeeze the best performance out of this

    and if you’re really serious about squeezing out the best performance, then check out the Arch-based CachyOS - unlike most other Linux distros, Cachy has optimised x86-64-v3 and v4 packages in their repos, which means apps can make use of newer CPU instruction set extensions such as AVX2 (v3) and AVX-512 (v4). Most other Linux distros, on the other hand, still build for the x86-64 baseline (v1) for compatibility reasons, which unfortunately means that you’d be missing out on all the cool new CPU instructions introduced over the past 16 years.

    You can read more about microarchitecture levels (aka MARCH) here: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels

    In addition to the MARCH, Cachy’s packages have other optimisations such as LTO/PGO, optimised kernel with the BORE and Rusty schedulers which are better for gaming, plus several performance-oriented tweaks which you’d otherwise have to do manually on Arch (such as makepkg.conf tweaks, pacman.conf tweaks etc).
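    To check which level your own CPU supports, recent glibc can tell you directly. A quick sketch - this needs glibc 2.33+, and the loader path and output shown are for a typical x86-64 distro with a v3-capable CPU, so yours may differ:

    # the dynamic loader lists the microarchitecture levels it can use
    $ /lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v
      x86-64-v4
      x86-64-v3 (supported, searched)
      x86-64-v2 (supported, searched)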

    Finally, Cachy are always on the bleeding edge when it comes to gaming/driver/kernel/performance related stuff, so you’ll get all the good stuff even before Bazzite or other optimised distros. For instance, Cachy was the first distro to include the new NVIDIA driver which has explicit sync support for better Wayland compatibility, and they’re always on top of major Arch developments and provide detailed announcements which are relevant to gamers and performance freaks.

    Eg, here’s their recent NVIDIA announcement:

    Hi @here,

    as you maybe noticed, we have rolled out the new NVIDIA driver, which includes the explicit sync protocol and tearing for Vulkan. We have prioritized moving this forward to finally resolve the Wayland situation. Additionally, Arch has pushed CUDA to 12.5, which is NOT compatible with the current 550 driver (it needs the 555 driver).

    The beta driver is not perfect, but so far we are applying some fixes to avoid issues and work around performance problems by disabling the GSP firmware load. This is handled via the “cachyos-settings” package.

    Anyways, since some people may have problems with this driver, here are short instructions to manually downgrade and block the driver:

    […]

    If you are facing issues with the new NVIDIA Driver, reproduce the issues and then run “sudo nvidia-bugreport.sh” and report it to their forum: https://forums.developer.nvidia.com/c/gpu-graphics/linux/148

    We are also now shipping a precompiled nvidia-open module. This will also be installed by default for users with supported cards, as soon as NVIDIA releases the 560 drivers.

    The CachyOS Team

    So as you can see, they’re pretty on to it with this sorta stuff.

    Now the Bazzite team are also like the Cachy guys and keep up with this stuff, but because they’re based on Fedora, they can’t be as bleeding-edge or as optimised as Arch. So it’s up to you - if you prefer stability, primarily gaming-focused optimisations, and something that “just works”, then get Bazzite; or if you want an ultra-optimised distro to squeeze the most performance out of your box, and don’t mind occasionally diving into the terminal and getting your hands dirty, then get CachyOS.

    cc: @01189998819991197253@infosec.pub




  • IMO you shouldn’t look at it as “should I become an x user”, because that sort of implies you’re getting married to that distro. Instead, you should be asking, “should I use x to solve y?” For instance, I use RHEL, Debian (Raspbian), Fedora (Asahi), Fedora Atomic (Bazzite) and Arch. I also use Windows, macOS and FreeDOS. All solve different needs and problems. There’s no rule saying you should stick to only one distro/OS - use whatever suits your needs, hardware and environment best. :)




  • d3Xt3r@lemmy.nz to Linux@lemmy.ml · Switching from win 11
    1. I used OneDrive, and especially the file on-demand (all files on server visible in explorer but only downloaded when needed) feature a lot

    You can continue to use OneDrive. I use the OneDriver client and it works really well - your drive appears just like a local drive, but files only get downloaded when you try to access them. Once downloaded, a file is cached locally, available offline, and kept in sync automatically. Other cloud providers should have similar FUSE clients available.
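    For reference, a minimal OneDriver setup looks roughly like this - the mount path is just an illustration, and it’s packaged for most major distros, so check your repos first:

    # mount your OneDrive at any empty directory; the first run
    # opens a browser window for the Microsoft login
    $ mkdir -p ~/OneDrive
    $ onedriver ~/OneDrive
    # (it can also be kept mounted permanently via a systemd user
    # unit - see the OneDriver docs for the exact unit name)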

    1. What are best practices for managing apps?

    Best practice is to stick to packages provided by your distro’s repos. Flatpak should be your second option if you can’t find your app there, and AppImages your third (Flatpaks are superior, as they can share dependencies, unlike AppImages). Avoid Snap. In fact, avoid any distros that even use Snap (*buntu). Also, if you’re on a Debian/Ubuntu-based distro, avoid adding PPAs (third-party user repositories) as much as possible, as these can cause dependency issues and may cause pain when you upgrade your distro.
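    As a concrete sketch of that order of preference (Firefox as the stand-in app - substitute whatever you’re after):

    # 1. distro repo first (apt shown - use your distro's equivalent)
    $ apt search firefox
    # 2. fall back to Flatpak/Flathub if the repos don't have it
    $ flatpak install flathub org.mozilla.firefox
    # 3. last resort: an AppImage - mark it executable and run it
    $ chmod +x ./SomeApp.AppImage && ./SomeApp.AppImage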

    Is there a GUI (I know) way to see all applications

    That should be provided by your distro - GNOME-based ones have “Software” and KDE-based ones have “Discover”.


  • Forget Linux for a second. What you need to be aware of is that both variants come with only 4GB of soldered-on RAM and eMMC storage. That means even if you do manage to get Linux going on them, it’s going to be super slow for any sort of practical web/GUI needs. 4GB of RAM is barely enough to run a browser these days, and if you tack on a full-fledged DE and multitask with other apps, you’ll be pushing memory pages to the disk (ie, swapping). And when that happens, you’ll really feel the slowness. Trust me, you don’t want to be swapping to eMMC - that’s old tech, something like 3x slower than UFS, which in turn is a LOT slower than M.2 NVMe (the current standard used in “proper” laptops/convertibles).

    Also, consider this for perspective - even budget smartphones these days come with at least 6GB RAM and UFS storage. So this laptop/convertible - a device meant for productivity - is a complete ripoff.

    If money is an issue, then just buy a used laptop (from eBay, or whatever you guys use there). If you’re aiming for good Linux compatibility, ThinkPads are a safe bet. But since you’re after a Surface-like device, you could just get an older Surface. Why settle for an imitation when you can get the real thing? In any case, most older x86 laptops from mainstream brands should work fine on Linux in general - just do a quick Google search to see if there are any quirks or issues.

    Regardless of your choice, avoid the Duet 3. 4GB RAM is completely unacceptable for a laptop in 2024.







  • d3Xt3r@lemmy.nz to Linux@lemmy.ml · Thoughts on CachyOS?

    No need to hop around for the same thing.

    It’s not really the same thing. EndeavourOS is basically vanilla Arch + a few branding packages. CachyOS is an opinionated Arch with optimised packages.

    You still have the option to select the DE and the packages you want to install - just like EndeavourOS - but what sets Cachy apart is the optimisations. For starters, they have multiple custom kernel options, with the BORE scheduler (and a few others), LTO options etc. Then they also have packages compiled for the x86-64-v3 and v4 architectures for better performance.

    Of course, you could also just use Arch (or EndeavourOS) and install the x86-64-v3/v4 packages yourself from ALHP (or even the Cachy repos), and you can even manually install the Cachy kernel or a similar optimised one like XanMod. But you don’t get the custom configs / opinionated stuff - which you may actually not want as a veteran user. But if you’re a newbie, then having those opinionated configs isn’t such a bad idea, especially if you decide to just get a WM instead of a DE.
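    For the DIY route, the gist of ALHP is putting its repos above the stock ones in pacman.conf. A rough sketch from memory - double-check against the ALHP README before running any of it:

    # install ALHP's keyring and mirrorlist (both in the AUR)
    $ paru -S alhp-keyring alhp-mirrorlist
    # in /etc/pacman.conf, add the v3 repos ABOVE their stock
    # counterparts so they take priority:
    [core-x86-64-v3]
    Include = /etc/pacman.d/alhp-mirrorlist

    [extra-x86-64-v3]
    Include = /etc/pacman.d/alhp-mirrorlist
    # then update, letting packages be replaced by their v3 builds
    $ sudo pacman -Syuu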

    I’ve been thru all of the above scenarios, depending on the situation. My homelab is vanilla Arch but with packages from the Cachy repo. I’ve also got a pure Cachy install on my gaming desktop just because I was feeling lazy and just wanted an optimised install quickly. They also have a gaming meta package that installs Steam and all the necessary 32-bit libs and stuff, which is nice.

    Then there’s Cachy Browser, which is a fork of LibreWolf with performance optimisations (kinda similar to Mercury browser, except Mercury isn’t MARCH optimised).

    As for support, their Discord is pretty active, you can actually chat with the developers directly, and they’re pretty friendly (and this includes Piotr Gorski, the main dev, and firelzrd - the person behind the BORE scheduler). Chatting with them, I find the quality of technical discussions a LOT higher than the Arch Discord, which is very off-topic and spammy most of the time.

    Also, I liked their response to Arch changes and incidents. When Arch made the recent mkinitcpio changes, they made a very thorough announcement with the exact steps you needed to take (which was far more detailed than the official Arch announcement). Also, when the xz backdoor happened, they updated their repos to fix it even before Arch did.

    I’ve also interacted with the devs personally on various technical topics - such as CFLAG and MARCH optimisations, performance benchmarking etc - and it seems like they definitely know their stuff.

    So I’ve full confidence in their technical ability, and I’m happy to recommend the distro for folks interested in performance tuning.

    cc: @governorkeagan@lemdro.id


  • Others here have already given you some good overviews, so instead I’ll expand a bit more on the compilation part of your question.

    As you know, computers are digital devices - that means they work on a binary system, using 1s and 0s. But what does this actually mean?

    Logically, a 0 represents “off” and a 1 means “on”. At the electronics level, a 0 may be represented by a low-voltage signal (typically between 0-0.5V) and a 1 by a high-voltage signal (typically between 2.7-5V). Note that the actual voltage levels, or what is used to represent a bit, vary depending on the system. For instance, traditional hard drives use magnetic regions on the surface of a platter to represent 1s and 0s - if a region is magnetised with the north pole facing up, it represents a 1; if the south pole is facing up, it represents a 0. SSDs, which employ flash memory, use cells that can trap electrons, where a charged state represents a 0 and a discharged state represents a 1.

    Why is all this relevant you ask?

    Because at the heart of a computer, or any “digital” device - and what sets a digital device apart from any random electrical equipment - is the transistor. Transistors are tiny semiconductor components that can amplify a signal, or act as a switch.

    A voltage or current applied to one pair of the transistor’s terminals controls the current through another pair of terminals. The resultant output represents a binary bit: it’s a “1” if current passes through, or a “0” if it doesn’t. By connecting a few transistors together, you can form logic gates that can perform simple math like addition and multiplication. Connect a bunch of those and you can perform more complex math. Connect billions of them in the right way and you get a CPU. The first Intel CPU, the Intel 4004, consisted of just 2,300 transistors. A modern CPU that you may find in your PC consists of tens of billions of transistors, and special wafer-scale chips used for machine learning may even contain trillions!

    Now, to pass information and commands to these digital systems, we need to convert our human numbers and language to binary (1s and 0s), because deep down that’s the language they understand. For instance, take the word “Hi”: using the ASCII system, “H” converts to the binary 01001000 and “i” to 01101001. Working in raw binary would be quite tedious for programmers, so we came up with a shorthand - the hexadecimal system - to represent these binary bytes. So in hex, “Hi” would be represented as 48 69, and “Hi World” would be 48 69 20 57 6F 72 6C 64. This makes it a lot easier to work with when we are debugging programs using a hex editor.

    Now suppose we have a program that prints “Hi World” to the screen - in its compiled machine-language format, it’s just a sequence of raw bytes.
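    For instance, here’s just the message data dumped with hexdump - offsets on the left, hex bytes in the middle, decoded ASCII on the right (in the full binary, these data bytes would sit alongside the instruction bytes):

    $ echo "Hi World" | hexdump -C
    00000000  48 69 20 57 6f 72 6c 64  0a                       |Hi World.|
    00000009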

    As you can see, the middle column contains a bunch of hex numbers - in the full program, these bytes are a mix of instructions (“hey CPU, print this message”) and data (“Hi World”).

    Now although the hex code is easier for us humans to work with compared to binary, it’s still quite tedious - which is why we have programming languages, which allows us to write programs which we humans can easily understand.

    If we were to use Assembly language as an example - a language which is close to machine language - it would look like this:

     SECTION .data
msg: db "Hi World",10        ; the string, plus a newline (ASCII 10)
len: equ $-msg               ; its length, computed at assembly time

     SECTION .text

     global main
main:
     mov  edx,len            ; arg 3: number of bytes to write
     mov  ecx,msg            ; arg 2: address of the string
     mov  ebx,1              ; arg 1: file descriptor 1 (stdout)
     mov  eax,4              ; syscall 4 = sys_write
     int  0x80               ; call into the (32-bit) Linux kernel
     mov  ebx,0              ; exit status 0
     mov  eax,1              ; syscall 1 = sys_exit
     int  0x80


    As you can see, the above code is still pretty hard to understand and tedious to work with. Which is why we’ve invented high-level programming languages, such as C, C++ etc.

    So if we rewrite this code in the C language, it would look like this:

    #include <stdio.h>
    int main() {
      printf ("Hi World\n");
      return 0;
    } 
    

    As you can see, that’s much easier to understand than assembly, and takes less work to type! But now we have a problem - our CPU cannot understand this code. So we’ll need to convert it into machine language, and this is what we call compiling.

    Using the previous assembly language example, we can compile our assembly code (in the file hello.asm), using the following (simplified) commands:

    $ nasm -f elf hello.asm
    $ gcc -o hello hello.o
    

    Compilation is actually a multi-step process, and may involve multiple tools, depending on the language/compilers we use. In our example, we’re using the nasm assembler, which first parses and converts the assembly instructions (in hello.asm) into machine code, handling symbolic names and generating an object file (hello.o) with binary code, memory addresses and other instructions. The linker (invoked via gcc) then merges the object files (if there are multiple files), resolves symbol references, and arranges the data and instructions according to the Linux ELF format. This results in a single binary executable (hello) that contains all the necessary binary code and metadata for execution on Linux.
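    The C version is even simpler to build, since gcc drives all of those steps (compile, assemble, link) with a single command. (One caveat: the assembly example above is 32-bit, so on a modern 64-bit distro you’d likely need nasm -f elf32 and gcc -m32, plus the 32-bit libraries.)

    $ gcc -o hello hello.c    # compile, assemble and link in one step
    $ ./hello
    Hi World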

    If you understand assembly language, you can see how our instructions got converted into machine code by inspecting the resulting binary.
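    For example, objdump can disassemble the executable, showing the machine-code bytes alongside the assembly they encode. The addresses and the string address below are illustrative and will vary per build:

    $ objdump -d hello
    ...
    080483ed <main>:
     80483ed: ba 09 00 00 00    mov    $0x9,%edx
     80483f2: b9 a0 96 04 08    mov    $0x80496a0,%ecx
     80483f7: bb 01 00 00 00    mov    $0x1,%ebx
     80483fc: b8 04 00 00 00    mov    $0x4,%eax
     8048401: cd 80             int    $0x80
    ...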

    So when you run this executable using ./hello, the instructions and data, in the form of machine code, will be passed on to the CPU by the operating system, which will then execute it and eventually print Hi World to the screen.

    Now, naturally, users don’t want to do this tedious compilation process themselves; also, some programmers/companies may not want to reveal their code. So most users never look at the code, and just use the binary programs directly.

    In the Linux/open-source world, we have the concept of FOSS (free and open-source software), which encourages sharing of source code, so that programmers all around the world can benefit from, build upon, and improve each other’s code - which is how Linux grew to where it is today, thanks to the sharing and collaboration of code by thousands of developers across the world. This is why most programs for Linux are available to download in both binary and source code formats (with the source code typically available on a git repository like GitHub, or as a single compressed archive (.tar.gz)).

    But when a particular program isn’t available in binary format, you’ll need to compile it from the source code. Doing this is pretty common practice for projects that are still in development - say you want to run the latest Mesa graphics driver, which may contain bug fixes or performance improvements that you’re interested in - you would then download the source code and compile it yourself.

    Another scenario: maybe you want a program optimised specifically for your CPU for the best performance - in which case you would compile the code yourself, instead of using a generic binary provided by the programmer. And some Linux distributions, such as CachyOS, provide multiple versions of such pre-optimised binaries, so that you don’t need to compile them yourself. So if you’re interested in performance, look into the topics of CPU microarchitecture levels and CFLAGS.
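    With gcc, that per-CPU optimisation boils down to a single flag - a minimal sketch, reusing our hello.c from earlier:

    # -march=native lets gcc use every instruction-set extension your
    # particular CPU supports (AVX2 etc), at the cost of the resulting
    # binary no longer being portable to older CPUs
    $ gcc -O2 -march=native -o hello hello.c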

    Sources for examples above: http://timelessname.com/elfbin/


  • This shouldn’t even be a question lol. Even if you aren’t worried about theft, encryption has a nice bonus: you don’t have to worry about securely erasing your drives when you want to get rid of them. I mean, sure, it’s not that big of a deal to wipe a drive, but sometimes you’re unable to do so - for instance, the drive could fail and you may not be able to do the wipe. So you end up getting rid of the drive as-is, but an opportunist could get hold of it, attempt to repair it, and recover your data. Or maybe the drive fails but it’s still under warranty and you want to RMA it - with encryption on, you don’t have to worry about some random accessing your data.
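    For anyone wondering what that looks like in practice: on Linux this is typically LUKS, and most installers offer it as a checkbox. A minimal manual sketch - the device name is illustrative, and luksFormat destroys whatever is on the partition, so be careful:

    # encrypt the partition (DESTROYS existing data on /dev/sdX2)
    $ sudo cryptsetup luksFormat /dev/sdX2
    # unlock it - the decrypted view appears at /dev/mapper/cryptdata
    $ sudo cryptsetup open /dev/sdX2 cryptdata
    # put a filesystem on the decrypted mapping and mount it
    $ sudo mkfs.ext4 /dev/mapper/cryptdata
    $ sudo mount /dev/mapper/cryptdata /mnt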





  • It’s just a check on the version number. As per my previous link, Void’s FreeRDP is still stuck on 2.x, whereas 3.x stable came out last December, with the latest stable being v3.4.0, released 3 weeks ago.

    Nix also fails this test btw, since they too are still stuck on 2.x - and this is an example I’ve often used as an argument against Nix fanbois who tend to claim that Nixpkgs is equivalent or even superior to the AUR, when in reality that’s not the case.

    The reason why I’m so interested in 3.x is that it’s a major upgrade with a ton of QoL improvements. Any serious RDP user will want to switch to FreeRDP 3.x, especially if they’re a Wayland user, game over RDP, or use RemoteApps (eg WinApps). So I check the FreeRDP version of a distro as an indicator of whether that distro is worth my time or not, hence why I call it the “freerdp test”. 3.x is also considered a stable release btw, so there’s really no excuse for a distro not to package it and at least make it available - perhaps under a new name so as to not force an upgrade, if they’re concerned about compatibility issues.
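    If you want to run the test yourself, checking the packaged version is a one-liner on most distros - a couple of examples (package names may differ slightly between distros):

    # Void: search the repos for freerdp and show its version
    $ xbps-query -Rs freerdp
    # Arch: show the repo version of the freerdp package
    $ pacman -Si freerdp
    # or ask the installed client itself
    $ xfreerdp /version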