This isn’t Linux, but it’s Linux-like: a microkernel OS built in the Rust programming language. It’s still experimental, but I think it has great potential. It has a GUI desktop, but the compiler isn’t quite fully working yet.

Has anyone used this before? What was your experience with it?

Note: If this is inappropriate since this isn’t technically Linux, mods please take down.

  • wiki_me@lemmy.ml · 11 months ago

    Mentioning some hardware on the site that is supported and ready for use (say, a Raspberry Pi) could be helpful for anyone who wants to try it. There are probably people who worry it will make their computer explode.

  • aodhsishaj@lemmy.world · 11 months ago

    I wouldn’t say it’s inappropriate, as there is more and more Rust making it into the mainline kernel. I’ll definitely throw this on my Ventoy USB and see if I can get it to boot.

    • Pantherina@feddit.de · 11 months ago

      It is not Linux, but there is no other good community for it, I guess.

      Redox even works on some hardware! It’s made pretty much from scratch, and being a microkernel means you actually need separate drivers, afaik.

  • jack@monero.town · 11 months ago

    Now imagine the new COSMIC desktop environment, in Rust, running on Redox. That would be great.

        • AggressivelyPassive@feddit.de · 11 months ago

          That would complicate things even more.

          Rust has pretty sophisticated guarantees in terms of memory safety. If you added the step of another compiler, you’d have to guarantee a) that the transpiler still produces memory-safe C and b) that a given C compiler actually turns that C code into memory-safe assembly.

          BTW: you don’t have to rewrite everything immediately; you can integrate Rust into existing C and vice versa. Apparently it’s not trivial, but possible. See https://wiki.mozilla.org/Oxidation

        • callyral [he/they]@pawb.social · 11 months ago

          (notice: I am not a Rust or C/C++ expert)

          Doing all that would amount to creating a completely separate programming language from C. Rust is that programming language.

          Fix shitty imports

          Rust does that with modules and crates.

          Improve syntax rule

          You mean having consistent/universal style guidelines? Rust pretty much has that with rustfmt.

          Improve memory management

          Safe Rust is memory safe (using things like the borrow checker), and Unsafe Rust is (usually?) separated using the unsafe keyword.

          Although Unsafe Rust seems to be quite a mess; idk, I haven’t tried it.
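
          As a minimal sketch (my own hypothetical example, not from the thread) of how that separation looks in practice, the `unsafe` block marks exactly where the compiler’s guarantees stop:

          ```rust
          // A safe wrapper around an unsafe slice access. The unsafe block is
          // the only place where the compiler's checks are suspended, and the
          // SAFETY comment documents why it is sound here.
          fn first_byte(bytes: &[u8]) -> Option<u8> {
              if bytes.is_empty() {
                  None
              } else {
                  // SAFETY: we just checked that the slice is non-empty,
                  // so index 0 is in bounds.
                  Some(unsafe { *bytes.get_unchecked(0) })
              }
          }

          fn main() {
              assert_eq!(first_byte(b"redox"), Some(b'r'));
              assert_eq!(first_byte(b""), None);
          }
          ```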

          Other new misc features

          Rust has macros, iterators, lambdas, etc. C doesn’t have those. C++ probably has them, but in a really weird C++ way.
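
          For illustration (a toy example of my own), here are those three features in a few lines: an iterator chain, a closure, and a small declarative macro:

          ```rust
          // A declarative macro that expands at compile time.
          macro_rules! square {
              ($x:expr) => { $x * $x };
          }

          fn main() {
              // Iterator + closure (lambda): double each number in a range.
              let doubled: Vec<i32> = (1..=4).map(|n| n * 2).collect();
              assert_eq!(doubled, vec![2, 4, 6, 8]);

              // Macro invocation.
              assert_eq!(square!(3), 9);
          }
          ```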

            • Spore@lemmy.ml · 11 months ago

              I’d say no. Programming safely requires non-trivial transformation of the code and a radical change in style, which afaik cannot easily be automated.

              Do you think there’s any chance to convert from this to this? It requires understanding of the algorithm and a thorough rewrite. Automated tools can only generate the former, because they must not change C’s crooked semantics.

    • weclaw@lemm.ee · 11 months ago

      From my personal experience I can tell you two reasons. The first is that this is the first general-purpose language that can be used for all projects: you can use it in the web browser via WebAssembly, it is good for backends, and it is low-level enough for OS development and embedded. Other languages are good only for some things and really bad at others. The second reason is that it is designed around catching errors at compile time: the error handling and strict typing force the developer to handle errors. I have to spend more time writing the program, but considerably less time finding and fixing bugs.
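
      To illustrate that compile-time error handling point (a hypothetical example of mine), fallible operations return a `Result`, and the compiler warns if the caller silently ignores it:

      ```rust
      // Parsing a port number returns a Result; the caller is pushed to
      // handle both the Ok and Err cases explicitly.
      fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
          s.parse::<u16>()
      }

      fn main() {
          match parse_port("8080") {
              Ok(p) => assert_eq!(p, 8080),
              Err(e) => panic!("unexpected parse error: {e}"),
          }
          // An invalid input is an Err value, not a crash or a silent zero.
          assert!(parse_port("not-a-port").is_err());
      }
      ```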

      • AggressivelyPassive@feddit.de · 11 months ago

        As much as I want to love Rust, that’s not entirely true.

        Writing a web API in Rust is a pain. It requires way too much boilerplate for very low-level concepts, for example having to deal with all the lifetime crap in a simple CRUD endpoint. I understand why that’s necessary, but compared to Python or Java it’s just a very large mental-load overhead.

        • Schmeckinger@feddit.de · 11 months ago

          You need fewer and fewer lifetime annotations as time goes on; the compiler gets better at inferring them. And you can always use the heap if you want to, or if what you are doing isn’t very low-level.
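
          For example (my own sketch), these two signatures are equivalent; the compiler infers the lifetime in the first one thanks to the elision rules:

          ```rust
          // Elided form: no lifetime annotation needed, the compiler ties the
          // output reference to the single input reference automatically.
          fn first_word(s: &str) -> &str {
              s.split_whitespace().next().unwrap_or("")
          }

          // The same function with the lifetime written out explicitly.
          fn first_word_explicit<'a>(s: &'a str) -> &'a str {
              s.split_whitespace().next().unwrap_or("")
          }

          fn main() {
              assert_eq!(first_word("hello world"), "hello");
              assert_eq!(first_word_explicit("hello world"), "hello");
          }
          ```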

    • MonkCanatella@sh.itjust.works · 11 months ago

      I know the evangelists can be somewhat overwhelming, but its popularity is not unwarranted. It’s fairly easy to pick up and has an incredibly enthusiastic and welcoming community. People like it because it’s incredibly performant and memory-safe. In terms of DX it’s really a joy to work with. It just has a LOT going for it, and the main drawback you’ll hear about (difficulty) is really overblown; most devs can pick it up in a matter of months.

      • Ramin Honary@lemmy.ml · 11 months ago

        The main difficulty I have with Rust (what prevents me from using it), is that the maintainers insist on statically compiling everything. This is fine for small programs, and even large monolithic applications that are not expected to change very often.

        But for the machine learning projects I work on, I might want to include a single algorithm from a fairly large library of algorithms. The amount of memory used is not trivial, I am talking about the difference between loading a single algorithm in 50 MB of compiled code for a dynamically loadable library, versus loading the entire 1.5 GB library of algorithms of statically linked code just to use that one algorithm. Then when distributing this code to a few dozen compute nodes, that 50 MB versus 1.5 GB is suddenly a very noticeable difference.

        There are other problems with statically linking everything as well, for example, if you want your application to be written in a high-level language like Python, TypeScript, or Lisp, you might want to have a library of Rust code that you can dynamically load into the Python interpreter and establish foreign function bindings to the Rust APIs. But this is not possible with statically linked code.
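
        For what it’s worth, the usual workaround for that case is exporting individual functions with a C ABI (a `cdylib` crate), which Python can then load via ctypes. A minimal, hypothetical sketch (the function name is my own invention):

        ```rust
        // A Rust function exported with a C-compatible ABI. In a real project
        // this would be built with crate-type = ["cdylib"] in Cargo.toml,
        // producing a .so/.dll that Python's ctypes can load.
        #[no_mangle]
        pub extern "C" fn add_i32(a: i32, b: i32) -> i32 {
            a + b
        }

        fn main() {
            // Callable from Rust as well; across the C ABI the unmangled
            // symbol name "add_i32" is what foreign callers look up.
            assert_eq!(add_i32(2, 3), 5);
        }
        ```

        Of course, this only exposes a C-shaped surface; the rich Rust type information is still lost at the boundary, which is exactly the limitation described above.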

        And as I understand it, this is a difficult technical problem to solve. Apparently, in order for Rust to optimize a program and guarantee type safety and performance, it needs the type information in the source code. This type information is not normally stored in dynamically loadable libraries (the .so or .dll files), so if you dynamically load a library into a Rust program, its type safety and performance guarantees go out the window. So the Rust compiler developers have chosen to make everything as statically compiled as possible.

        This is why I don’t see Rust replacing C any time soon. A language like Zig might have a better chance than Rust because it can produce dynamically loadable libraries that are fully ABI compatible with the libraries compiled by C compilers.

        • naptera@feddit.de · 11 months ago

          Just asking, as I don’t have that much knowledge about static and dynamic linking: when you link statically, my understanding was that the compiler integrates the implementations of the directly or indirectly used functions and other symbols into the resulting binary, but discards everything else. Wouldn’t that mean it is either smaller overall, or at least as small as a dynamic library plus executable? The dynamic library obviously has to contain every implementation, because it doesn’t know about the executables on the system.

          So the only case where static linking results in a bigger overall size than dynamic linking would be several (at least 2) executables using the same library. And you even said that you only use one algorithm from a big library, so static should be way smaller than dynamic, even with many executables.

          If you meant memory usage: I thought dynamic libraries have to be loaded completely up front, because you can’t know which functions will be needed in the near future and just-in-time loading would be way too slow, while static binaries only contain what will be needed, as I understand it.

  • Pantherina@feddit.de · 11 months ago

    This is VERY important for the future of Linux.

    If you dive into it, Linux security is a total mess. You have SELinux, userspace permission systems, mandatory access control, and all that.

    And then you have the kernel, which is (to roughly quote Daniel Micay from some 5-year-old Reddit comment) “like you imagine systemd, but way worse and completely established”. It is a huge set of software written in unsafe C, with complete access over the entire system, no matter if it’s just some ancient driver, some weird unused filesystem, or whatnot.

    The kernel is huge bloat, and even if you don’t want to accept it, a big reason is distros not getting their shit together and working on the same thing. If drivers can’t be implemented in userspace, because every distro does that differently and things break, then for the sake of unifying everything they get baked into the kernel.

    “Kernel hardening”, as far as I understand it, is mostly just restricting those unneeded features, making it log less critical info, blocking some external manipulation…

    But the essence really is that the Linux kernel isn’t something everyone should use as-is. There should be modules for the hardware components, with external drivers installed alongside.

    I guess Gentoo is right here, but it’s very inconvenient to use. Still, having your own custom kernel, containing only the modules you need, would be a start. In the end, though, separate drivers are necessary.