First, they restricted code search without logging in, so I’m using Sourcegraph. But now I can’t even view discussions or the wiki without logging in.

It was a nice run

      • venji10@feddit.de · 8 months ago

        You don’t need the question mark. If something is for-profit (or can be used for profit), then sooner or later it will be enshittified.

        They have teams of people whose entire job is figuring out ways to wring a few more cents from somebody. Put them at the helm of a company that’s stood for 1000 years and they’ll be thrilled at how easy it will be to use that name to sell plastic dogshit at a premium price.

        No. I am able to decide for myself whether or not I need 2FA. A code via e-mail is enough for me. If you feel like you need 2FA, feel free to enable it for yourself…

        • Asudox@lemmy.world · edited 8 months ago

          Not sure how a company can turn a public digital key or a mathematically calculated number (both completely unlinked to your real identity in any way) into profit. But you do you, I guess.
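The “mathematically calculated number” here is a TOTP code: it is derived from a shared secret and the current time via HMAC, with nothing tied to your identity. A minimal sketch of RFC 6238 using only Python’s standard library (the function name and example secret are illustrative, not any provider’s API):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: a short-lived code from a shared secret and the clock."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of `step`-second intervals since the Unix epoch.
    counter = int((t if t is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the MAC.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

With the RFC 6238 test secret (`GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`, i.e. the ASCII bytes `12345678901234567890`) at t=59 this yields `287082` — the server and the authenticator app compute the same number independently, so no personal data changes hands.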

          • venji10@feddit.de · 8 months ago

            Well, I never said that. It just shows the general direction they are heading. They are literally FORCING you to enable it. I am not a baby. I don’t need a babysitter.

  • Omega_Haxors@lemmy.ml · 8 months ago

    The writing was on the wall when they built a generative AI on everyone’s code, of course without asking anyone for permission.

    • Elise@beehaw.org · 8 months ago

      It’s an interesting debate, isn’t it? Does AI transform something free into something that’s not, or does it simply study the code?

      • Omega_Haxors@lemmy.ml · edited 8 months ago

        There’s no debate. LLMs are plagiarism with extra steps. They take data (usually illegally) wholesale and then launder it.

        A lot of people have been doing research into the ethics of these systems, and that’s more or less what they found. The reason they’re black boxes is precisely the one we all suspected: they were made that way because if they weren’t, we’d all see them for what they are.

          • Turun@feddit.de · 8 months ago

            I doubt they have a factual basis for their opinion, considering that

                they were made that way because if they weren’t we’d all see them for what they are.

            is just plain wrong. Researchers would love to have a non-black-box AI (i.e. a white-box AI), but it’s unfortunately impossible with the current architecture.