First, they restricted code search without logging in, so I’m using Sourcegraph. But now I can’t even view Discussions or the wiki without logging in.
It was a nice run
The final straw for me was forcing people to use 2FA.
Why? That’s a good thing.
You don’t need the question mark. If something is for-profit (or can be used for profit) then sooner or later it will be enshittified.
They have teams of people whose entire job is figuring out ways to wring a few more cents from somebody. Put them at the helm of a company that’s stood for 1000 years and they’ll be thrilled at how easy it will be to use that name to sell plastic dogshit at a premium price.
No. I am able to decide for myself whether or not I need 2FA. A code via e-mail is enough for me. If you feel like you need 2FA, feel free to enable it for yourself…
Not sure how a company can turn a public digital key or a mathematically calculated number (both of them completely unlinked to your real identity in any way) into profit. But you do you, I guess.
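For what it’s worth, that “mathematically calculated number” is just TOTP (RFC 6238): an HMAC over a shared secret and the current time, nothing more. Here’s a minimal stdlib-only sketch — the secret below is the RFC’s published test key, not anything tied to a real account, and this isn’t GitHub’s code, just the standard algorithm:

```python
# Minimal TOTP sketch (RFC 6238). Derives a one-time code from
# only a shared secret and the clock -- no identity involved.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Return the current time-based one-time code for a base32 secret."""
    key = base64.b32decode(secret_b32)
    # Number of whole timesteps since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32), fixed time T=59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # -> 287082
```

The point stands either way: the server stores the same secret, but the code itself reveals nothing about who you are.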
Well, I never said that. It just shows the general direction they are heading. They are literally FORCING you to enable it. I am not a baby. I don’t need a babysitter.
A code via E-Mail is enough for me.
Which basically is another type of 2FA…
At least this one isn’t utter bullshit
The writing was on the wall when they built a generative AI on everyone’s code, of course without asking anyone for permission.
It’s an interesting debate, isn’t it? Does AI transform something free into something that’s not? Or does it simply study the code?
There’s no debate. LLMs are plagiarism with extra steps. They take data (usually illegally) wholesale and then launder it.
A lot of people have been doing research into the ethics of these systems, and that’s more or less what they found. The reason they’re black boxes is precisely the one we all suspected: they were made that way because, if they weren’t, we’d all see them for what they are.
Can you link it please? I’d like to inform myself.
I doubt they have a factual basis for their opinion, considering
they were made that way because if they weren’t we’d all see them for what they are.
is just plain wrong. Researchers would love to have a non-black-box AI (i.e., a white-box AI), but it’s unfortunately impossible with the current architecture.