You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.
/s
I wonder if all those people who supported him like the taste of their feet.
And it’s kinda funny that they are now the ones being removed
What was the behind the scenes deal on this? I remember it happening but not the details
There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.
Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.
Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.
No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.
I mean wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.
There’s probably an alternate timeline where wikipedia is a social network with paid verification by corporate interests who write articles about their own companies and state-funded accounts spreading conspiracy theories.
There are infinite timelines, so it has to exist some(where/when/[insert w word for additional dimension]).
What is OpenAI doing with cancer screening?
AI models can outmatch most oncologists and radiologists in recognition of early tumor stages in MRI and CT scans.
Further developing this strength could lead to earlier diagnoses with less invasive methods, not only saving countless lives and prolonging the remaining quality lifetime for the individual, but also saving a shit ton of money.
That is a different kind of machine learning model, though.
You can’t just plug in your pathology images into their multimodal generative models, and expect it to pop out something usable.
And those image recognition models aren’t something OpenAI is currently working on, iirc.
I’m fully aware that those are different machine learning models, but instead of focusing on LLMs, which have only limited use for mankind, advancing image recognition models would have been much better.
I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models were around before OpenAI.
So if OpenAI had never released ChatGPT, it wouldn’t have become synonymous with crypto in terms of false promises.
Not only that, image analysis and statistical guesses have always been around and do not need ML to work. It’s just one more tool in the toolbox.
Fun thing is, most of the things AI can do, they never planned for it to be able to do. All they tried to build was an autocompletion tool.
Don’t know about image recognition, but they released DALL-E, which is an image-generation and inpainting model.
Wasn’t it proven that an AI was getting amazing results because it noticed the cancer screens had doctors’ signatures at the bottom? Or did they do another run with the signatures hidden?
More than one system has been shown to “cheat” because of biased training materials. One model used to tell ducks and chickens apart because it was trained with pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
Since multiple medical image recognition systems are in development, I can’t imagine they’re all this faulty / trained with unsuitable materials.
They are not ‘faulty’; they have been fed the wrong training data.
This is the most important aspect of any AI - it’s only as good as the training dataset is. If you don’t know the dataset, you know nothing about the AI.
That’s why every claim of ‘super efficient AI’ needs to be investigated deeper. But that goes against the line-goes-up principle. So don’t expect that to happen a lot.
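The duck/chicken story above is what researchers call shortcut learning, and it’s easy to sketch. This is a toy illustration with made-up data: the labels correlate perfectly with the background, so a “classifier” that only looks at the background scores 100% without ever examining the animal.

```python
# Toy shortcut learning: the training labels correlate perfectly with
# the background, so a classifier that ignores the animal entirely
# still scores perfectly on the biased training set.
# (Hypothetical data for illustration; real shortcut learning works
# the same way, just at scale.)

train = [
    {"background": "water", "label": "duck"},
    {"background": "water", "label": "duck"},
    {"background": "sand",  "label": "chicken"},
    {"background": "sand",  "label": "chicken"},
]

def shortcut_classifier(sample):
    # Never looks at the bird, only at the background.
    return "duck" if sample["background"] == "water" else "chicken"

accuracy = sum(shortcut_classifier(s) == s["label"] for s in train) / len(train)
print(accuracy)  # 1.0 on the biased set

# But show it a duck standing on sand and the shortcut falls apart:
print(shortcut_classifier({"background": "sand"}))  # "chicken"
```

That’s why a headline accuracy number tells you nothing unless you also know what was in the dataset.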
Or we get to a time where we send a reprogrammed terminator back in time to kill altman 🤓
Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.
The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can beat or match ChatGPT, but they aren’t taking off coz they aren’t as cool as OpenAI.
The business model of training & running models is not sustainable. If there is any money to be made, it is NOW, while speculation is at its highest. The nonprofit is just getting in the way.
This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.
Take the hat off. This was the goal. Whoops, gotta cash in and leave! I’m sure it’s super great, but I’m gone.
That’s an excellent point! Why oh why would a tech bro start a non-profit? It’s always been PR.
It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.
I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.
Classic pump and dump at this point. He wants to cash in while he can.
If you can’t make money without stealing copyrighted works from authors without proper compensation, you should be shut down as a company
ai is such a dead end. it can’t operate without a constant inflow of human creations, and people are trying to replace human creations with AI. it’s fundamentally unsustainable. I am counting the days until the ai bubble pops and everyone can move on. although AI generated images, video, and audio will still probably be abused for the foreseeable future. (propaganda, porn, etc)
That is a good point, but I think I’d like to make the distinction of saying LLMs, or “generic models”, are a garbage concept, requiring power & water rivaling a small country to produce incorrect results.
Neural networks in general that can (cheaply) learn on their own for a specific task could be huge! But there’s no big money in that, since it’s not a consolidated general-purpose product tech bros can flog to average consumers.
I’m sure they were dead weight. I trust open AI completely and all tech gurus named Sam. Btw, what happened to that Crypto guy? He seemed so nice.
I hope I won’t undermine your entirely justified trust but Altman is also a crypto guy, cf Worldcoin. /$
He is taking a time out with a friend in an involuntary hotel room.
With Puff Daddy? Tech bros do the coolest stuff.
If you want to get really mad, read On The Edge by Nate Silver.
I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model? Weren’t a lot of the advantages (like access to data and scraping) given with the stipulation that it’s for a non-profit? This sounds like it should be illegal to my brain
Careful you’re making too much sense here and overlapping with Elmo’s view on the subject
A stopped clock is still correct twice a day.
And OpenAI is still less correct than a broken clock.
Guess I’m out of the loop. Who’s Elmo?
Musk
angry Sesame Street noises
Elongated Muskrat
Leon.
I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model
Money
USA tho
Can’t do crimes if you’re rich. It’s in the Constitution
Money doesn’t have any advantages in other countries? When did that happen?
I don’t see where I said that.
You can no longer make the same connection you did earlier?
the person that you’re replying to said something that’s true about the USA. they didn’t say anything about other countries.
for another example, i can say “if you’re in the USA, then the current year is 2024” and that statement will be true. it is also true in every other country (for the moment), but that’s besides the point.
And I replied that it’s also true in other countries, it’s not a problem only the US has. It’s not besides the point. It’s acting as if only the US has the problem.
And I specifically mentioned the USA because that’s the country where OpenAI operates and where the events in the article take place, so if someone asks why it’s so easy for OpenAI to go from being a nonprofit to a for-profit company (this was the issue I was responding to, not some general question about whether money has influence around the world), it’s the laws of the USA that are relevant, not the laws of other countries.
Money and purchasing the right people.
Their non-profit status had nothing to do with the legality of their training data acquisition methods. Some of it was still legal and some of it was still illegal (torrenting a bunch of books off a piracy site).
Well maybe not on paper but they did leverage it a lot when questioned
These people claimed their product can pass the bar exam (it was a lie). Tells you how they feel about the legal system
They should be required to change their name
It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.
It’s the famous “As long as you’re not Google, Amazon or Apple” licence.
which seems like a decent license idea to me
Everything should be licensed like that
Needs Microsoft added to the list.
That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.
OpenAI makes money off selling AI to others. AI is the product, not you.
The fact Facebook releases more code, in this instance, isn’t a good thing. It’s a reminder of how fucked we all are, because they make so much off our personal data that they can afford to give away literally BILLIONS of dollars in IP.
Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.
They may also sell the data.
I bet the NSA backdoor isn’t free.
Selling your data would be stupid, because they make money with the fact that they have data about you nobody else has. Selling it would completely break their business model.
Depends why they are selling it, to whom, and under what restrictions.
Yes, they don’t make the majority of their money from selling actual data, but that doesn’t mean they don’t do it.
SkyNet.
Please take no offense in this, I will probably not use your name suggestions, SatansMaggotyCumFart
I’m deeply offended.
I mean killer robots from the future could solve many problems. I can elaborate, but you’re going to have to think 4th dimensionally.
Could solve a lot of problems for the rich, that’s for sure.
Canceled my sub as a means of protest. I used it for research and testing purposes and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.
I hope he gets raped by an irate Roomba with a broomstick.
Whoa, slow down there bruv! Rape jokes aren’t ok - that Roomba can’t consent!
“Private Stabby reporting for duty!”
Good. If people would actually stop buying all the crap assholes are selling we might make some progress.
But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies.
I mean it was already not open-source, right?
AI is the ultimate enshittification of the world.
Maybe the digital world. We could always go back to living in the real world I guess.
oh no i hate that place. i’m scared
Things easily could be better for the vast majority of us in the present day, but let’s not forget how shit we were in the past as well.
ClosedAI. Or maybe MicroAI?
LLMs, maybe. Most AI is useful
I love how ppl who don’t have a clue what AI is or how it works say dumb shit like this all the time.
I also love making sweeping generalizations about a stranger’s knowledge on this forum. The smaller the data sample the better!
The base comment was very broad
There is no AI. It’s all shitty LLMs. But keep sucking those techbros’ cheesy balls. They will never invite you to the table.
Honest question, but aren’t LLMs a form of AI and thus… Maybe not AI as people expect, but still AI?
The issue is that “AI” has become a marketing buzz word instead of anything meaningful. When someone says “AI” these days, what they’re actually referring to is “machine learning”. Like in LLMs for example: what’s actually happening (at a very basic level, and please correct me if I’m wrong, people) is that given one or more words/tokens, it tries to calculate the most probable next word/token based on its model (trained on ridiculously large numbers of bodies of text written by humans). It does this well enough and at a large enough scale that the output is cohesive, comprehensive, and useful.
While the results are undeniably impressive, this is not intelligence in the traditional sense; there is no reasoning or comprehension, and definitely no consciousness, or awareness here. To grossly oversimplify, LLMs are really really good word calculators and can be very useful. But leave it to tech bros to make them sound like the second coming and shove them where they don’t belong just to get more VC money.
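To put the “word calculator” idea in code: here’s a toy sketch of greedy next-token generation. The probability table is completely made up, and real LLMs condition on the whole context with a neural network rather than a lookup table — this only illustrates the “pick the most probable next word” loop.

```python
# Toy next-token predictor: given the previous token, pick the most
# probable next token from a hand-made probability table.
# (Illustrative only -- real models use learned weights over subword
# tokens and condition on the entire context, not just the last word.)

toy_model = {
    "the":  {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "sat":  {"down": 0.7, "<end>": 0.3},
    "down": {"<end>": 1.0},
}

def generate(start, max_tokens=10):
    tokens = [start]
    while len(tokens) < max_tokens:
        probs = toy_model.get(tokens[-1], {"<end>": 1.0})
        # Greedy decoding: always take the most probable next token.
        nxt = max(probs, key=probs.get)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```

There’s no fact-checking step anywhere in that loop — which is why scaling it up produces fluent text, not knowledge.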
Sure, but people seem to buy into that very buzzwordiness and ignore the usefulness of the technology as a whole because “ai bad.”
True. Even I’ve been guilty of that at times. It’s just hard right now to see the positives through the countless downsides and the fact that the biggest application we’re moving towards seems to be taking value from talented people and putting it back into the pockets of companies that were already hoarding wealth and treating their workers like shit.
So usually when people say “AI is the next big thing”, I say “Eh, idk how useful an automated idiot would be” because it’s easier than getting into the weeds of the topic with someone who’s probably not interested haha.
Edit: Exhibit A
There’s some sampling bias at play because you don’t hear about the less flashy examples. I use machine learning for particle physics, but there’s no marketing nor outrage about it.
No, they are auto complete functions of varying effectiveness. There is no “intelligence”.
Almost as if it’s artificial.
Ah, Mr. Dunning-Kruger, it’s nice to meet you.
There you go, talking into the mirror once more.
Altman downplayed the major shakeup.
"Leadership changes are a natural part of companies."
Is he just trying to tell us he is next?
/s
Sam: “Most of our execs have left. So I guess I’ll take the major decisions instead. And since I’m so humble, I’ll only be taking 80% of their salary. Yeah, no need to thank me”
unironically, he ought to be next, and he better know it, and he better go quietly
We need a scapegoat in place when the AI bubble pops, the guy is applying for the job and is a perfect fit.
He is happy to be the scapegoat as long as he exits with a ton of money.
The CEO at my company said that 3 years ago; we are going through execs like I go through amlodipine.
Just making structural changes sound like “changing the leader”.
They always are and they know it.
Doesn’t matter at that level it’s all part of the game.
The restructuring could turn the already for-profit company into a more traditional startup and give CEO Sam Altman even more control — including likely equity worth billions of dollars.
I can see why he would want that, yes. We’re supposed to ooo and ahh at a technical visionary, who is always ultimately a money guy executive who wants more money and more executive power.
I saw an interesting video about this. It’s outdated (from ten months ago, apparently) but added some context that I, at least, was missing - and that also largely aligns with what you said. Also, though it’s not super evident in this video, I think the presenter is fairly funny.
That was a worthwhile watch, thank you for making my life better.
I await the coming AI apocalypse with hope that I am not awake, aware, or sensate when they do whatever it is they’ll do to use or get rid of me.
You will be kept alive at subsistence level to buy the stuff you’ve been told to buy, don’t worry.
Yeah but what about the future?
My pleasure! Glad it helped. Also, I like your username.
I’m still not sure how much to fear AI, as I’m not knowledgeable on the subject (never even intentionally interacted with one yet) and have seen conflicting reports on how worryingly capable it is. Today I did see this video, which isn’t explicitly about AI but did offer an interesting perspective that could be compared to the paradigm: https://youtu.be/fVN_5xsMDdg
(Warning, the video was interesting, but I got invested about halfway through when I started comparing it to AI, then was disappointed in the ending)
deleted by creator
They had an opportunity to deal with this earlier this year when he was FIRED
The actual employees threatened to resign en masse, because the employees own equity in the company and want this dogshit move too.
Greed is the fundamental flaw that makes humanity awful.
Why would they own equity in a non-profit?
Because this was always the plan.
In theory there would be no profits to distribute, but there would be control of direction via voting rights.
deleted by creator
paid for entirely by venture capital seed funding.
And stealing from other people’s works. Don’t forget that part
Nothing got stolen…this lie gets old.
When individual copyright violations are considered “theft” by the law (and the RIAA and the MPAA), violating the copyrights of billions of private people to generate profit is absolutely stealing. The former, arguably, is often a measure of self-defense against extortion by copyright-holding for-profit enterprises.
They used copyrighted works without permission
Right, it’s only stolen when regular people use copyright material without permission
But when OpenAI downloads a car, it’s all cool baby
Barely usable results?! Whatever you may think of the pricing (which is obviously below cost), there are an enormous number of fields where language models provide an insane amount of business value. Whether that translates into a better life for the everyday person is currently unknown.
barely usable results
Using chatgpt and copilot has been a huge productivity boost for me, so your comment surprised me. Perhaps its usefulness varies across fields. May I ask what kind of tasks you have tried chatgpt for, where it’s been unhelpful?
Literally anything that requires knowing facts to inform writing. This is something LLMs are incapable of doing right now.
Just look up how many R’s are in strawberry and see how chat gpt gets it wrong.
Okay what the hell is wrong with it
It took me three times to convince it that there’s 3 r’s in strawberry…
Because that’s not how LLMs work.
When you form a sentence you start with an intent.
LLMs start with the meaning you gave them, and try to express something similar back to you.
Notice how intent, and meaning aren’t the same. Fact checking has nothing to do with what a word means. So how can it understand what is true?
All it did was take the meaning of looking for a number and strawberries and ran its best guess from that.
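The strawberry example above is easy to see in code. Counting letters is trivial when you actually see the characters; an LLM doesn’t, because the tokenizer splits words into subword pieces first (the split shown here is illustrative, not any particular tokenizer’s real output):

```python
# Counting letters is a trivial string operation when you can see them:
word = "strawberry"
print(word.count("r"))  # 3

# But an LLM never sees characters. A tokenizer might split the word
# into subword pieces something like this (illustrative split only):
pieces = ["str", "aw", "berry"]

# The model works with opaque integer IDs for these pieces, so
# "how many r's are in strawberry" isn't a lookup it can perform --
# it has to guess from patterns in its training text.
```

So the model isn’t “bad at counting” so much as it literally never receives the letters to count.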
May I ask what kind of tasks…
No, you may not.
Oh my god get better takes before I stick a pickaxe in my eye
do it genius
Did the AI suggest you do that? Better ask it!
Yes, it says aim for the brain stem, but like most things it says, I already knew that. Finally, some quiet from hearing the same thing over and over and over and over.
Have a good trip back to .ml land
You think I remember my sign up server or that it matters in any way at all ?
I shouldn’t laugh at brain damage, but this is hilarious.
I suggest you touch grass if you think anyone remembers some social media server web address that the phone remembers for them.
But also, if you want to discriminate based on what server a user signed up on, then it’s already too late for you.
Most stable .ml user
but like most things it says, I already knew that
So how long have you been putting glue on your pizza?
They’re from Lemmy.ml, they just drink it straight from the bottle
That’s Google, and it’s also called being able to tell reality apart from fiction, which it’s becoming clear most anti-AI zealots have never been capable of.
You seem to have forgotten your previous post:
Yes it says aim for the brain stem but like most things it says, I already knew that.
So either you already knew to put glue on pizza or you knew that the AI isn’t trustworthy in the first place. You can’t have it both ways.
Please do. Stream it too so we all can enjoy.
That’s not the incentive you think it is.
Make sure you go deep. Need to get the whole thing in to really show you’re serious.
Get to it then. 🤷♂️
I really don’t understand why they’re simultaneously arguing that they need access to copyrighted works in order to train their AI while also dropping their non-profit status. If they were at least ostensibly a non-profit, they could pretend that their work was for the betterment of humanity or whatever, but now they’re basically saying, “exempt us from this law so we can maximize our earnings.” …and, honestly, our corrupt legislators wouldn’t have a problem with that were it not for the fact that bigger corporations with more lobbying power will fight against it.
They realized that they can get away with stealing data. No reason to keep up the facade anymore
Greed.
There is no law that covers training.
You guys are the ones demanding a law that doesn’t exist.
And there it goes the tech company way, i.e. to shit.
They speed ran becoming an evil corporation.
I always steered clear of OpenAI when I found out how weird and culty the company beliefs were. Looked like bad news.
I mostly watch to see what features open source models will have in a few months.
Ah, but one asshole gets very rich in the process, so all is well in the world.
Perfectly balanced, as all things should be.
Trust me, I’m a tech bro.
At least TSMC realizes that
https://www.digitaltrends.com/computing/tsmc-rejects-podcasting-bro-sam-altman-openai/
TSMC’s leadership dismissed Altman as a “podcasting bro” and scoffed at his proposed $7 trillion plan to build 36 new chip manufacturing plants and AI data centers.
This is how we get Terminators in this timeline?!