I mean. This whole “article” is an opinion piece. Some of the opinions I even agreed with but there’s a lot of “I think” involved in a lot of the paragraphs written here.
That’s the case for all predictions about the future of AI. They are all just opinions, some more informed than others. The author does clearly cite some of the things informing their opinion in this case, so I’m not sure what your problem is there. I’m also not sure why you put article in quotation marks; this is clearly an article. Your comment seems like a lazy attempt to discredit the piece and shut down discussion without bothering to spell out any real problems you have with it.
They probably just dislike Vox.
No, I’m not a fan of Vox. But on the other hand, this particular article might as well have been a random blog post. I don’t necessarily disagree that people calling generative AI a bust are jumping the gun, and I even kind of agree that using these generative models in applications that solve real problems is going to take time. But I don’t agree that just because a bunch of users have fixated on the new shiny thing, it will have staying power or achieve a level of usefulness that translates to long-term profitability.
But mostly I take exception to an article positing itself as factual while starting every paragraph with “I think”.
But mostly I take exception to an article positing itself as factual
Where did it do that? The author writes in first person throughout, it is clearly an opinion piece.
Because this article is posited (with its title and the little blurb at the top about the author) to be about the safety of AI. The author doesn’t talk about what safety regulations exist. They don’t talk about what safety apparatuses are being proposed or which ones have already been developed. There’s no conclusion here.
When you read a newspaper, generally there is a section for opinion pieces and editorials. There are several groups trying to push for clear and concise labeling of editorial, opinion pieces, and news pieces specifically because there’s so much misinformation going around.
But really. What is the point of posting an opinion piece to a community where we share tech news, when it’s not even valuable in its opinions? What is there to discuss here? That shareholders and consumers should view AI safety legislation or safety protocols differently because they affect those two parties differently? We already knew that.
Because this article is posited (with its title and the little blurb at the top about the author) to be about the safety of AI.
Unless the title and blurb have changed, this is just wrong.
The title says nothing about safety: “How AI’s booms and busts are a distraction - However current companies do financially, the big AI safety challenges remain.”
Likewise the blurb says nothing about safety: “Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.”
What are you going on about? You’re mad because you couldn’t tell this was an Op/Ed?
(Sidenote: I didn’t notice that “effective altruism” thing before. Barf.)
The blurb suggests that this person writes specifically altruistic articles, implying the piece is for someone’s benefit and, by proxy, that it’s telling the truth. Because opinions are subjective, that conflicts pretty harshly with the context of the piece. It gives the impression that it may in some way be an opinion grounded in fact, when it simply isn’t, because it cites no quantifiable factual data whatsoever. This is literally how misinformation is spread. It doesn’t have to be outright lies in order to be damaging.
The article talks about how new safety measures could be developed. It’s in the text. It just doesn’t conclude anything or talk about any specifics. That’s really my problem with it. What good is the opinion of the author? What are they basing this opinion on? There’s no substance to this writing at all.
lol what?
There’s no way to write an article with that title and not have it be an opinion.
When someone starts every paragraph with “I think”, they’re not positing themselves as factual.
And? That’s not really helpful. It adds nothing to the conversation about generative AI. It is a list of opinions, and they’re based on seemingly nothing. You’re arguing with me about whether or not this is an opinion piece, which it obviously is, because it doesn’t validate itself in any way. There’s literally nothing to discuss here.
deleted by creator
How is it delayed and slow? We recently got an update to Llama 3, a new Mistral model, and Flux for generating images. All of them are a big step forward, and I regularly see big advancements being made. I can see with my own eyes how they’re getting more capable by the day…
So yeah, the bubble is probably going to burst, because companies have pumped in billions of dollars and inflated it to no end. That’s not sustainable. And I can see how your experience might be different if you limit yourself to using ChatGPT, because people have been expecting a big surprise from version 5 for some time now. But that’s just hype. And has nothing to do with the technology. Or competing products.
Yeah I keep seeing stories about how doomed AI companies are. Apparently if they don’t create AGI then it was all for nothing and it’s trash.
I think the main issue isn’t directly that, but that they paid $600 billion for things like Nvidia hardware, electricity, experts, … And now they need to generate an absurd pile of money. That’s why they might be doomed: their product now has to fulfill ludicrous expectations, or the bubble is going to burst.
The thing is, generative AI is not really a new thing. And the question is not whether the technology will be transformative, but whether investors can be patient enough to see that.
When it comes to AGI, generative AI is probably part of it, but I would guess we need a breakthrough or two from other areas as well, which could happen in the next 5 years or take a decade or two.
Removed by mod