it’s seo games all over again
and the former kings of tuning the algorithms against seo so that useful info bubbled up have thrown it all out in the name of impressing everyone with how space age and sci fi their tech is. it’s not about advancing science or even pushing a useful product. it’s strictly a tool for scams. is it a surprise that scammers are gaming the google scam better than anyone else? not really. they’ve always had a step up compared to the average internet denizen thanks to practice. this is why i get so frustrated when people dismiss ai skepticism as being a product of luddites.
- you’re getting scammed to think ai will benefit you
- systems built by scammers will always benefit scammers
- the luddites were right. scientific advancements should benefit the workers, not the rich
I mean, this isn’t specifically an AI issue, this is scammers updating the info in Google business listings because the airlines don’t actually care to maintain those pages (and Google doesn’t want actual humans doing any work to make sure their shit is accurate). This has been going on since before AI; AI is just following the garbage in, garbage out model that everyone said was going to be the result of this push.
Your historical information is accurate, but I disagree with your framing. This particular scam is so powerful because the information is organized, parsed, and delivered in a fashion that makes it look professional and makes it look believable.
Google and the other AI companies have put themselves in a bind. They know that their system is encouraging this type of scam, but they don’t dare put giant disclaimers at the top of every AI generated paragraph, because they’re trying to pretend that their s*** is good, except when it’s not, and then it’s not their fault. In other words, it’s basic dishonesty.
A-I-S-E-O
🎶 Old McDonald had a server farm… 🎶
And Gem’ni was his nam-i…g-e-m-n-i
(I know it is Gemini, but we need 2 syllables and 5 letters to fit the song parody, so I made a contraction)
Artistic license: validated
This is why “AI” should be avoided at all cost. It’s all bullshit. Any tool that “hallucinates” (i.e. is error-strewn) is not fit for purpose. Gaming the AI is just the latest example of the crap being spewed by these systems.
The underlying technology has its uses, but they’re niche and focused applications, nowhere near as capable or as ready as the hype suggests.
We don’t use Wikipedia as a primary source because it has to be fact checked. AI isn’t anywhere near as accurate as Wikipedia, so why use it?
The underlying technology has its uses
Yes indeed agreed.
Sometimes BS is exactly what I need! Like, hallucinated brainstorm suggestions can work for some workflows and be safe when one is careful to discard or correct them. Copying a comment I made a week ago:
I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.
Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.
In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.
https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/
A good read about ai summarizing.
Super interesting, I have to get into all those links. Thank you!
Because some are lazy fucks
Gotta tell you, you made a fairly extreme pronouncement against a very general term / idea with this:
“AI” should be avoided at all cost
Do you realize how ridiculous this sounds? It sounds, to me, like this - “Vague idea I poorly understand (‘AI’) should be ‘avoided’ (???) with disregard for any negative consequences, without considering them at all”
Cool take you’ve got?
Edit to add: whoops! Just realized the community I’m in. Carry on, didn’t mean to come to the precise wrong place to make this argument lol.
Listen, I know that the term “AI” has been, historically, used to describe so many things to the point of having no meaning, but I think, given the context, it is pretty obvious what AI they are referring to.
Well, fair enough, folks seem to agree with you and that commenter. I’m not being deliberately uncharitable, “avoid AI at all costs” seems both poorly defined and hyperbolic to me, even given the context. Scams and inaccuracy are a problem in lots of situations, Google search results have been getting increasingly bad to the point of unusable for a while now (I’d argue long before LLM saturation), and I’ve personally been getting mileage with some LLMs, already at kind of an early stage, over wading through every crappy search result.
I wouldn’t call myself an enthusiast or on the hype train, I work in the industry. But it’s clearly useful, while clearly having many tradeoffs (energy use maybe much worse than inaccuracy / scam potential), and “avoid at all cost” is silly to me. But cheers, happy to simply disagree!
Wait until you hear about the AI’s programming abilities!
It “knows” that a Python program starts with some lines like: from (meaningless package name) import *
If you can register the package name it invents, your code could be running on some of the world’s biggest companies’ internal servers!
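This attack (sometimes called “slopsquatting”) has a cheap partial defense: before running generated code, check that every imported name actually resolves in your environment. A minimal sketch using only the standard library (the snippet and the package name below are made up for illustration):

```python
import ast
import importlib.util

def unresolved_imports(source: str) -> list[str]:
    """Return top-level module names imported by `source` that can't be
    resolved in the current environment -- a cheap guard against
    hallucinated (and possibly squatted) package names."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    # find_spec returns None for names that don't exist locally
    return sorted(n for n in names if importlib.util.find_spec(n) is None)

snippet = "import json\nfrom totally_real_airline_sdk import *\n"
print(unresolved_imports(snippet))  # -> ['totally_real_airline_sdk']
```

Note this only flags names missing from the local environment; it can’t tell a legitimate not-yet-installed dependency from a squatted hallucination, so flagged names still need a human look.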
AI was launched with the promise of taking away the boring parts and letting us focus on the fun stuff.
In reality it takes away the fun stuff and gives us more boring things to do.
Being scammed isn’t boring. It is blood boiling and (wrongly) shame-filled.
But yeah, you are right. The boring and the bad.
Oh I agree with you there, I was talking about AI as a concept and how it was sold to us.
I understood and I agreed with you. I just wanted to point out that it is actually even worse…
But you are right
and that’s why I always go straight to the company website to find that info instead of googling it
I google for the company website (e.g. Wikipedia) and then I google with the site in mind for info. Works well
AI results are always so bad. I don’t like that there are AI medical results. That needs more pushback.
Ironically, that is possibly one of the few legit uses.
Doctors can’t learn about every obscure condition and illness. This means they can miss the symptoms of them for a long time. An AI that can check for potential matches to the symptoms involved could be extremely useful.
The proviso is that it is NOT a replacement for a doctor. It’s a supplement that they can be trained to make efficient use of.
Couldn’t that just as easily be solved with a database of illnesses which can be filtered by symptoms?
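For what it’s worth, the plain database version is trivial to sketch, which makes the reply below’s point sharper: the hard part isn’t the lookup, it’s getting correctly entered, structured symptoms in the first place. A toy illustration (all conditions and symptoms here are invented):

```python
# Toy symptom lookup: return conditions whose known symptom set
# contains every symptom the clinician entered. All data invented.
CONDITIONS = {
    "bongo dancing virus": {"fever", "tremor", "rash"},
    "common cold": {"cough", "sore throat", "runny nose"},
    "flu": {"fever", "cough", "fatigue"},
}

def matches(entered: set[str]) -> list[str]:
    """Conditions consistent with every entered symptom (exact match only)."""
    return sorted(name for name, known in CONDITIONS.items()
                  if entered <= known)

print(matches({"fever", "cough"}))  # -> ['flu']
```

Notice the brittleness: a typo or a synonym (“pyrexia” vs “fever”) returns nothing, which is exactly why exact-match filtering demands extra data-entry effort from clinicians.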
That requires the symptoms to be entered correctly, and significant effort from (already overworked) doctors. A fuzzy logic system that can process standard medical notes, as well as medical research papers would be far more useful.
Basically, a quick click, and the paperwork is scanned. If it’s a match for the “bongo dancing virus” or something else obscure, it can flag it up. The doctor can now invest some effort into looking up “bongo dancing virus” to see if it’s a viable match.
It could also do its own pattern matching. E.g. if a particular set of symptoms is often followed 18-24 hours later by a sudden cardiac arrest. The flagged pattern could be completely spurious. However, it could key doctors in on something more serious happening, before it gets critical.
An 80% false positive is still quite useful, so long as the 20% helps and the rest is easy for a human to filter.
The key is considering who is going to be using these systems. Certainly Google search AI is never going to be useful in this way because the kind of info a patient needs is very different to what a doctor would find useful.
And if we do make systems for doctors, then it’s pretty damn important that we consider things like you have, taking into account that doctors are already overwhelmed and spending way too much effort juggling medical notes. I read a thing a while back which highlighted how many doctors are struggling with information management and processing all the info they need to because of how IT systems have tended to be enforced on them from the top down, with some doctors even saying paper notes were far easier to deal with (especially for complex cases). Digitisation definitely has huge benefits, but it seems like the needs of doctors have been largely ignored.
Even besides doctors, I feel like the field of Human-Computer Interaction (HCI) has been way too focussed on ways of wringing out more money from people, with not enough focus put on how we can make technology that empowers people. It’s no wonder why: If I were a HCI researcher, I know what kind of project would be more likely to get research funding, and it’s the ruthlessly capitalistic ones.
“An 80% false positive is still quite useful, so long as the 20% helps and the rest is easy for a human to filter.”
This gets at a key point, in my opinion — even when one ignores the straightforwardly scammy “AI” nonsense, a lot that remain are still overly focussed on building systems that do stuff for people (usually in a way that would eliminate or reduce people in the process. Many examples of this exist, but one is “AI teachers” which still requires a human in the room, but only as a “learning facilitator” or some nonsense). I work in a field where machine learning has been a prominent thing for years, so I’m in a weird place of being sick of hearing about AI, and also impressed by what we do have. Mainly though, I’m exasperated because we could be doing so much more with the tech we have if we made tools that were intended to be used by humans.
Humans are dumb and emotional and silly, but we are also pretty cool and we can make awesome things when given the opportunity to. I will always be cynical about tech that seems over keen to cut humans out of things
In either case, a real doctor would be reviewing the results. Nobody is going to authorize surgeries or prescription meds from AI alone.
Yet again tech companies are here to ruin the day. LLM are such a neat little language processing tool. It’s amazing for reverse looking up definitions (where you know the concept but can’t remember some dumb name) or when looking for starting points or want to process your ideas and get additional things to look at, but most definitely not a finished product of any kind. Fuck tech companies for selling it as a search engine replacement!
It is great at search. See this awesome example I hit just today from Google’s AI overview:
Housing prices in the United States dropped significantly between 2007 and 2020 due to the housing bubble and the Great Recession:
2007: The median sales price for a home in the first quarter of 2007 was $257,400. The average price of a new home in September 2007 was $240,300.
2020: The average sales price for a new home in 2020 was $391,900.
See, without AI I would have thought housing prices went up between 2007 and 2020, and that $391,900 was a bigger number than $257,400.
That’s why you always get it from their website. Never trust a LLM to do a search engine’s job.
Respectfully, this is victim blaming. Criticize Google, not end users.
Wait, are you advocating people blindly trust unreliable sources and then get angry at the unreliable source when it turns out to be unreliable rather than learn from shit like this to avoid becoming a victim?
Google has spent a fortune to convince people they are a reliable source. This is clearly on google, not the people who aren’t tech savvy.
Ok, I agree that Google isn’t a good guy in this situation, but that doesn’t mean advice to not just trust what Google says is invalid. It also doesn’t absolve Google of their accidental or deliberate inaccuracies.
It was just a “In case you didn’t know, don’t just trust Google even though they’ve worked so hard at building a reputation of being trustworthy and even seemed pretty trustworthy in the past. Get a phone number from the company’s website.”
And then I’ll add on: regardless of where you got the phone number from, be skeptical if someone asks you for your banking information or other personal information that isn’t usually involved in such a service. Not because you’ll be the bad guy if you do get scammed, but to avoid going through it because it’s at least going to be a pain in the ass to deal with, if not a financially horrible situation to go through if you are unable to get it reversed.
are you advocating people blindly trust unreliable sources
Where did I say this? I didn’t say this. You said I said this.
I don’t see any blaming of anyone in the original comment you replied to but just general advice to avoid falling for a scam like this. There isn’t even a victim in this case because the asking for banking info tipped them off if I’m understanding the OP correctly.
So I’m confused about what specifically you are objecting to in the original comment and if it is the general idea that you shouldn’t blindly trust results given by Google’s LLM, which isn’t known for its reliability.
For me it’s the idea of focusing at all on telling people not to trust LLMs as opposed to criticizing companies for putting them prominently on the top of the page.
Why not both? Plus, not just trusting LLMs is something any of us can decide to do on our own.
Because the average person doesn’t even know what an LLM is or what it even stands for and putting a misinformation generator at the top of search pages is irresponsible.
Like, if something is so unreliable with information that you have to say “don’t trust what this thing says” but you still put it at the top of the page? Come on… It’s like putting a self destruct button in a car and telling people “well the label says not to push it!”
Remember when 4chan got people to microwave their phones because they got them to believe it would charge it?
If calling those people stupid is victim blaming then so be it. I’m blaming the victim.
This case isn’t as clear as that but even before the AI mania the instant answer at the top of Google results was frequently incorrect. Being able to discern BS from real results has always been necessary and AI doesn’t change that.
I’ve been using Kagi this year and it keeps LLM results out of the way unless I want them. When you open their AI assistant it says
Assistant can make mistakes. Think for yourself when using it.
I think that sums it up nicely.
Would that make Google liable? I mean, that wouldn’t be a case of users posting information; that would be a case of Google itself posting information, wouldn’t it? So it seems to me they’d be legally liable at that point.
Ah, but Google is a giant company and, as U.S. law stands, doesn’t have to face consequences for anything
In a sane world? Yes.
I think there’s a disclaimer with all AI summaries. Although, I just tried googling United airlines and there is no longer an AI summary, only a message directly from their website.
Yes, so, no.
Honestly I wanted to write a smug comment about “But but it even says AI can sometimes make mistakes!”, but after clicking through multiple links and disclaimers I can’t find Google actually admitting that
That is literally the worst use case for AIs. There’s no way they should be letting it provide contact info like that.
Also they’re stupid for dialing a random number.
They gave up working search with algorithms that are easier to reason about and correct for with a messy neural network that is broken in so many ways and basically impossible to generally correct while retaining its core characteristics. A change with this many regressions should’ve never been pushed to production.
Same happened to my wife. She gave them enough info that they threatened to call and cancel her flight unless she paid them. They never did cancel it.
Google has been sponsoring scammers as first search results since its creation. Google has caused hundreds of millions of dollars in losses to people, and needs to be sued for it.
I feel like Jason ought to have considered it. Spammers have been using this kind of tactic for decades. Of course they’re going to change to whatever medium is popular.