their first mistake was using united airlines. or airlines period. fuck flying
edit: LOL i actually love how people take personal offense every time i say “fuck flying”
so, just so we’re all clear: fuck flying
this is always the question. and the answer is if i can’t get there in time by not flying, then so sorry, i won’t be able to attend. i don’t fly, because fuck literally everything involved with flying. which apparently now also includes bogus customer service phone numbers
Drag is making sure to eat one rock per day and put glue in pizza just like the Google AI says!
I’m sorry, but this has been a thing since long before “AI”-based results. Scammers have always used tricks to end up at the top of search results.
Scammers have been a thing long before writing. That doesn’t mean people shouldn’t be made aware of new ways to be scammed.
One could argue there is a new aspect. When it comes to retraining the public on what to trust, there’s a likely blind spot: a person may know to only call a number listed on a trusted website, so they’ll check that they’re on the bank’s domain before picking up the phone. If Google, being a big name, presents the number in an official-looking way at the top of its pages, it may pass the sniff test. And get people into trouble.
Featured snippets would prominently display the source URL alongside the answer.
But AI summaries? More opaque about where the information actually came from.
At the top, but that isn’t what this post is saying. This is saying that Google’s AI gave the scammer’s answer. Not that they provided a link you could click on, but that Google itself said this is the number.
It’s not an AI, it’s just word prediction, which also just follows stupid algorithms, like the ones that determine search results. Both can be tricked / manipulated if you understand how they work. It’s the same principle in both cases.
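To make the “word prediction” point concrete, here’s a minimal sketch of greedy next-token prediction. It assumes the Hugging Face transformers library and the public gpt2 checkpoint, which are just stand-ins for illustration, not whatever model Google actually runs:

```python
# Toy illustration of "word prediction": score every possible next token,
# keep the likeliest one, repeat. (Assumes the Hugging Face `transformers`
# library and the public `gpt2` checkpoint; stand-ins only.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The airline's customer service number is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(12):
        logits = model(input_ids).logits   # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedily keep the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# Whatever prints is just the statistically likeliest continuation given the
# training data and the context. Nothing here checks whether a number is real,
# which is why seeding the web with a bogus number can surface that number.
```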
Regardless of what they call it, they’re the ones presenting it. I’m not arguing they can’t be tricked. I’m arguing they are fundamentally different concepts. One is offering you a choice of sources, the other is making a claim. That’s a pretty big distinction in a whole mess of different ways. Not the least of which is legal.
This is why “AI” should be avoided at all cost. It’s all bullshit. Any tool that “hallucinates” - i.e., is error-strewn - is not fit for purpose. Gaming the AI is just the latest example of the crap being spewed by these systems.
The underlying technology has its uses, but they’re niche and focused applications, nowhere near as capable or as ready as the hype suggests.
We don’t use Wikipedia as a primary source because it has to be fact-checked. AI isn’t anywhere near as accurate as Wikipedia, so why use it?
Gotta tell you, you made a fairly extreme pronouncement against a very general term / idea with this:
“AI” should be avoided at all cost
Do you realize how ridiculous this sounds? It sounds, to me, like this - “Vague idea I poorly understand (‘AI’) should be ‘avoided’ (???) with disregard for any negative consequences, without considering them at all”
Cool take you’ve got?
Edit to add: whoops! Just realized the community I’m in. Carry on, didn’t mean to come to the precise wrong place to make this argument lol.
Listen, I know that the term “AI” has been, historically, used to describe so many things to the point of having no meaning, but I think, given the context, it is pretty obvious what AI they are referring to.
Well, fair enough, folks seem to agree with you and that commenter. I’m not being deliberately uncharitable; “avoid AI at all costs” seems both poorly defined and hyperbolic to me, even given the context. Scams and inaccuracy are a problem in lots of situations, Google search results have been getting increasingly bad to the point of being unusable for a while now (I’d argue since long before LLM saturation), and I’ve personally been getting more mileage out of some LLMs, even at this early stage, than out of wading through every crappy search result.
I wouldn’t call myself an enthusiast or on the hype train; I just work in the industry. But it’s clearly useful, while clearly having many tradeoffs (energy use is maybe a much worse one than inaccuracy / scam potential), and “avoid at all cost” is silly to me. But cheers, happy to simply disagree!
The underlying technology has its uses
Yes indeed agreed.
Sometimes BS is exactly what I need! Like, hallucinated brainstorm suggestions can work for some workflows and be safe when one is careful to discard or correct them. Copying a comment I made a week ago:
I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.
Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.
In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.
https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/
A good read about AI summarization.
That is literally the worst use case for AIs. There’s no way they should be letting it provide contact info like that.
Also, they’re stupid for dialing a random number.