Wait, are you advocating that people blindly trust unreliable sources and then get angry at the source when it turns out to be unreliable, rather than learn from shit like this and avoid becoming a victim?
Google has spent a fortune to convince people they are a reliable source. This is clearly on Google, not the people who aren’t tech-savvy.
OK, I agree that Google isn’t the good guy in this situation, but that doesn’t make the advice not to blindly trust what Google says invalid. Nor does it absolve Google of its accidental or deliberate inaccuracies.
It was just an “In case you didn’t know, don’t just trust Google, even though they’ve worked so hard at building a reputation for trustworthiness and even seemed pretty trustworthy in the past. Get the phone number from the company’s website.”
And then I’ll add: regardless of where you got the phone number, be skeptical if someone asks for your banking information or other personal information that isn’t usually involved in such a service. Not because you’ll be the bad guy if you do get scammed, but because a scam is at least a pain in the ass to deal with, if not financially devastating if you can’t get it reversed.
“are you advocating people blindly trust unreliable sources”
Where did I say this? I didn’t say this. You said I said this.
I don’t see any blaming of anyone in the original comment you replied to, just general advice to avoid falling for a scam like this. There isn’t even a victim in this case, because the request for banking info tipped them off, if I’m understanding the OP correctly.
So I’m confused about what specifically you’re objecting to in the original comment, and whether it’s the general idea that you shouldn’t blindly trust results from Google’s LLM, which isn’t known for its reliability.
Remember when 4chan convinced people to microwave their phones by making them believe it would charge them?
If calling those people stupid is victim blaming, then so be it. I’m blaming the victim.
This case isn’t as clear-cut as that, but even before the AI mania, the instant answer at the top of Google results was frequently incorrect. Being able to discern BS from real results has always been necessary, and AI doesn’t change that.
I’ve been using Kagi this year, and it keeps LLM results out of the way unless I want them. When you open their AI assistant, it says:
“Assistant can make mistakes. Think for yourself when using it.”
I think that sums it up nicely.