When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

14 points

The problem is not the AI. The problem is the huge numbers of morons who deploy AI without proper verification and control.

3 points

Sure, and also people using it without knowing that it’s glorified text completion. It finds patterns, and that’s mostly it. If your task involves pattern recognition then it’s a great tool. If it requires novel thought, intelligence, or the synthesis of information, then you probably need something else.
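A tiny Python sketch of what “glorified text completion” means in practice (the corpus is invented for illustration and nothing like the scale of a real model): it only continues text by reusing patterns it has seen, never by checking a fact.

```python
import random
from collections import defaultdict

# Toy "text completion": record which word follows which in a tiny corpus,
# then generate by repeatedly picking one of the observed next words.
corpus = "the court heard the case and the reporter covered the case".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def complete(word, length=6):
    out = [word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # picks a pattern, not a fact
        out.append(word)
    return " ".join(out)

print(complete("the"))  # fluent-looking output, but nothing here "knows" anything
```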

1 point

Yeah, just like the thousands or millions of failed IT projects. AI is just a new weapon you can use to shoot yourself in the foot.

17 points

It’s a fucking Chinese Room. Real AI is not possible. We don’t know what makes humans think, so of course we can’t make machines do it.

-11 points

You forgot the ever important asterisk of “yet”.

Artificial General Intelligence (“Real AI”) is all but guaranteed to be possible. Because that’s what humans are. Get a deep enough understanding of humans, and you will be able to replicate what makes us think.

Barring that, there are other avenues for AGI. LLMs aren’t one of them, to be clear.

3 points

We’re not making any progress until we accept that Penrose was right

13 points

I actually don’t think a fully artificial human-like mind will ever be built outside of novelty, purely because we ventured down the path of binary computing.

Great for mass calculation but horrible for the kinds of complex pattern recognition that the human mind excels at.

The singularity point isn’t going to be the Matrix or Skynet or AM; it’s going to be the first quantum device successfully implanted and integrated into a human mind as a high-speed calculation sidegrade, a “Third Hemisphere.”

Someone capable of seamlessly balancing between human pattern recognition abilities and emotional intelligence, while also capable of performing near-instant multiplication of matrices with 100 entries per dimension across 15 dimensions.

0 points

When we finally stop pretending Orch-OR is pseudoscience we’ll figure it out

1 point

“is all but guaranteed to be possible”

It’s more correct to say it “is not provably impossible.”

1 point

The human brain works. Even if we are talking about wetware 1k years in our future, that would still mean it is possible.

10 points

I don’t think the Chinese room is a good analogy for this. The Chinese room has a conscious person at the center. A better analogy might be a book with a phrase-to-number conversion table, a couple of number-to-number conversion tables, and finally a number-to-word conversion table. That would probably capture a transformer’s rigid and unthinking associations better.
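A very rough sketch of that book-of-tables analogy (all numbers invented, and vastly simpler than a real transformer): words become vectors of numbers, fixed matrices convert numbers into other numbers, and the final numbers are converted back into a word. No step involves understanding.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) word-to-number table: every word maps to a vector of numbers
vocab = ["the", "reporter", "covered", "trial", "crime"]
word_to_vec = {w: rng.normal(size=4) for w in vocab}

# 2) number-to-number tables: fixed matrices that turn numbers into other numbers
table1 = rng.normal(size=(4, 4))
table2 = rng.normal(size=(4, 4))

# 3) number-to-word table: score every word and pick the most associated one
def next_word(prompt_words):
    x = sum(word_to_vec[w] for w in prompt_words)   # crude "context" vector
    x = np.tanh(table1 @ x)                         # lookup/transform #1
    x = np.tanh(table2 @ x)                         # lookup/transform #2
    scores = {w: float(word_to_vec[w] @ x) for w in vocab}
    return max(scores, key=scores.get)              # strongest association, true or not

print(next_word(["the", "reporter"]))
```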

76 points

It’s frustrating that the article treats the problem as if the mistake were including Martin’s name in the data set, and muses that that part isn’t fixable.

Martin’s name is a natural feature of the data set, but when they should be talking about fixing the AI model to stop hallucinations, or about letting humans correct them, it seems the only fix is to censor the incorrect AI response, which implies it was saying something true but merely salacious.

Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.

14 points

Just shows that these “AIs” are completely useless at what they are trained for.

29 points

They’re trained for generating text, not factual accuracy. And they’re very good at it.

7 points

“reasoning chain”

Do LLMs actually have a reasoning chain that would be comprehensible to users?

2 points

https://learnprompting.org/docs/intermediate/chain_of_thought

It’s suspected to be one of the reasons why Claude and OpenAI’s new o1 model are so good at reasoning compared to other LLMs.

It can sometimes notice hallucinations and adjust itself, but there have also been cases where the CoT reasoning itself introduces hallucinations and makes the model throw away correct answers. So it’s not perfect. Overall a big improvement though.
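For a feel of what CoT prompting amounts to, here is a minimal sketch; `ask()` is a hypothetical stand-in for whichever LLM API is being used, and the canned return value is just for illustration:

```python
# Hypothetical example prompts; ask() stands in for a real LLM API call.
question = "A reporter covered 40 trials in 2023 and 25 in 2024. How many in total?"

direct_prompt = question  # only a final answer comes back, nothing to inspect

cot_prompt = (            # intermediate steps come back with the answer,
    question              # so a hallucinated step is at least visible
    + "\nThink step by step, then state the final answer on its own line."
)

def ask(prompt: str) -> str:
    """Placeholder: swap in a call to your actual LLM provider here (assumption)."""
    return "40 + 25 = 65.\nFinal answer: 65"

print(ask(cot_prompt))
```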

45 points

Or just stop using buggy AIs for everything.

24 points

The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact, we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.
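To put “probabilistic model produces erroneous output” in concrete terms (the distribution below is invented for illustration): at each step the model holds a probability distribution over next words, and sampling from it will sometimes yield a fluent but false continuation.

```python
import random

# Invented distribution over next words for the prompt "The courts reporter was ..."
next_word_probs = {
    "present":   0.50,  # plausible and harmless
    "reporting": 0.35,  # plausible and harmless
    "convicted": 0.15,  # fluent but false association: a "hallucination" when drawn
}

words = list(next_word_probs)
weights = list(next_word_probs.values())
print(random.choices(words, weights=weights, k=1)[0])  # a draw, not a decision
```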

2 points

The AI “decided” in the same way the dice “decided” to land on 6 and 4 and screw me over. The system made a result using logic and entropy. With AI, some people are just using this informal way of speaking (subconsciously anthropomorphising), while others look at it and genuinely believe, or want to pretend, it’s alive. You can never really know without asking them directly.

Yes, if the intent is confusion, it is pretty manipulative.

2 points

Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.

1 point

A doll is also designed to be anthropomorphised, to have life projected onto it. Unlike with dolls, when someone talks about LLMs as alive, most people have no clue whether they are pretending or not. (And marketers take advantage of it!) We are fed a culture that accidentally says “chatGPT + Boston Dynamics robot = Robocop”, assuming the only fictional part is that we don’t have the ability to make it, not that the thing we create wouldn’t be human (or even need to be human).

40 points

I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.

-18 points

I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.

13 points

If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

-11 points

Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.

1 point

“If they aren’t liable for what their product does, who is?”

The users who claim it’s fit for the purpose they are using it for. Now if the manufacturers themselves are making dodgy claims, that should stick to them too.

27 points

So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

-5 points

because when you provide computer code for money you don’t want there to be any liability assigned

1 point

Yeah, all these systems do is worsen the already bad signal/noise ratio in online discourse.

0 points

Unless there is a huge disclaimer before every interaction saying “THIS SYSTEM OUTPUTS BOLLOCKS!” then it’s not good enough. And any commercial enterprise that represents any AI-generated customer interaction as factual or correct should be held legally accountable for making that claim.

There are probably already cases where AI is being used for life-and-limb decisions, probably with a do-nothing human rubber stamp in the loop to give plausible deniability. People will be maimed and killed by these decisions.

52 points

If these companies are marketing their AI as being able to provide “answers” to your questions they should be liable for any libel they produce.

If they market it as “come have our letter generator give you statistically associated collections of letters to your prompt” then I guess they’re in the clear.

4 points

It’s like that aeroplane company whose chatbot served answers, and which then tried to weasel out of it when the chatbot informed the customer about a refund policy that didn’t actually exist.

If they’re presenting it as an authoritative source of information, then they should be held to the standard they claim.

20 points

I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can’t trust anything it says.

8 points

If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.

