When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes committed by the people whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

9 points

Yeah that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they did the problem wouldn’t need solving.

-6 points

What other networks?

It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.
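A minimal sketch of what that could look like in practice: mining past sessions for human corrections and saving them as training pairs. Everything here (the log format, the field names, the file name) is invented for illustration; no real chatbot exposes exactly this.

```python
import json

# One toy past session: question, wrong model answer, human correction.
session = [
    {"role": "human", "text": "What's 2+2?"},
    {"role": "model", "text": "5"},
    {"role": "human", "text": "wrong", "correction": "4"},
]

def extract_corrections(log):
    """Yield (prompt, completion) pairs wherever a human flagged the
    model's previous reply as wrong and supplied the right answer."""
    for i, turn in enumerate(log):
        if turn["role"] == "human" and "correction" in turn and i >= 2:
            yield {"prompt": log[i - 2]["text"],
                   "completion": turn["correction"]}

# Append the pairs to a JSONL file that a later fine-tuning run could consume.
with open("corrections.jsonl", "a") as f:
    for pair in extract_corrections(session):
        f.write(json.dumps(pair) + "\n")
```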

4 points

here’s that same conversation with a human:

“Why is X?” “Because Y!” “You’re wrong.” “Then why the hell did you ask me if you already knew the answer?”

What you’re describing will train the network to get the wrong answer and then apologize better. It won’t train it to get the right answer.

-2 points

I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.

“Johnny, what’s 2+2?”

“5?”

“No, Johnny, try again.”

“Oh, it’s 4.”

Turning Johnny into an LLM: the next time someone asks, he might not remember 4, but he does remember that “5” consistently gets him a “that’s wrong” response. So does “3”.

But the only way he knows 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.

He becomes a better and better mimic, which gets him up to about a 5th grade level of intelligence instead of a toddler.
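To make the mimicry point concrete, here is a toy, invented sketch of a learner that only down-weights answers that drew a “that’s wrong” reaction. It reliably converges on “4”, yet nothing in it represents why 4 is correct – it has only learned what not to say:

```python
import random
from collections import defaultdict

candidates = ["3", "4", "5"]
weights = defaultdict(lambda: 1.0)  # preference for each candidate answer

def answer():
    """Sample an answer in proportion to its current weight."""
    total = sum(weights[c] for c in candidates)
    r = random.uniform(0, total)
    for c in candidates:
        r -= weights[c]
        if r <= 0:
            return c
    return candidates[-1]

for _ in range(200):       # repeated "Johnny, what's 2+2?" drills
    a = answer()
    if a != "4":           # the teacher says "that's wrong"
        weights[a] *= 0.8  # Johnny learns to avoid that answer a bit more

print({c: round(weights[c], 3) for c in candidates})
# "4" dominates, but no arithmetic is encoded anywhere in `weights`.
```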

5 points

Have you tried doing this? I have, for nearly a year, on the more ‘advanced’ pro versions. Yes, it will apologise and try again – and it gets progressively worse over time. There’s been a marked degradation: all the models are worse now at maintaining context and not hallucinating than they were several months ago.

LLMs aren’t the kind of AI that can evaluate themselves and improve like you’re suggesting. Their logic just doesn’t work like that. A true AI will come from an entirely different type of model, not from LLMs.

e: time. Wow, where did this year go?

