When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

-23 points

It’s a solvable problem. AI is currently at a stage of development equivalent to a 2-year-old, just with better grammar. Everything it is doing now is mimicry and babbling.

It needs to feed its own interactions right back into its training data to become a better and better mimic. Eventually, the mechanism it uses to select the appropriate data to form a response will become more and more sophisticated, and it will hallucinate less and less. Eventually, its hallucinations will be seen as “insightful” rather than wild-ass guesses.
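
A toy sketch of the loop being described (Python, with stand-in `train` and `generate` functions made up for illustration, nothing like a real pipeline):

```python
import random

corpus = ["the cat sat on the mat", "the dog chased the ball"]  # seed data

def train(data):
    # stand-in for real training: the "model" is just the pool of sentences seen so far
    return list(data)

def generate(model):
    # stand-in for generation: echo something from the pool, occasionally corrupted
    words = random.choice(model).split()
    if random.random() < 0.3:
        words[random.randrange(len(words))] = "hallucination"
    return " ".join(words)

model = train(corpus)
for step in range(5):
    new_samples = [generate(model) for _ in range(4)]
    corpus.extend(new_samples)   # feed its own output straight back in
    model = train(corpus)        # retrain on the now-polluted corpus
    print(step, new_samples)
```

Even in this toy, the corrupted samples pile up in the training pool with every round, which is the objection raised in the replies below.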

23 points

Also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn’t make it better.

-12 points

This is incorrect, or perhaps outdated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.
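
Roughly the kind of pipeline being described, as a sketch (the generator and judge here are made-up stand-ins, not any particular lab’s setup):

```python
import random

def generate_candidate():
    # stand-in generator: emits (question, proposed answer) pairs, sometimes wrong
    a, b = random.randint(1, 9), random.randint(1, 9)
    answer = a + b if random.random() < 0.7 else a + b + 1   # ~30% of outputs are wrong
    return (f"{a} + {b}", answer)

def judge(question, answer):
    # stand-in for the second AI that tags the data; here it just redoes the arithmetic
    a, b = (int(x) for x in question.split(" + "))
    return a + b == answer

candidates = [generate_candidate() for _ in range(100)]
kept = [c for c in candidates if judge(*c)]  # only judge-approved samples enter training
print(f"kept {len(kept)} of {len(candidates)} generated samples for training")
```

The filtering step is doing all the real work here: if the judge shares the generator’s blind spots, the kept data shares them too.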

14 points

Yes it is, and it doesn’t work.

Edit: to expand, if you’re generating data, it’s an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won’t be in the set (because you didn’t know about them, so the network never sees any).
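
A quick illustration of the outlier point, with made-up numbers and nothing but the Python standard library:

```python
import random, statistics

# "real" data: mostly ordinary values, plus one rare outlier
real = [random.gauss(0, 1) for _ in range(10_000)] + [12.0]

# fit a simple model to it (here just a mean and a standard deviation)
mu, sigma = statistics.mean(real), statistics.stdev(real)

# generate a synthetic dataset from that fitted model
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

print("max of real data:     ", max(real))       # ~12, the outlier is there
print("max of synthetic data:", max(synthetic))  # typically ~4, the outlier is gone
```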

16 points

The outputs of the neural network are sampled using a random process. The probability distribution is decided by the LLM; the loaded die comes after the LLM. No, it’s not solvable. Not with LLMs. Not now, not ever.
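
A minimal sketch of that “loaded die” step, with made-up token scores:

```python
import math, random

# made-up scores a model might assign to candidate completions of "Bernklau is ..."
logits = {"a court reporter": 4.0, "a journalist": 2.0, "a convicted criminal": 0.5}

# softmax: turn the raw scores into a probability distribution (the LLM's job)
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# the loaded die: the actual output is then drawn at random from that distribution
tokens, weights = zip(*probs.items())
for _ in range(5):
    print(random.choices(tokens, weights=weights)[0])  # usually the likely token, but not always
```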

-20 points

Good luck being pro-AI here. Never mind that they could just put a note in the prompt saying “the writer of this document was not responsible for the act; they are just writing about it,” and it would not frame them as the perpetrator.
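
For what it’s worth, “a note on the prompt” would look something like this (the `send_to_model` call is hypothetical, not a real client):

```python
DISCLAIMER = (
    "Note: the journalist named in any retrieved articles is the reporter who "
    "covered these cases, not the perpetrator of the crimes described."
)

def build_prompt(user_question: str) -> str:
    # prepend the clarifying note to whatever the user asked
    return f"{DISCLAIMER}\n\n{user_question}"

prompt = build_prompt("Who is Martin Bernklau?")
print(prompt)
# response = send_to_model(prompt)  # hypothetical API call
```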

16 points

If you already know the answer, you can tell the AI the answer as part of the question and it’ll give you the right answer.

That’s what you sound like.

AI people are as annoying as the Musk crowd.

1 point

I’m no AI fanboy, but what you just described was the feedback cycle during training.

-18 points

How helpful of you to tell me what I’m saying, especially when you reframe my argument to support yourself.

That’s not what I said. Why would you even think that’s what I said?

Before you start telling me what I sound like, you should probably try to stop sounding like an impetuous child.

Every other post from you is “dude” or “LMAO.” How do you expect anyone to take anything you post seriously?

-20 points

You know what, don’t bother responding to me; I’m just blocking you now, before you drag out more of that tired right-wing bullshit you use to fight with everyone else. None of your arguments on here are worth anyone even reading, so I’m not going to waste my time responding to or reading anything from you ever again.

14 points

The problem isn’t being pro-AI. It’s people pulling supposed AI capabilities out of their asses without having actually looked at a single line of code. This is obvious to anyone who has coded a neural network. Yes, even to OpenAI themselves, but if they let you believe that, then the money stops flowing. You simply can’t get an 8-ball to give the correct answer consistently, because it’s fundamentally random.

