41 points

The hallucinations will continue until the training data is absolutely perfect

39 points

That’s not correct, btw. AI is supposed to be creative and come up with new text/images/ideas, even with perfect training data. That’s what creativity means: we want it to come up with new text out of thin air, and perfect training data isn’t going to change anything about that. We’d need to remove the ability to generate fictional stories and lots of other answers too, or come up with an entirely different approach.

8 points

AI isn’t supposed to be creative; it isn’t even capable of that. It’s meant to min/max its evaluation criterion against a test dataset.

It does this by regurgitating the training data associated with a given input as closely as possible.
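To make the "memorize and regurgitate" framing concrete, here is a minimal sketch (a toy bigram model, not a real LLM): for each context token it memorizes the most frequent continuation in the training data and emits exactly that. The corpus and function names are made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-token frequencies, then keep only the argmax per context.

    This is the degenerate 'minimize training loss by memorizing' case:
    the model can only replay continuations it has literally seen.
    """
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    # Greedy decoding table: context token -> most common next token.
    return {prev: c.most_common(1)[0][0] for prev, c in counts.items()}

corpus = ["the cat sat", "the cat ran", "the cat sat down"]
model = train_bigram(corpus)
print(model["cat"])  # "sat" — the most frequent continuation in training
```

Whether large neural models do more than a scaled-up version of this is exactly what the rest of the thread argues about.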

-2 points

I’ve heard people say that before, but it’s not true. You can ask an AI to draw you an astronaut on a horse and it’ll do it despite never having seen such a picture. (Now it has.) The same applies to LLMs: they come up with an answer to your exact question, not a similar one they saw on Reddit before. That answer might be wrong (which is my point), but if you try it, you’ll regularly find it tries answering your questions and not different ones.

I’ve also tried some sci-fi storywriting with AI, and there it becomes quite obvious that it’s able to take things it knows from different contexts and apply them to my setting: ethics questions, basic physics, what characters can and cannot do, rough knowledge about how stories are written. You can tell it to do a plot twist at an arbitrary point and it will. All of that is knowledge about (abstract) concepts and the ability to apply it to different contexts, which is an important part of creativity.

And I’ve read papers where scientists try to look inside AI models, and they’re able to spot abstract concepts, like what a cat is, in the weights. It’s fascinating how it works. And it turns out it’s not just regurgitating its training data, which isn’t surprising, because a lot of computer-science effort has been put into making AI more than that. It’s also why these models are useful in the first place.

13 points

In order to get perfect training data, they cannot use any human output.

I’m afraid it is not going to happen anytime soon :)

3 points

What other output do you propose?

5 points

What other output do you propose?

I’m not proposing one, and it doesn’t necessarily have to be any output.

The first question is what they want the AI to do. And if they want it to be perfect, then they need to use perfect training data, not human output.

2 points

I’ve started editing my answers/questions on StackExchange, a few characters at a time. I’m doing my part.

3 points

Won’t that also make things worse for people looking for answers?

1 point

Are you improving it, or are you creating new errors? ;-)

9 points

Most improvements in machine learning have been made by increasing the amount of data (and by using models that generalize better over larger datasets).

Perfect data isn’t needed, as the errors will “even out”. Although now there’s the problem that most new content on the Internet is low-quality AI garbage.
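The "errors even out" claim rests on the law of large numbers, and it only holds for unbiased noise. A quick sketch (hypothetical numbers, purely illustrative): averaging away random label noise works, but a systematic bias, like correlated AI-generated garbage, survives any amount of data.

```python
import random

random.seed(0)
true_value = 10.0

def noisy_mean(n, bias=0.0):
    """Average n noisy observations of true_value.

    Unbiased Gaussian noise shrinks as 1/sqrt(n);
    a systematic bias does not shrink at all.
    """
    samples = [true_value + bias + random.gauss(0, 2.0) for _ in range(n)]
    return sum(samples) / n

print(abs(noisy_mean(10) - true_value))               # noticeable error
print(abs(noisy_mean(100_000) - true_value))          # tiny: noise evened out
print(abs(noisy_mean(100_000, bias=3.0) - true_value))  # bias of ~3 remains
```

Which is why the "low-quality AI garbage" caveat matters: its errors are correlated, not random, so more of it doesn’t average away.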

1 point

Perfect data isn’t needed as the errors will “even out”.

That is an assumption.

I do not think that it is a correct assumption.

now there’s the problem that most new content on the Internet is low quality AI garbage.

This reminds me of a recommendation from some philosopher, I forget who, that you should read only books that are at least 100 years old.

3 points

I’m extrapolating from history.

15 years ago people made fun of AI models because they could mistake some detail in a bush for a dog. Over time the models became more resistant against those kinds of errors. The change was more data and better models.

It’s the same type of error as hallucination. The model is overly confident about a thing it’s wrong about. I don’t see why these types of errors would be any different.

4 points

Hallucinations are an unavoidable part of LLMs, and are just as present in the human mind. Training data isn’t the issue. The issue is that the design of the systems that leverage LLMs uses them to do more than they should be doing.

I don’t think that anything short of being able to validate an LLM’s output without running it through another LLM will be able to fully prevent hallucinations.

11 points

And yet people still use those bullshit generators and call their bullshit “hallucinations”.

We broke the internet for this.

1 point

The internet was broken before.

7 points

me when paywalls

1 point

👍

1 point
Deleted by creator
2 points

what does a web browser have to do with a search engine?

3 points

Edge comes with a ton of Microsoft’s crappy AI pre-enabled: Bing Chat, Copilot, etc.

0 points
Deleted by creator
