Meta “programmed it to simply not answer questions,” but it did anyway.

120 points

Hallucinating is a fancy term for BEING WRONG.

Unreliable bullshit generator is still unreliable. Imagine that!

1 point

That’s like saying “car crash” is just a fancy word for “accident,” or “cat” is just a fancy term for “animal.”

Hallucination is a technical term for this type of AI, and it’s inherent to how it works at its core.

And now I’ll let you get back to your hating.

-1 points

The funny thing is we hallucinate all our answers too. I don’t know where these words are coming from, and I’m not reasoning about them beyond constructing a grammatically correct sentence. Why did I type this? I don’t have a fucking clue. 😂

We map our meanings onto whatever words we see fit. If I had a dollar for every time I’ve heard a Republican call Obama a Marxist, I’d be rich; it still blows my mind.

Thank you for saying something too. Better than I could do. I’ve been thinking about AI since I was a little kid. I’ve watched it go from at best some heuristic pathfinding in video games all the way to what we have now. Most people just weren’t ever paying attention. It’s been incredible to see that any of this was even possible.

I watched Two Minute Papers back when he was mostly doing light transport simulation (ray tracing). It’s incredible where we are, but baffling that people can’t see the tech as separate from good old capitalism and the owner class. It just so happens it takes a fuckton of money to build stuff like this, especially at first. This is super early.

9 points

“Hallucination” is also wildly misleading. The AI does not believe something that isn’t real; it was simply incorrect in the words it guessed would be appropriate.

51 points

AI doesn’t know what’s wrong or correct. It hallucinates every answer. It’s up to the supervisor to determine whether it’s wrong or correct.

Mathematically verifying the correctness of these algorithms is a hard problem. That’s intentional: it’s the trade-off for their incredible efficiency.

Besides, it can only “know” what it has been trained on. It shouldn’t be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn’t know how to use these models.

13 points

It is impossible to mathematically determine if something is correct. Literally impossible.

At best, the most popular answer, even if it is narrowed down to reliable sources, is what it can spit out. Even that isn’t the same thing as consensus, because AI is not intelligent.

If the ‘supervisor’ has to determine if it is right or wrong, what is the point of AI as a source of knowledge?

2 points

> It is impossible to mathematically determine if something is correct. Literally impossible.

No, you’re wrong. You can indeed prove the correctness of a neural network. You can also prove the correctness of many other things; it’s an integral part of mathematics and computer science.

For example, a very simple proof: with the definition that an even number is 2k for some integer k, you can prove that the sum of two even numbers is again even (and that proof is definitive): 2a + 2b = 2(a + b), which is even because a + b is itself an integer.
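
For the curious, here’s the same argument machine-checked: a minimal Lean 4 sketch (assuming Mathlib is available for the `ring` tactic), with “n is even” encoded directly as ∃ k, n = 2k.

```lean
import Mathlib.Tactic

theorem even_add_even (a b : ℤ)
    (ha : ∃ j, a = 2 * j) (hb : ∃ k, b = 2 * k) :
    ∃ m, a + b = 2 * m := by
  obtain ⟨j, hj⟩ := ha                 -- a = 2j
  obtain ⟨k, hk⟩ := hb                 -- b = 2k
  exact ⟨j + k, by rw [hj, hk]; ring⟩  -- 2j + 2k = 2(j + k)
```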

Obviously, proving properties of something as complex as an AI model is more involved. But that’s why we have scientists who work on exactly that.

> At best, the most popular answer, even if it is narrowed down to reliable sources, is what it can spit out. Even that isn’t the same thing as consensus, because AI is not intelligent.

That is correct, but it’s not a limitation; it’s by design. It’s the trade-off for the efficiency of the models. It’s like lossy JPEG compression: you accept some artifacts, but in return you get much smaller images and much faster loading times.
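
To make the analogy concrete, here’s a minimal Python sketch of that quality-versus-size dial using Pillow; the input file name is hypothetical.

```python
from PIL import Image  # pip install Pillow

img = Image.open("photo.png").convert("RGB")  # hypothetical input file

# Higher quality keeps more detail but produces a bigger file; lower
# quality introduces visible artifacts but shrinks the output a lot.
for quality in (95, 50, 10):
    img.save(f"photo_q{quality}.jpg", format="JPEG", quality=quality)
```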

But there are indeed "AI"s and neural networks that have been proven correct. This is mostly applied to safety-critical applications like airplane collision avoidance systems or DAS. But a language model is not safety critical, so we take full advantage of the trade-off.

> If the ‘supervisor’ has to determine if it is right or wrong, what is the point of AI as a source of knowledge?

You’re completely misunderstanding the whole thing. The only reason it’s so incredibly good in many applications is because it’s bad in others; it’s intentionally designed that way. There are exact algorithms and there are approximation algorithms, and the latter tend to be much more efficient and usable in practice.
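
To illustrate that trade-off with a classic example (a generic sketch, nothing specific to LLMs), compare an exact and an approximate algorithm for minimum vertex cover:

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]  # toy graph
nodes = {v for e in edges for v in e}

def exact_cover(edges, nodes):
    # Try every subset in increasing size: always optimal, exponential time.
    for size in range(len(nodes) + 1):
        for subset in combinations(nodes, size):
            if all(u in subset or v in subset for u, v in edges):
                return set(subset)

def approx_cover(edges):
    # Classic 2-approximation: take both endpoints of any uncovered edge.
    # Linear time, and provably at most twice the optimal size.
    cover, remaining = set(), list(edges)
    while remaining:
        u, v = remaining.pop()
        cover |= {u, v}
        remaining = [e for e in remaining if u not in e and v not in e]
    return cover

print(exact_cover(edges, nodes))  # {1, 3} on this toy graph
print(approx_cover(edges))        # fast, guaranteed within 2x of optimal
```

The exact version is the “right answer, eventually” option; the approximation is the “good answer, now” option, and that is the bargain these models strike.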

0 points

That is, unless you define “correct” in mathematical terms, which no one has done yet.

-3 points

We should understand that 99.9% of what we say and think and believe is whatever feels good to us, which we then rationalize using very faulty reasoning, and that’s only when really challenged! You know how I came up with these words? I hallucinated them. It’s just a guided hallucination. People with certain mental illnesses are less guided by their senses.

We aren’t magic, and I don’t get why it is so hard for humans to accept that any individual is nearly useless for figuring anything out. We have to work as agents too, so why do we expect an early-days LLM to be perfect? It’s so odd to me. A computer is trying to understand our made-up bullshit. A logic machine trying to comprehend bullshit. It is amazing it even appears to understand anything at all.

0 points

Uhm. Have you ever talked to a human being?

1 point

Human beings are not infallible either.

8 points

Is it wrong to root for this simply because I hate that shitbag?

4 points

Hatred is a path to the dark side.

As evidenced by you now rooting for misinformation.

1 point

Oh, I’m far too pragmatic to believe that. If truth isn’t working, then what choice do you really have?

13 points

Maybe Meta AI is onto something.

53 points

Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”

If you’re expecting a glorified autocomplete to know about things it doesn’t have in its training data, you’re an idiot.
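
As a toy illustration of what “glorified autocomplete” means, here’s a minimal bigram model in Python. It can only ever emit words that appeared in its training text; events after the training cutoff simply do not exist for it.

```python
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat slept"

# Record which words follow which in the training data.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def complete(word: str, length: int = 5) -> str:
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # never seen this word: nothing to predict
            break
        out.append(random.choice(options))
    return " ".join(out)

print(complete("the"))    # a plausible-looking remix of the training data
print(complete("zebra"))  # just "zebra": the word isn't in the data
```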

3 points

Yeah, the average person is the idiot here, for something they never asked for and see no value in. Companies threw billions of dollars at this emerging technology, and many products like Google Search now have hallucinating, error-prone AI forced in, with no way to opt out or use the (working) legacy version…

-2 points

Nobody is forcing you to use it.

I’m using it and I see great value in it. And if there are people who see value in a product, then it’s worth the investment.

5 points

Yes, people are being forced to use it if they want to search with, for instance, Google or Bing.

As the parent comment suggested, there’s no way to opt out currently.

I’m glad you see value in it; I think injecting LLM output into search results that I want to contain accurate results (and nothing more) is a useless waste of power.

1 point

I always ask the people defending AI, or rather LLMs, what this great value is that they all mention in their comments. So far the “best” answer I got was one dude using LLMs to extract info from decades-old reports that no one has checked in 20 years hahaha. So glad we are allowing LLMs to destroy the environment and plagiarize all creative work for that lol.

So, what is the great value you see, man?

3 points

Some services will use glorified RAG (retrieval-augmented generation) to put more current info in the context.

But yeah, if it’s just the raw model, I’m not sure what they were expecting.
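
Roughly, that retrieval step looks like the sketch below; the scoring function and the `llm_complete` stub are hypothetical stand-ins, not any real service’s API.

```python
def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (e.g., an HTTP API).
    return f"[model output conditioned on: {prompt[:60]}...]"

def retrieve(query: str, documents: list[str]) -> str:
    # Toy relevance score: count words a document shares with the query.
    words = set(query.lower().split())
    return max(documents, key=lambda d: len(words & set(d.lower().split())))

def answer(query: str, documents: list[str]) -> str:
    # Put fresh, relevant text into the context so the model is not
    # limited to whatever was in its training data.
    context = retrieve(query, documents)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm_complete(prompt)

docs = ["Training data snapshot from 2023.", "Breaking news article from today."]
print(answer("summarize the breaking news", docs))
```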

1 point

Sir, are you telling me AI isn’t a panacea for conveying facts? /s

37 points

There are definitely idiots, but these idiots don’t get their ideas of how the world works out of thin air. These AI chatbot companies push hard in their advertisements the cartoon reality that this is a smart robot that knows things, and to learn otherwise you have to either listen to smart people or read a lot of text.

1 point

I just assumed it was BS at first, but I also once nearly went unga bunga caveman against a computer from 1978. So I probably have a deeper understanding of how dumb computers can be.

-2 points

Does the AI consistently say that, no matter who asks?

Because if so, that’s not a hallucination.
