You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst…please don’t do this.)
Well, according to an interview with Google CEO Sundar Pichai at The Verge, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which is what drives AI Overviews, and this feature “is still an unsolved problem.”
Are they now AI, large language models, or AI large language models?
This is what happens every time society goes along with tech bro hype. They just run directly into a wall. They are the embodiment of “Didn’t stop to think if they should” and it’s going to cause a lot of problems for humanity.
I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.
TBH this is surprisingly honest.
There’s really nothing they can do; that’s just the current state of LLMs. People are insane — they can literally talk with something that isn’t human. We are literally the first humans in history to have a human-level conversation with something that isn’t human… and they don’t like it because it isn’t perfect four years after release.
Using fancy predictive text is not like talking to a human level intelligence.
You’ve bought into the fad.
In terms of language, ChatGPT is more advanced than most humans. Have you spoken to the average person lately? By average I mean the worldwide average.
It’s obviously not full human intelligence, but in terms of language it is pretty mind blowing.
these hallucinations are an “inherent feature” of AI large language models (LLMs), which is what drives AI Overviews, and this feature “is still an unsolved problem.”
Then what made you think it’s a good idea to include that in your product now?!