These models are nothing more than glorified autocomplete algorithms parroting the responses to questions that already existed in their training data.
They’re completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.
If they receive an input that doesn’t have a strong correlation to their training, they just output whatever bullshit comes close, whether it’s true or not. Which makes them truly dangerous.
And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”
I can’t wait for this stupid AI craze to eat its own tail.
Last I checked (which was a while ago), “AI” still can’t pass the most basic of tasks, such as “show me a blank image” / “show me a pure white image”. The LLM will output the most intense fever dream possible, but never a simple rectangle filled with #fff-coded pixels. I’m willing to debate the potentials of AI again once they manage to do that without those “benchmarks” getting special attention in the training data.
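For perspective, the task in question is trivially deterministic. A minimal sketch in Python using Pillow (the filename and dimensions are arbitrary placeholders I picked for illustration):

```python
from PIL import Image

# A 512x512 image where every pixel is pure white (#ffffff).
# "blank.png" and the dimensions are just example values.
img = Image.new("RGB", (512, 512), color="#ffffff")
img.save("blank.png")
```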
Problem is, AI companies think they could solve all the current problems with LLMs if they just had more data, so they buy or scrape it from everywhere they can.
That’s why you hear every day about yet more and more social media companies penning deals with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps, because those same APIs could also be used to easily scrape data for LLMs. Why give that data away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.
Yep. In cryptography there was a moment when cryptographers realized that the key must be secret, the message should be secret, but the rest of the system cannot be secret (Kerckhoffs’s principle), for the social purpose of refining said system in the open. EDIT: And that these must be separate entities.
These guys basically use lots of data instead of algorithms. Like buying something with oil money instead of money made on construction.
I just want to see the moment when it all bursts. I’ll be so gleeful. I’ll go and buy an IPA and laugh in every place on the Internet where I see this discussed.
Because it’s not AI, it’s a sophisticated pattern separation, recognition, lossy compression and extrapolation system.
Artificial intelligence, like any intelligence, has goals and priorities. It has positive and negative reinforcements from real inputs.
Real AI will be possible when it’s able to want something and decide something, with that decision based on entropy and not extrapolation.
Artificial intelligence, like any intelligence, has goals and priorities
No. Intelligence does not necessitate goals. You are able to understand math, letters, words, and their meanings without pursuing a specific goal.
Because it’s not AI, it’s a sophisticated pattern separation, recognition, lossy compression and extrapolation system.
And our brains work in a similar way.
I’m willing to debate the potentials of AI again once they manage to do that without those “benchmarks” getting special attention in the training data.
You sound like the guys who wrote off AI because a single neuron couldn’t solve the XOR problem. Guess what: build a network out of neurons and the problem is solved.
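To illustrate (the weights below are hand-picked for this toy example, not trained; a single threshold unit can’t do this because XOR isn’t linearly separable, but two layers can):

```python
# A one-layer linear threshold unit can't compute XOR.
# Stack two layers of the same units and it falls out immediately.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h_or  = step(a + b - 0.5)        # fires if a OR b
    h_and = step(a + b - 1.5)        # fires if a AND b
    return step(h_or - h_and - 0.5)  # OR minus AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```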
What potentials are you talking about? The potentials are tremendous. There’s a plethora of algorithms, theoretical knowledge and practical applications where AI really shines and proves its potential. Just because LLMs currently lack several capabilities doesn’t mean future developments can’t improve on that, maybe even with something that isn’t a contemporary LLM. LLMs are just one thing in the wide field of AI. They can do really cool stuff, which points towards further potential in that area. And if it’s not LLMs, then possibly other types of AI architectures.
I generally agree with your comment, but not on this part:
parroting the responses to questions that already existed in their training data.
They’re quite capable of following instructions over data where neither the instruction nor the data was anywhere in the training data.
They’re completely incapable of critical thought or even basic reasoning.
Critical thought, generally no. Basic reasoning, that they’re somewhat capable of. And chain of thought amplifies what little is there.
I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but look like things that were; that is, they can probabilistically interpolate between what they saw in training and what you prompted them with (this is why prompting can be so important). Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g. from an online help forum or study materials), it can emulate that process with different keywords and phrases. The models themselves, however, are not able to perform “a is to b, therefore b is to a”, arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only the probability of observing a token given some context. So even with chain of thought, it is not reasoning; it’s just doing very fancy interpolation of the words and phrases in the initial prompt to generate a prompt that is probably going to give a better answer, not because of reasoning, but because of a stochastic process.
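To make “the probability of observing a token given some context” concrete, here’s a deliberately tiny sketch. The lookup table and its numbers are made up for illustration; a real LLM computes this distribution with a neural network, but the autoregressive sampling loop around it looks the same:

```python
import random

# Toy "language model": a lookup table of next-token probabilities
# given the two preceding tokens. A real model computes p(token | context)
# with a transformer, but the interface is identical:
# context in, distribution over the next token out.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
    ("sat", "on"):  {"the": 0.8, "a": 0.2},
}

def next_token(context):
    """Sample the next token from p(token | last two tokens of context)."""
    dist = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

tokens = ["the", "cat"]
while tokens[-1] != "<end>" and len(tokens) < 10:
    tokens.append(next_token(tokens))
print(" ".join(tokens))  # e.g. "the cat sat on the <end>"
```

No step in that loop models truth or state; it only picks a likely continuation, which is the whole point being made above.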
Synthesis versus generation. Yes.
And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”
It’s a tower of Babel IRL.
The current AI discussion I’m reading online has eerie similarities to the debate about legalizing cannabis 15 years ago. One side praises it as a solution to all of society’s problems, while the other sees it as the devil’s lettuce. Unsurprisingly, both sides were wrong, and the same will probably apply to AI. It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.
It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.
I believe that some of the people in the middle will have more accurate views on the subject, indeed. However, note that there are multiple ways to be in the “middle ground”, and some are sillier than the extremes.
For example, consider the following views:
- That LLMs are genuinely intelligent, but useless.
- That LLMs are dumb, but useful.
Both positions are middle grounds - and yet they can’t be accurate at the same time.
Of course they don’t; logical reasoning isn’t just guessing the word or phrase that comes next.
As much as some of these tech bros want human thinking and creativity to be reducible to mere pattern recognition, it isn’t, and it never will be.
But the corpos and Capitalists don’t care, because their whole worldview is based in the idea that humans are only as valuable as the profitability they generate for a company.
They don’t see any value in poetry, or philosophy, or literature, or historical analysis, or visual arts unless it can be patented, trademarked, copyrighted, and sold to consumers at a good markup.
As if the only difference between Van Gogh’s art and an LLM is the size of the sample data and the efficiency of an algorithm.
You don’t have to get all philosophical, since the value of art is almost by definition debatable.
These models can’t do basic logic. They already fail at this. And that’s actually relevant to corpos if you can suddenly convince a chatbot to reduce your bill by 60% because bears don’t eat mangos or some other nonsensical statement.
It’s all connected, the reasons why it can’t do basic logical reasoning are the same for why it can’t replace human art.
It’s because neither of those activities are mere pattern recognition and statistical inference, which is all LLMs will ever be.
LLMs and image-generating models are completely different things. Outputting an image doesn’t require or benefit from reason and logic (other than making the model “understand” the prompt). Drawing a three-headed monkey isn’t “logical” and doesn’t follow “reason”, but that’s OK, because art isn’t about making photorealistic images.
AI images could totally be useful as a tool in art. “But a computer made it! It’s not art!” It’s the same tired argument we heard about electronic music before.
But the fediverse seems to have such a hate boner for ANYTHING associated with AI (don’t get me wrong, there is lots to hate, mostly the tech-bro grifting…) that people are unable to see that these can be useful complements to human creativity.
Here’s another example: people crying when an image contains AI-generated elements, or a video game contains some AI assets. People fly into a rage and want to dismiss the ENTIRE work and throw it all out. Human art doesn’t require 100% human hands to make it. Go look at any famous painting by a Renaissance master. Did you know a lot of these guys had whole workshops of lackeys filling in background details for them? Are we going to throw out all the Raphael and Rembrandt paintings because they had assistance from other uncredited people?
Same with AI. Why can’t an artist spend MORE time on important details and let AI draw some happy little trees in the background?
I’m just thinking - 12 years ago there was a lot of talk of politicians and big corpo chiefs being replaceable with a shell script. As both a joke and an argument in favor of something requiring change.
One could say the point was that these people aren’t needed, since engineers can build their replacements.
In some sense, AI is politicians and big bosses trying to build a replacement for engineers, using the means available to them.
Maybe they noticed, got pissed, and are trying to enact revenge. Sort of a turf war between domains.
I keep thinking of the anticapitalist manifesto that a spinoff team from the Disco Elysium developers dropped, and this part in particular stands out to me and helps crystallize exactly why I don’t like AI art:
All art is communication — dialogue across time, space and thought. In its rawest, it is one mind’s ability to provoke emotion in another. Large language models — simulacra, cold comfort, real-doll pocket-pussy, cyberspace freezer of an abandoned IM-chat — which are today passed off for “artificial intelligence”, will never be able to offer a dialogue with the vision of another human being.
Machine-generated works will never satisfy or substitute the human desire for art, as our desire for art is in its core a desire for communication with another, with a talent who speaks to us across worlds and ages to remind us of our all-encompassing human universality. There is no one to connect to in a large language model. The phone line is open but there’s no one on the other side.
I work for a consulting company and they’re truly going off the deep end pushing consultants to sell this miracle solution. They are now doing weekly product demos and all of them are absolutely useless hype grifts. It’s maddening.
Apple’s study proves that LLM-based AI models are flawed because they cannot reason
This really isn’t a good title, I think. It was understood that LLM-based models don’t reason, not on their own.
A better one would be that researchers at Apple proposed a metric that better accounts for reasoning capability, a better sort of “score” for an AI’s capability.
I still think it’s better to refer to LLMs as “stochastic lexical indexes” than as AI.
AI in general is a shitty term. It’s mostly PR. The term “intelligence” is very fuzzy and difficult to define, especially for people who are not in the field of machine learning.
No it’s not, that’s why some smart people are starting by defining a more interesting concept: educability.