That’s why they’re calling it “AI”.
That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.
Edit: look at this diagram if you’re still unsure:
What is this nonsense Euler diagram? Emotion can intersect with consciousness, but emotion is also a subset of consciousness, yet somehow consciousness never fully contains emotion? Intelligence doesn’t overlap at all with sentience, sapience, or emotion? Intelligence isn’t related at all to thought, knowledge, or judgement?
Did AI generate this?
https://www.mdpi.com/2079-8954/10/6/254
What is this nonsense Euler diagram?
Science.
Did AI generate this?
Scientists did.
Not everything you see in a paper is automatically science, and not every person involved is a scientist.
That picture is a diagram, not science. It was made by a writer, specifically a columnist for Medium.com, not a scientist. It was cited by a professor who, by looking at his bio, was probably not a scientist. You would know this if you followed the citation trail of the article you posted.
You’re citing an image from a pop culture blog and calling it science, which suggests you don’t actually know what you’re posting; you just found some diagram that you thought looked good despite some pretty glaring flaws, and you keep reposting it as if it’s gospel.
In the general population it does. Most people are not using an academic definition of AI, they are using a definition formed from popular science fiction.
You have that backwards. People are using the colloquial definition of AI.
“Intelligence” is defined by a group of capabilities like pattern recognition, the ability to use tools, problem solving, etc. If one of those criteria is met, then the thing in question can be said to have intelligence.
A flat worm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition) just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.
The “I” implies intelligence, of which there is none because it’s not sentient. It’s intentionally deceptive; it’s used as a marketing buzzword.
You might want to look up the definition of intelligence then.
By literal definition, a flat worm has intelligence. It just doesn’t have much of it. You’re using the colloquial definition of intelligence, which uses human intelligence as a baseline.
I’ll leave this graphic here to help you visualize what I mean:
Please do post this graphic again, I don’t think I’ve quite grasped it yet
Oh, yes. I forgot that LLM have creativity, abstract thinking and understanding. Thanks for the reminder. /s
I’m not gonna lie, most people like you are afraid to entertain the idea of AI being conscious because it makes you look at your own consciousness as not being all that special or unique.
Do you believe in spirits, souls, or god genes?
No, it’s because it isn’t conscious. An LLM is a static model (as, in fact, are all our current AI models). For something to be conscious or sapient, it would require a neural net that can morph and adapt in real time. Nothing currently can do that. Training and inference are completely separate modes. A real AGI would need training and inference to occur at once, continuously.
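The train/infer split described above can be sketched in a few lines. This is a toy illustration with made-up names (a one-layer linear model in NumPy, not any real framework or LLM): training updates the weights, while inference is a pure read of frozen weights that never changes them.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(weights, x, y, lr=0.1):
    # Training mode: a gradient update mutates the weights.
    pred = x @ weights
    grad = x.T @ (pred - y) / len(x)
    return weights - lr * grad

def infer(weights, x):
    # Inference mode: a pure function of the frozen weights.
    # Nothing about the model changes, no matter how often it runs.
    return x @ weights

w = rng.normal(size=(3,))
x = rng.normal(size=(8, 3))
y = x @ np.array([1.0, -2.0, 0.5])

for _ in range(100):
    w = train_step(w, x, y)   # training phase: weights change each step

frozen = w.copy()
_ = infer(w, x)               # inference phase: weights untouched
_ = infer(w, x)
assert np.array_equal(w, frozen)
```

Deployed LLMs work like the second function: their weights are fixed at inference time, which is the sense in which the post calls them static.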
That’s fine, but I was referring to AI as a concept and not just its current iteration or implementation.
I agree that it’s not conscious now, but someday it could be.
Lol.
Average people these days are just so… average. Glad I’m not like you people anymore.