An alarming number of people can’t read above a 6th-grade level.
The LLM peddlers seem to be going for that exact result. That’s why they’re calling it “AI”. Why is it surprising that non-technical people are falling for it?
That’s why they’re calling it “AI”.
That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.
Edit: look at this diagram if you’re still unsure:
What is this nonsense Euler diagram? Emotion can intersect with consciousness, but emotion is also a subset of consciousness, but consciousness also never contains emotion? Intelligence doesn’t overlap at all with sentience, sapience, or emotion? Intelligence isn’t related at all to thought, knowledge, or judgement?
Did AI generate this?
https://www.mdpi.com/2079-8954/10/6/254
What is this nonsense Euler diagram?
Science.
Did AI generate this?
Scientists did.
In the general population it does. Most people are not using an academic definition of AI, they are using a definition formed from popular science fiction.
You have that backwards. People are using the colloquial definition of AI.
“Intelligence” is defined by a group of capabilities like pattern recognition, the ability to use tools, problem solving, etc. If any one of those criteria is met, then the thing in question can be said to have intelligence.
A flatworm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition), just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.
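Purely as an illustrative aside (a hypothetical sketch, not something from the thread): even a few lines of code perform “pattern recognition” in this minimal, graded sense, which is the point about intelligence being a spectrum rather than a binary.

```python
# Hypothetical sketch: a nearest-centroid "pattern recognizer".
# It meets the pattern-recognition criterion only in the most trivial sense,
# illustrating that "has intelligence" is a graded claim, not a binary one.
def fit_centroids(labeled_points):
    """labeled_points: list of (label, (x, y)) pairs."""
    sums, counts = {}, {}
    for label, (x, y) in labeled_points:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def classify(point, centroids):
    x, y = point
    return min(centroids,
               key=lambda label: (x - centroids[label][0]) ** 2
                               + (y - centroids[label][1]) ** 2)

centroids = fit_centroids([("cat", (0, 1)), ("cat", (1, 0)),
                           ("dog", (9, 9)), ("dog", (10, 8))])
print(classify((0.5, 0.5), centroids))  # -> "cat"
```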
The “I” implies intelligence, of which there is none because it’s not sentient. It’s intentionally deceptive because it’s used as a marketing buzzword.
You might want to look up the definition of intelligence then.
By literal definition, a flatworm has intelligence. It just doesn’t have much of it. You’re using the colloquial definition of intelligence, which uses human intelligence as a baseline.
I’ll leave this graphic here to help you visualize what I mean:
I’m not gonna lie, most people like you are afraid to entertain the idea of AI being conscious because it makes you look at your own consciousness as not being all that special or unique.
Do you believe in spirits, souls, or god genes?
No, it’s because it isn’t conscious. An LLM is a static model (all our current AI models are, in fact). For something to be conscious or sapient, it would require a neural net that can morph and adapt in real time. Nothing currently can do that. Training and inference are completely separate modes. A real AGI would have to have training and inference happening at once, continuously.
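To make the “static model” point concrete, here is a minimal sketch (assuming PyTorch, which the comment above doesn’t name): during inference the weights stay frozen, and learning only happens in a separate training loop that is not running while the model answers queries.

```python
# Minimal sketch (assumes PyTorch): at inference time the weights are frozen,
# so nothing the model processes while being used changes it.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)            # stand-in for any trained network
model.eval()                       # inference mode

before = model.weight.detach().clone()

with torch.no_grad():              # no gradients, no weight updates
    _ = model(torch.randn(1, 8))   # "using" the model

assert torch.equal(before, model.weight)  # weights are unchanged
# Learning only happens in a separate training loop (loss.backward(),
# optimizer.step()), which never runs while the model is answering queries.
```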
I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.
Yeah, what is it with Gen Z? Millennials would never skew the results of anonymous surveys.
Lots of attacks on Gen Z here, some valid points about the education they were given by older generations (yet somehow it’s their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?
The debate on consciousness is one we should be having, even if LLMs themselves aren’t really there. If you’re new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it’s about preparing for a true AGI with something akin to consciousness and the dangers we could face, we can have alignment problems without any artificial intelligence at all. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren’t, but we can’t tell, that’s an alignment problem. Everything’s fine until they pursue their goals and those goals suddenly diverge from ours. And the dilemma is that there aren’t any good solutions.
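A toy, invented sketch of that dilemma: an optimizer told to maximize a measurable proxy keeps pushing the proxy up even after it stops tracking the goal we actually care about, and nothing inside the optimizer can tell the difference.

```python
# Toy, invented illustration of an alignment problem: the optimizer maximizes
# a measurable proxy ("engagement") that only partially tracks the goal we
# actually care about ("usefulness"). Past a point, maximizing the proxy
# actively hurts the true goal, and nothing in the optimizer notices.

def usefulness(clickbait_level):
    # The true goal: improves slightly at first, then collapses.
    return 1.0 + 0.5 * clickbait_level - 0.2 * clickbait_level ** 2

def engagement(clickbait_level):
    # The proxy the optimizer is actually given: rises monotonically.
    return 1.0 + 1.0 * clickbait_level

levels = range(0, 11)
optimizer_choice = max(levels, key=engagement)   # what gets optimized
what_we_wanted = max(levels, key=usefulness)     # what we meant

print("optimizer picks clickbait level:", optimizer_choice)              # 10
print("the level we actually wanted:", what_we_wanted)                   # 1
print("usefulness at the optimizer's pick:", usefulness(optimizer_choice))  # -14.0
```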
But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.
Not the fault of prior generations, either. They were raised by their parents, and them by their parents, and so on.
Sometime way back there was a primordial multicellular life form that should have known better.
The main point here (which I think is valid despite my status as a not-in-this-group Gen Z-er) is that we’re still, like, really young. I’m 20, dude; it’s just not my fault or my friends’ fault that school failed us. The fact that it failed us was by design, and despite my own and others’ complaints it has continued to fail the next generation: Gen Alpha is already, very clearly, struggling. I really just don’t think there’s much ground to argue that Gen Z by and large should somehow know better. The whole point of the public education system is to ensure we educate our children well; it’s simply not my fault, or any child’s fault, that school is failing to do so. Now that I’m an adult I can and do push for improved education, but clearly people like me don’t have our priorities straight, seeing who got elected…
Tbh, I’m in my 40s and I don’t think my education was so much better than what younger generations are getting. I’m a software engineer, and most of the skills I need now are not skills I learned in school or even at university.
I started learning programming when I was 9 because my father gave me his old Apple II computer to see what I would do. At the time, this was a privilege. Most children did not get that kind of early exposure. It also made me learn some English early.
In high school, we eventually had some basic programming classes. I was the guy the teacher asked when something didn’t work, or when he forgot where the semicolons go in Pascal. During one year, instead of programming, there was a pilot project where we’d learn about computer-aided math using Waterloo Maple, which just barely ran on our old 486s. That course was great, but after two months the teacher ran out of things to teach us because the math became “too advanced for us”.
And yes, the internet existed at the time; I had access to it at home starting in 1994. We learned nothing about it in school.
When I first went to university I had an Apple PowerBook that I bought with money I earned myself. Even though I worked for it, this was a privilege too; most kids couldn’t afford what was then a very expensive laptop, or any laptop at all. But the reason I’m bringing it up is that my university’s website at the time did not work on it. They had managed to implement even simple buttons that could have been links as Java applets that only worked on Windows. Those were the people I was supposed to learn computer science from. Which, by the way, at the time still meant “math with a side of computer science”. My generation literally could not study in an IT-related field if we couldn’t handle math at the level of a university math major (this changed quickly in the following years in my country, but still).
So while I don’t disagree about education having a lot of room for optimization, when it comes to more recent technologies like AI, it also makes me a bit salty when all of the blame is assigned to education. The generations currently in education, at least in developed countries, have access to so much that my generation only had when our parents were rich or at least nerds like my father (he was a teacher, so we were not rich). And yet, sometimes it feels like they just aren’t interested in doing anything with this access. At least compared to what I would have done with it.
At the same time, also keep in mind that when you say education doesn’t prepare you for AI or whatever other new thing (it used to be just the internet, or “new media”, before), the people you’re expecting that education from are people of my generation, who did not grow up with any of this and were not taught about any of it when we were young. We don’t have those answers either… this stuff is new for everyone. And for the people you expect to do the teaching, it’s way more alien than it is for you. This was true when I went to school too, and I think it’s inevitable in a world moving this fast.
Failure usually happens at multiple points, not just one. It’s a failure of education, of social pressures, of a lack of positive environments, and, yes, of choice. The problem with free will is that you have the chance to choose wrong. You can blame everyone in the world, but if you don’t take accountability for your own actions and choices, nothing will change.
There has never been a time with as much access to information as now. While there is as much misinformation, likely more… that does not mean individuals have no culpability for their own lack of knowledge or understanding.
That doesn’t mean it’s exclusively their fault, or even anywhere near the majority of it. But it doesn’t mean they lose all free will over their own actions, either. It doesn’t mean they have no ability to be better.
Should we place the weight of the world on their shoulders? Absolutely not, that is liable to break them. But we also shouldn’t hide them from the burden of their own free will. That only weakens them.
This is an angle I’ve never considered before with regard to a future dystopia where a corrupt AI runs the show. AI might never advance beyond what it is in 2025, but because people believe it’s a supergodbrain, we start putting way too much faith in its flawed output, and it’s our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.
The call is coming from inside the house mechanical Turk!
That’s the intended effect. People with real power think this way: “where it does work, it’ll work and not bother us with too much initiative and change, and where it doesn’t work, we know exactly what to do, so everything is covered”. Checks and balances and feedbacks and overrides and fallbacks be damned.
Humans are apes. When an ape gets to rule an empire, it remains an ape and the power kills its ability to judge.
I mean, it’s like none of you people ever consider how often humans are wrong when criticizing AI.
How often have you looked for information from humans and have been fed falsehoods as though they were true? It happens so much we’ve just gotten used to filtering out the vast majority of human responses because most of them are incorrect or unrelated to the subject.