54 points

We’ve had a definition for AGI for decades: a system that can do any cognitive task as well as a human, or better. Humans are “generally intelligent”; replicate the same thing artificially and you’ve got AGI.

15 points

So if you give a human and a system 10 tasks, and the human completes 3 correctly, does 5 incorrectly, and fails to complete 3 altogether… and then you give those same 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general I’d say the tasks need to be defined, because right now I can give people plenty of tasks that language models can solve and they can’t, yet language models still aren’t AGI in my opinion.

8 points

Agree. And these tasks can’t be tailored to the AI just so it has a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary, and return with some groceries and cook dinner. Or at least do something comparable to what a human does. Just wording emails and writing boilerplate computer code isn’t enough in my eyes, especially since it even struggles to do that. It’s the “general” that is missing.

4 points

> It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.

This is more about robotics than AGI. A system can be generally intelligent without having a physical body.

2 points

On the same hand… “fluently translate this email into 10 random, distinct languages” is a task that 99.999% of humans would fail but that a language model should be able to hit.

4 points

Any cognitive task. Not “9 out of the 10 you were able to think of right now”.

5 points

“Any” is very hard to benchmark, and it’s also not how humans are tested.

4 points

Oh yeah!? If I’m so dang smart why am I not generating 100 billion dollars in value?

7 points

It’s a definition, but not an effective one, in the sense that it doesn’t let us test for and recognize AGI. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we would instead need to understand the basic cognitive abilities of humans from which all our other cognitive abilities are composed, if that’s even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is built from a finite list of mechanisms, yet it is considered the ultimate computer (in the classical sense of computing, though with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also limits of human cognition.

3 points

I wonder if we’ll get something like NP-complete for AGI: a set of problems that humans can solve, and to which common problems can be reduced.

1 point

> But we know too little about whether the limits of the Turing machine are also limits of human cognition.

Erm, no. Humans can manually step through interpreters of Turing-complete languages, so we’re Turing-complete ourselves. There is no more powerful class of computation: we can compute any computable function, and our silicon computers can do it as well (given infinite time and scratch space, yada yada theoretical wibbles).

The question isn’t “whether”; the answer to that is “yes, of course”. The question is first and foremost “what”, and then “how”, as in “is it fast and efficient enough”.
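The “manually steppable” point is easy to make concrete. Here’s a minimal Turing-machine simulator (the rule table and names are hypothetical, purely for illustration): every step is a finite table lookup that a person could carry out by hand with pencil and paper, which is exactly the sense in which a human can emulate any Turing machine.

```python
# Minimal Turing machine: binary increment on a tape.
# Each step reads one cell, looks up one rule, writes, and moves;
# nothing here is beyond pencil-and-paper bookkeeping.

def run_tm(tape, rules, state="start", head=0, max_steps=1000):
    """Run a TM given as a dict: (state, symbol) -> (write, move, next_state)."""
    tape = dict(enumerate(tape))              # sparse tape, blank = "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Hypothetical rule table: add 1 to a binary number, head at the left.
rules = {
    ("start", "0"): ("0", "R", "start"),  # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, keep the carry
    ("carry", "0"): ("1", "L", "halt"),   # absorb the carry
    ("carry", "_"): ("1", "L", "halt"),   # overflow: new leading 1
}

print(run_tm("1011", rules))  # 1011 + 1 = 1100
```

The finite rule table is the whole machine; that finiteness is what makes human emulation (and hence human Turing-completeness) uncontroversial.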

1 point

No, you misread what I said. Of course humans are at least as powerful as a Turing machine; I’m not questioning that. What is unknown is whether Turing machines are as powerful as human cognition. Who says every brain operation is computable (in the classical sense)? Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?

1 point

As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.

I doubt it will remain at “human level” for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.

I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.

1 point

So then how do we define natural general intelligence? I’d argue it’s when something can do better than chance at solving a task without prior training data particular to that task. If a person plays Tetris for the first time, maybe they don’t do very well, but they probably do better than a random set of button inputs.

Likewise with AGI: say you feed an LLM text about the rules of Tetris, but no button presses or actual game data, and then hook it up to play the game. Will it do significantly better than chance? My guess is no, but it would be interesting to try.
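A sketch of how that experiment could be scored, under toy assumptions (the “game” and all names below are hypothetical stand-ins, not Tetris or any real agent): establish the chance baseline by running random button presses, then compare an agent that has only read the rules against it.

```python
# Hypothetical better-than-chance harness. The toy game rewards
# acting on a rule that would be stated in the rules text
# (here: press button t % 2 at tick t). An agent that genuinely
# understood the rules should beat the random-button baseline.

import random
import statistics

def play(policy, steps=100):
    """Score a policy: +1 per tick where its action matches the rule."""
    return sum(1 for t in range(steps) if policy(t) == t % 2)

random.seed(0)  # make the chance baseline reproducible

# Chance baseline: random button presses, many episodes.
baseline = [play(lambda t: random.randint(0, 1)) for _ in range(30)]
# "Informed" agent: plays the rule it was told about.
agent = [play(lambda t: t % 2) for _ in range(30)]

print(statistics.mean(baseline))  # chance level, around 50
print(statistics.mean(agent))     # 100, clearly above chance
```

For a real LLM-plays-Tetris test you’d swap the toy policy for model-generated button presses and add a proper significance test over many episodes, but the shape of the comparison is the same.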

1 point

Any or every task?

4 points

It should be able to perform any cognitive task a human can. We already have AI systems that are better than humans at individual tasks.

0 points

That’s kind of too broad, though; it’s too generic a description.

8 points

The key word here is general, friend. We can’t define “general” any more narrowly, or it would no longer be general.

6 points

That’s the idea: humans can adapt to a broad range of tasks, and so should AGI. Proof of lack of specialization, as it were.

