cross-posted from: https://lemmy.ml/post/20858435
Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes now provide new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.
What they didn't prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It's just that this particular method of inferential training, what they call "AI-by-Learning," is an NP-hard computational problem.
This is exactly what they've proven. They found that if you can solve AI-by-Learning in polynomial time, then you can also solve random-vs-chance (or whatever it was called), a known NP-hard problem, in polynomial time. Ergo, assuming NP-hard problems really are intractable, the current learning techniques that are tractable will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you could use the exact same proof presented in the paper again).
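To spell out the shape of that argument (my own shorthand, not the paper's notation, with "Hard" standing in for whatever they called the known NP-hard problem):

$$
\textsf{Hard} \;\le_p\; \textsf{AI-by-Learning}
\quad\Longrightarrow\quad
\bigl(\textsf{AI-by-Learning} \in \mathsf{P} \;\Rightarrow\; \textsf{Hard} \in \mathsf{P}\bigr)
$$

Taking the contrapositive: if the hard problem really has no polynomial-time algorithm, then neither does AI-by-Learning, and the same reduction would bite any other tractable scheme that fits their definition.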
They merely mentioned these methods to show that it doesn't matter which method you pick. The explicit point is to show that it doesn't matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.
But it's easy to just define general intelligence as something approximating what humans already do.
No, General Intelligence has a set definition that the paper's authors stick with. It's not as simple as "it's a human-like intelligence" or something that merely approximates it.
This isn't my field, and some undergraduate philosophy classes I took more than 20 years ago might not be leaving me well equipped to understand this paper. So I'll admit I'm probably out of my element, and I want to understand.
That being said, I'm not reading this paper with your interpretation.
> This is exactly what they've proven. They found that if you can solve AI-by-Learning in polynomial time, then you can also solve random-vs-chance (or whatever it was called), a known NP-hard problem, in polynomial time. Ergo, assuming NP-hard problems really are intractable, the current learning techniques that are tractable will never result in AGI, and any technique that could must necessarily be considerably slower (otherwise you could use the exact same proof presented in the paper again).
But they've defined the AI-by-Learning problem in a specific way (here's the informal definition):
> Given: A way of sampling from a distribution D.
> Task: Find an algorithm A (i.e., "an AI") that, when run for different possible situations as input, outputs behaviours that are human-like (i.e., approximately like D for some meaning of "approximate").
I read this definition of the problem to be defined by needing to sample from D, that is, to "learn."
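As a toy sketch of how I'm parsing that setup (everything here is my own illustration, not the paper's formalism, and the tiny hard-coded situation space makes this version trivially easy):

```python
# Toy sketch of the AI-by-Learning setup as I read it: we only get to *sample*
# from D, and the task is to produce an algorithm A whose per-situation outputs
# approximate D. Situations and behaviours are made up for illustration.
import random
from collections import Counter

# Stand-in for the human-behaviour distribution D over (situation, behaviour) pairs.
BEHAVIOURS = {
    "greeting": ["wave", "nod"],
    "question": ["answer", "deflect"],
    "insult":   ["object", "ignore"],
}

def sample_from_D():
    """The 'Given': a way of sampling from D."""
    situation = random.choice(list(BEHAVIOURS))
    behaviour = random.choice(BEHAVIOURS[situation])
    return situation, behaviour

def learn_A(num_samples=10_000):
    """The 'Task': from samples alone, build an algorithm A that behaves
    approximately like D for each input situation."""
    counts = {}
    for _ in range(num_samples):
        s, b = sample_from_D()
        counts.setdefault(s, Counter())[b] += 1

    def A(situation):
        c = counts[situation]
        behaviours = list(c)
        weights = [c[b] for b in behaviours]
        return random.choices(behaviours, weights=weights)[0]

    return A

A = learn_A()
print(A("greeting"))  # e.g. "wave" or "nod", roughly at the sampled frequencies
```

With three hard-coded situations this is obviously tractable; the intractability result is about the general problem, where there's no small, enumerable situation space to count over.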
> The explicit point is to show that it doesn't matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI
But the caveat I'm reading, implicit in the paper's definition of the AI-by-Learning problem, is that it's about an entire class of methods: those that learn from a (perfect) sample of intelligent outputs in order to mimic intelligent outputs themselves.
> General Intelligence has a set definition that the paper's authors stick with. It's not as simple as "it's a human-like intelligence" or something that merely approximates it.
The paper defines it:
> Specifically, in our formalisation of AI-by-Learning, we will make the simplifying assumption that there is a finite set of possible behaviours and that for each situation s there is a fixed number of behaviours B_s that humans may display in situation s.
It's just defining an approximation of human behavior and saying that achieving that formalized approximation via inference from training data is intractable. So I'm still seeing a definition of human-like behavior, which would by definition be satisfied by human behavior itself. That's the circular reasoning here, and whether human behavior fits some other definition of AGI doesn't actually affect the proof. They're proving that learning to be human-like is intractable, not that achieving AGI itself is intractable.
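Put in symbols (my paraphrase of the criterion, not the paper's exact statement), the learned algorithm A only has to satisfy something like

$$
\Pr_{s}\bigl[\,A(s) \in B_s\,\bigr] \ge 1 - \varepsilon
$$

and an actual human satisfies that trivially, since whatever behaviour they display in situation s is, by construction, one of the behaviours in B_s. That's the circularity I mean: the benchmark is human-likeness itself.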
I think it's an important distinction, if I'm reading it correctly. But if I'm not, I'm also happy to be proven wrong.