cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes offer new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

9 points

A breakthrough in quantum computing wouldn’t necessarily help. QC isn’t faster than classical computing in the general case; it just happens to be faster for a few specific algorithms (e.g. factoring numbers). It’s not impossible that a QC breakthrough might speed up training AI models (although to my knowledge we don’t have any reason to believe it would), and maybe that’s what you’re referring to, but there’s a widespread misconception that quantum computers are essentially non-deterministic Turing machines that “evaluate all possible states at the same time,” which isn’t the case.
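To put a number on how modest known quantum speedups can be: Grover’s algorithm, the standard quantum result for unstructured search, gives only a quadratic reduction in oracle queries, about (π/4)·√N instead of up to N classically. A rough back-of-the-envelope sketch (the function names here are illustrative, not from any library):

```python
import math

def classical_queries(n_items: int) -> int:
    # Unstructured search classically: worst case checks every item.
    return n_items

def grover_queries(n_items: int) -> int:
    # Grover's algorithm needs roughly (pi/4) * sqrt(N) oracle queries.
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

for n in (10**3, 10**6, 10**9):
    print(f"N={n}: classical ~{classical_queries(n)}, Grover ~{grover_queries(n)}")
```

A quadratic speedup is real but nothing like “evaluating all states at once,” and it says nothing about problems where no quantum algorithm is known to help at all.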


I was more hinting that we’re just not getting there through conventional computational means, and that some completely hypothetical breakthrough somewhere is required. QC is the best guess I have for where it might come from, but it’s still far-fetched.

But yes, you’re absolutely right that QC in general isn’t a magic bullet here.

6 points

Yeah, I thought that might be the case! It’s just something a lot of people have misconceptions about, so I have a bit of a knee-jerk reaction to it.


Haha it’s good that you do though, because now there’s a helpful comment providing more context :)

3 points

The limitation is specific: the primary machine learning technique (the same statistical imitation used by all the chatbots at places claiming to pursue AGI) is NP-hard.


Not just that: they’ve proven it’s not possible using any tractable algorithm, because if it were, you’d run into a contradiction. Their example covers essentially every machine learning algorithm we know, but the proof generalizes.
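To give a feel for why “no tractable algorithm” is such a strong barrier: a polynomial-time algorithm stays usable as inputs grow, while an exponential-time one blows up almost immediately. A toy comparison (purely illustrative arithmetic, not the paper’s actual construction):

```python
def polynomial_steps(n: int) -> int:
    # A tractable algorithm: steps grow polynomially, e.g. n^2.
    return n ** 2

def brute_force_steps(n: int) -> int:
    # Naive search over all 2^n candidate solutions: exponential growth.
    return 2 ** n

for n in (10, 20, 40, 80):
    print(f"n={n}: polynomial {polynomial_steps(n)}, brute force {brute_force_steps(n)}")
```

At n=80 the brute-force count already exceeds 10^24 steps, which is why NP-hardness results are read as “doesn’t scale” rather than “merely slow.”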


Technology

!technology@beehaw.org

