cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

24 points

Sounds really counterintuitive to say that it’s impossible.

The article says that we would run out of computing power, and that’s definitely true for current hardware and software. It’s just that they are being developed all the time, so I think we need to leave that door open. Who knows how efficient things can get within the next decade or century. The article didn’t even mention any fundamental obstacle that would make AGI completely impossible. It’s not like AGI would be violating the laws of physics.

16 points

Whenever I hear someone say that something is impossible with current technology, I think about my grandma. When she was a kid, only some important people had telephones. Doctors, police, etc.

In her lifetime we went from that to today, and, since she’s still alive, even further into the future.

Whenever someone calls something impossible, I think about how far technology will progress in my own lifetime and I know that they’ve got no idea what they’re talking about. (Unless, like you said, it’s against the laws of physics. But sometimes even then I’m not so sure, cause it’s not like we understand those entirely. )

10 points

Let’s put it this way: If in our lifetime we can simulate the intelligence of a vinegar fly as general intelligence, that would be a monumental landmark in AGI. And we’re far, far, far away from it.

As far as the iron age was from the metal alloys used in the Space Shuttle.

Talking about AGI simulating higher intelligence at the level of a dog or a cat, dare I say a pigeon or a crow, is as far-fetched as expecting ancient Egyptians to harness the power of the atom.

6 points

Let’s put it this way: If in our lifetime we can simulate the intelligence of a vinegar fly as general intelligence, that would be a monumental landmark in AGI. And we’re far, far, far away from it.

I get what you mean here and I agree with it, if we’re talking about current “AI”, which isn’t anywhere close. I know, because I’ve programmed some simple “AIs” (Mainly ML models) myself.

But your comparison to ancient Egypt is somewhat lacking, considering we had the aptly named dark ages between then and now.

Lots of knowledge got lost throughout humanity’s history, but ever since the printing press, and more recently the internet, came into existence, this problem has all but disappeared. As long as humanity doesn’t nuke itself back to said dark ages, I reckon we aren’t that far away from AGI, or at least something close to it. Maybe not in my lifetime, but another ~2000 years seems a little extreme.

13 points

That’s not an apt comparison.

More like “we’ll have flying cars 50 years from now.”

10 points

I love the flying car example because it reveals a huge issue with the whole “tech will get better” idea. People are still trying to make flying cars happen, but they keep running into the same fundamental issues: large things that are mechanically complex, energy-intensive, and moving at high speed in crowded urban environments are just too expensive and dangerous.

There is no way around the physical realities, no clever trick or efficiency that will push it over some threshold of practicality.

3 points

Could take a while, but how long? Progress tends to be non-linear, so things can slow down and speed up suddenly. I’m pretty sure we’ll get there sooner or later unless we nuke ourselves to oblivion before that.

If AI development isn’t prioritized, it could take centuries. Maybe we’re still missing some crucial cornerstones we haven’t even thought of yet. Just imagine what it was like to build an airplane in an age when the internal combustion engine hadn’t been invented yet. Maybe we’re still missing something that big. On the other hand, it could also be just around the corner, but I find it unlikely.

14 points

The thing is, we have no idea where technological progress is taking us. So far, most predictions have been wrong. 50 to 60 years ago, people thought we would already be colonizing other planets by now. Barely anyone was able to predict the Internet, smartphones, social media, etc. - the kind of technology that is actually shaping our civilization’s future right now.

Another aspect that I feel is often neglected is the assumption that technological progress will continue forever or at least continue at this current rapid pace. This wasn’t true in the past and we might simply be experiencing a historical anomaly right now, one that could correct itself very soon in the future, either towards stagnation or even regression.

4 points

This wasn’t true in the past and we might simply be experiencing a historical anomaly right now

While our exact pacing might be slightly different from the pure extrapolation, human history has been a long, steady increase in the rate of invention. Access to education has meant that more people are making things, and then the next generations build on top of their work to make even bigger things.

6 points

The space example is extremely apt. It’s possible we could have had tons of space stations, a moon colony, maybe even some other stuff going on around the solar system, asteroid mining, etc. But that would have at least required the space race to continue longer and for spending to grow to create a big enough industry to ensure that outcome, assuming no capacity or time issues. Alas, we took another path.

Something that seems important to us might not matter in even 10 years, or at least, not have a monetary and/or societal incentive to keep advancing.

3 points

In addition, technological development can take unexpected twists and turns. For a while, it looked like analogue technology involving gears was going to solve every problem… until transistors were developed and mechanical calculators were soon forgotten. Also, the development of fertilizers revolutionized farming and food production, which changed the world more than anyone even realized.

5 points

The fact that the human brain is capable of general intelligence tells us everything we need to know about the processing power needed to run one.

7 points

Well, it sets an upper bound on compute requirements at “simulate 10^27 atoms for thirty years.” It remains to be seen whether what we can optimize away ever converges with what’s feasible to build.
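For a sense of scale, here is a rough back-of-envelope version of that upper bound. Every constant is a loose assumption chosen only for illustration: ~10^27 atoms, femtosecond timesteps (typical for molecular dynamics), and an arbitrary ten operations per atom per step:

```python
# Back-of-envelope upper bound on brute-force, atom-level brain simulation.
# All constants below are loose assumptions, not measured values.
ATOMS = 1e27              # order-of-magnitude atom count from the comment above
YEARS = 30
SECONDS = YEARS * 365.25 * 86400
TIMESTEP = 1e-15          # femtosecond steps, typical for molecular dynamics
OPS_PER_ATOM_STEP = 10    # assumed work to update one atom per timestep

total_ops = ATOMS * (SECONDS / TIMESTEP) * OPS_PER_ATOM_STEP
print(f"~{total_ops:.0e} operations")  # roughly 1e52 operations
```

That’s many orders of magnitude beyond anything buildable, which is the point of calling it an upper bound: the open question is how much of it can be optimized away.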

4 points

The article did mention a fundamental obstacle. It said quite clearly that we would run out of resources before we had enough computing power. I suppose you could counter that by arguing that we could discover magic, or magical technology, or a lot of new resources through space exploration.

Of course things get more efficient. But in the past few decades they’ve gotten efficient in predictable, and mostly predicted, ways. It’s certainly possible that totally unexpected things can happen. I could win the lottery next week. Is that the standard? Are you pushing the stance that says AGI is somewhat less likely than winning the lottery or getting struck by lightning, but by golly it’s more than zero, how dare you suggest that it’s anywhere close to zero?

2 points

It really depends on your assumptions. If you assume that software and hardware will stay at the current level, then the article does present a valid point. I would argue that those assumptions are only reasonable in the short term. AGI development does depend on some big technological changes we haven’t seen yet, so it could take decades or even a century, but I wouldn’t call it impossible.

If you assumed that 1950s-style vacuum tube computers were the best thing ever, you could safely say that playing a game like Fortnite with your buddies living in different countries is completely impossible. Modern semiconductors and integrated circuits would have seemed pretty magical in that context.

If we assume that we’re going to be stuck with silicon, you can safely say that AGI just isn’t going to happen with these tools and methods. Since quantum computers aren’t quite useful just yet and optical computers aren’t even in the news in any meaningful way, it seems that we will be stuck with silicon for quite some time. However, in the long term, you can’t really say that for sure. Technological developments have taken sudden and unpredictable jumps from time to time.

6 points

Actually, we do already know that we’re close to a theoretical limit of increasing computing power as we currently know it. Transistors can’t get much smaller before they stop working.

Also, if you’re talking about the article as linked, that is a mere introduction to a much longer paper.

6 points

Possible or not, I don’t think we’ll get to the point of AGI. I’m pretty sure at some point someone will do something monumentally stupid with AI that will wipe out humanity.

9 points

Like wrecking the biosphere in its pursuit.

1 point

Maybe. But I have a feeling it’ll be a dumb single mistake that’ll make someone say “ah, shit” just before we’re wiped out.

When the Soviets trained anti-tank dogs in WW2, they did so on tanks that weren’t running, in order to save fuel: “Their deployment revealed some serious problems… In the field, the dogs refused to dive under moving tanks.” https://en.m.wikipedia.org/wiki/Anti-tank_dog

History is littered with these kinds of mistakes. It would only take one military AI with access to autonomous weapons having a similar issue in its training data to potentially kill us all.

1 point

Why in God’s name would we put weapons that pose a legitimate threat to the whole of humanity under the control of an AI? I just don’t think this one sounds plausible.

1 point
  1. I’m so glad they weren’t “kamikaze” dogs.
  2. I very much expected that story to end with the dogs targeting Soviet tanks.
6 points

The steam engine won’t replace John Henry!!!

6 points

Not really a good comparison. The steam engine was an extant technology at that point. AGI is not, and we really have no idea if or when it will be. One thing is clear though: it is not as close on the horizon as tech bros want us to think it is.


I like SCUMM, but AGI is okay; I just don’t like typing commands.

9 points

Will AI soon surpass the human brain?
If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable.

That doesn’t answer the question.
Whether it will happen is a separate question from when it will happen.
I’d expect we’ll see AGI sometime in the next 20 to 200 years. I think that’s pretty soon. You may not.

6 points

If there were a giant asteroid hurtling toward Earth, set to impact sometime in the next 20 to 200 years, I’d say there’s definitely a need for urgency. A true AGI is somewhat of an asteroid impact in itself.

3 points

A single AGI would not be too different from a human. But it may not take long for AGI to develop ASI, superior to human intelligence.

That’s not an asteroid impact but alien contact.

1 point

A single AGI could be copied into a million copies near-instantly. That would be significant.

2 points

None of those companies are suggesting 20 years. They’re suggesting much less than 10, and selling investors on that promise.

