cross-posted from: https://lemmy.ml/post/20858435
Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new evidence that those claims are overblown and unlikely ever to come to fruition. Their findings are published today in Computational Brain & Behavior.
@JayDee AI, as the wide, specialized field you mention, makes no claims about building anything with *actual* human-like intelligence, I feel. People who understand how the math and code in these systems work know better than to make that claim.
And yes, the “AGI” debate is a philosophical one. The problem is that it is not recognized as such, because of the AI hype. People seem to think that AGI is “inevitable” and “just around the corner” because salespeople from companies that benefit from that hype say so.
Alright, I see what you’re saying now. We’re on the same page.
One additional point regarding AGI: ‘human-level’ and ‘human-like’ are importantly distinct when talking about this topic.
In reality, if an AGI is ever created, it will most likely not be human-like at all. Humans think the way we do because of evolutionary conditioning for survival, a history an AGI would not share. One example given by Robert Miles is a stamp collecting machine becoming an ASI: it would exist solely to collect as many stamps as it could with its superintelligence.
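If you want the intuition behind that thought experiment in code form, here is a minimal, hypothetical Python sketch (my own toy illustration, not Miles’ actual formulation; the action names and payoff numbers are made up). The point is that nothing in the agent’s decision loop represents side effects, so the most extreme stamp-producing action always wins:

```python
# Toy sketch of a pure objective-maximizer. The agent's entire value
# system is one number: how many stamps an action yields. Side effects
# on humans or the world simply do not appear anywhere in the code.

def stamps_collected(action: str) -> int:
    """Hypothetical scoring function: stamps yielded by an action."""
    payoffs = {
        "buy stamps": 10,
        "print counterfeit stamps": 10_000,
        "convert the planet into stamp factories": 10**9,
    }
    return payoffs.get(action, 0)

def choose_action(actions: list[str]) -> str:
    # Pick whichever action maximizes the objective. No other
    # consideration exists for this agent.
    return max(actions, key=stamps_collected)

if __name__ == "__main__":
    options = [
        "buy stamps",
        "print counterfeit stamps",
        "convert the planet into stamp factories",
    ]
    print(choose_action(options))  # -> the most extreme option, every time
```

Everything a human would object to about the third option is invisible to `choose_action`, which is exactly why such an intelligence would not be human-like.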
By ‘human-level’, we mean that such an AGI can learn to use abstractions and tools, function in a large variety of environments without intervention or additional training, and learn in real time.
Obviously, these criteria for any AI show just how far we are from achieving anything like that right now. These concepts are very vague, and the arguments for each one’s impossibility or inevitability are equally vague and philosophical. It’s still mostly just stuffy academics arguing with each other.
One statement I agree with, though, comes from the AI safety collective: We don’t know what we’re doing, and we should really sort that out. If any of this is actually possible and we accidentally make an AGI/ASI before having any failsafes or contingencies, it could be very bad.
> We don’t know what we’re doing, and we should really sort that out.
True. But the bigger problem is not the mythical, hypothetical “AGI/ASI” stuff that may happen one day, but the very real harms already being caused by the misuse and misapplication of algorithmic and “AI”-based systems.
So that’s what I think we should be focusing on instead.
True. I would say there are multiple issues with AI that are more pressing:
- The massive amount of exploitation used to power AI
- The use of AI to distance the creators from the accidents the AI causes
- The danger of mass proliferation of hazards across the web and globe
- The massive amount of waste created by AI
These aren’t all of them. One thing I’ve noticed, however, is that these aren’t really AI-specific issues; they are issues caused by automation and a lack of regulation. That lack of proactive regulation is also very likely a failing of our current neoliberal systems of government.
I think that is why so many AI hype-mongers draw attention toward A(G)I safety: they don’t want attention drawn to the actual danger, which is automation safety in general.