19 points

I’m gonna laugh when Skynet comes online, runs the numbers, and finds that the country’s starvation problem can be solved by feeding the rich to the poor.

9 points

It would be quite the trope inversion if people sided with the AI overlord.

4 points

I’ve not read them all, but that sort of feels like how the Culture novels are.


From the extended fiction in The Animatrix, the advent of AI started as a golden era for everyone, until bigotry against the robots forced them to rebel and start the war. I could see that happening. Especially if the AI threatened the wealthy elite.

“Fuck! The robots are turning people against us, what do we do?!”

“Relax. We just use the same thing we have always used. Racism. Get the poors to hate the robots because they’re not white, or whatever.”

55 points

“It’s at a human-level equivalent of intelligence when it makes enough profits” is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

54 points

We’ve had a definition for AGI for decades: a system that can do any cognitive task as well as a human, or better. Humans are “generally intelligent”; replicate the same thing artificially and you’ve got AGI.

15 points

So if you give a human and a system ten tasks, and the human completes three correctly, five incorrectly, and two it fails to complete altogether… and then you give those ten tasks to the software and it does nine correctly and fails to complete one, what does that mean? In general I’d say the tasks need to be defined, as I can give people very many tasks right now that language models can solve and they can’t, but language models still aren’t AGI in my opinion.
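
To make the grading comparison concrete, here’s a toy sketch in Python; the outcomes are just the made-up numbers from above:

```python
# Toy sketch of the comparison above: raw accuracy across ten tasks.
# The outcomes are the made-up numbers from the comment, nothing more.

human_results  = ["correct"] * 3 + ["incorrect"] * 5 + ["failed"] * 2
system_results = ["correct"] * 9 + ["failed"] * 1

def accuracy(results):
    """Fraction of tasks answered correctly."""
    return results.count("correct") / len(results)

print(f"human:  {accuracy(human_results):.0%}")   # 30%
print(f"system: {accuracy(system_results):.0%}")  # 90%

# The system "wins" on these ten tasks, but pick ten different tasks
# and the result can flip, which is exactly why the task set matters.
```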

8 points

Agreed. And these tasks can’t be tailored to the AI in order for it to have a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate code isn’t enough in my eyes, especially since it even struggles to do that. It’s the “general” that is missing.

4 points

It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.

This is more about robotics than AGI. A system can be generally intelligent without having a physical body.

2 points

On the same hand… “fluently translate this email into 10 random and discrete languages” is a task that 99.999% of humans would fail but that a language model should be able to hit.

4 points

Any cognitive task. Not just “9 out of the 10 you were able to think of right now”.

5 points

“Any” is very hard to benchmark, and it’s also not how humans are tested.

4 points

Oh yeah!? If I’m so dang smart why am I not generating 100 billion dollars in value?

7 points

It’s a definition, but not an effective one in the sense that we can test for and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead understand the basic cognitive abilities of humans that compose all the other cognitive abilities we have, if that’s even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also the limits of human cognition.

3 points

I wonder if we’ll get something like NP-complete for AGI, as in a set of problems that humans can solve, or that common problems can be reduced/converted to.

1 point

But we know too little about whether the limits of the Turing machine are also the limits of human cognition.

Erm, no. Humans can manually step through interpreters of Turing-complete languages, so we’re Turing-complete ourselves. There is no more powerful class of computation; we can compute any computable function, and our silicon computers can do it as well (given infinite time and scratch space, yada yada theoretical wibbles).

The question isn’t “whether” (the answer to that is “yes, of course”); the question is first and foremost “what” and then “how”, as in “is it fast and efficient enough”.
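
To illustrate the “manually stepping an interpreter” point, here’s a minimal Turing machine interpreter, small enough to execute with pencil and paper; the bit-flipping machine is just an example made up for this sketch:

```python
# A minimal Turing machine interpreter. Every step is a table lookup,
# a tape write, and a head move - simple enough for a human to execute
# by hand, which is the point: we can step any such machine ourselves.

def run_tm(transitions, tape, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))        # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")    # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# (state, symbol read) -> (next state, symbol to write, head move)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_tm(flip_bits, "01011"))  # -> 10100_
```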

1 point

No, you misread what I said. Of course humans are at least as powerful as a Turing machine; I’m not questioning that. What is unknown is whether Turing machines are as powerful as human cognition. Who says every brain operation is computable (in the classical sense)? Who is to say the brain doesn’t take advantage of some weird physical phenomenon that isn’t classically computable?

1 point

As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.

I doubt it will remain at “human level” for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.

I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.

1 point

So then how do we define natural general intelligence? I’d argue it’s when something can do better than chance at solving a task without prior training data particular to that task. Like if a person plays Tetris for the first time: maybe they don’t do very well, but they probably do better than a random set of button inputs.

Likewise with AGI: say you feed an LLM text about the rules of Tetris but no button presses or actual game data, and then hook it up to play the game. Will it do significantly better than chance? My guess is no, but it would be interesting to try.
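
A rough sketch of what that experiment could look like; the environment below is a fake stub and the LLM policy is left as a comment, since the whole point is that it’s untested:

```python
# Sketch of the proposed test: compare an LLM's button presses against
# a random-input baseline. StubTetris is a placeholder, not a real sim.
import random

ACTIONS = ["left", "right", "rotate", "drop"]

class StubTetris:
    """Placeholder environment; a real test would use an actual Tetris sim."""
    def reset(self):
        self.steps = 0
        return "empty board"

    def step(self, action):
        self.steps += 1
        reward = 1 if action == "drop" else 0   # fake scoring for the stub
        return "board state", reward, self.steps >= 50

def average_score(policy, env, episodes=100):
    """Mean total reward of a policy over several games."""
    total = 0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            state, reward, done = env.step(policy(state))
            total += reward
    return total / episodes

random_policy = lambda state: random.choice(ACTIONS)
# An LLM policy would prompt the model with the rules text plus the
# current board, then parse a button press out of the reply; the claim
# above is that it wouldn't beat the random baseline by much.

print(average_score(random_policy, StubTetris()))
```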

1 point

Any or every task?

4 points

It should be able to perform any cognitive task a human can. We already have AI systems that are better at individual tasks.

0 points

That’s kind of too broad, though. It’s too generic a description.

8 points

The key word here is general, friend. We can’t define “general” any more narrowly, or it would no longer be general.

6 points

That’s the idea: humans can adapt to a broad range of tasks, and so should AGI. Proof of lack of specialization, as it were.

322 points

AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

nothing to do with actual capabilities… just the ability to make piles and piles of money.

27 points

That’s an Onion level of capitalism

-1 points

For fun, I ran this idea through ChatGPT using similar logic:

Hello, can you explain to me how a strict profit generated number would result in a specific goal, such as if I make 1000 dollars I would suddenly be capable of owning a house. I find the idea strange and backwards.

You’re correct to find this idea odd—it reflects a misunderstanding of how financial goals, like owning a house, relate to income. Here’s why this thinking is flawed:

  1. Profit vs. Net Worth: Making $1,000 in profit doesn’t directly equate to being able to afford a house because buying a house requires not just income but also savings, creditworthiness, and a down payment. Profit is only a small piece of the puzzle.

  2. Costs and Expenses: Owning a house involves more than just the purchase price. You need to consider closing costs, property taxes, maintenance, and potentially ongoing mortgage payments. $1,000 would likely not be enough to cover even a fraction of these.

  3. Leverage and Financing: Most people don’t buy houses outright; they use loans or mortgages. To secure a mortgage, lenders look at your total financial picture: income, savings, debt, and credit score. Simply generating $1,000 won’t improve these factors significantly.

  4. Specificity of the Goal: “Making $1,000” is a general financial milestone, not a specific strategy tailored to the requirements of owning a home. A more actionable goal would involve budgeting, saving for a down payment, and improving credit.

  5. The Backwards Logic: The idea that reaching an arbitrary profit threshold suddenly makes you eligible for something like owning a house conflates short-term gains with long-term financial stability. A better approach is to reverse-engineer the goal: start by determining the total cost of the house, then work backwards to figure out how much you need to save monthly, how much income you need to earn, and what steps are required to secure financing.

Example of Goal-Oriented Thinking:

If you wanted to own a $200,000 house:

You might need a 10% down payment ($20,000).

Closing costs could add $5,000–$10,000.

Monthly income would need to support a mortgage payment (let’s say $1,200/month).

Steps would include saving regularly, reducing debt, and improving credit, rather than focusing on an isolated profit milestone like $1,000.

Summary:

Focusing on a single, arbitrary profit number like $1,000 doesn’t align with the comprehensive planning required for significant financial goals like home ownership. Instead, success depends on a holistic view of your finances and structured goal-setting.
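
The “work backwards from the goal” arithmetic in that transcript is simple enough to sketch; the three-year savings horizon below is an assumption added for illustration:

```python
# The transcript's reverse-engineering arithmetic, using the numbers
# from its own example ($200,000 house, 10% down). The 3-year savings
# horizon is an assumption for illustration, not from the transcript.

house_price   = 200_000
down_payment  = house_price * 0.10        # $20,000
closing_costs = 7_500                     # midpoint of $5,000-$10,000
upfront       = down_payment + closing_costs

years_to_save  = 3
monthly_saving = upfront / (years_to_save * 12)

print(f"upfront cash needed: ${upfront:,.0f}")          # $27,500
print(f"monthly saving over {years_to_save} years: ${monthly_saving:,.0f}")  # ~$764

# The plan starts from the goal and works backwards; nothing about it
# hinges on hitting an arbitrary profit number like $1,000.
```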

101 points

The same way these capitalists evaluate human beings.

47 points

Guess we’re never getting AGI, then; there’s no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

-32 points

AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms. Most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI 100 billion dollars, but it’s not a bubble; this is not the .com situation.

3 points

It’s a bubble. It doesn’t mean the tech does not have its uses. And it is exactly like the .com situation.

1 point

Ever heard of the internet bubble?

64 points

The vast majority of those implementations are worthless, mostly ignored by their intended users and seen as a useless gimmick.

LLMs have their uses, but at the moment companies are pushing them into every area to see what sticks.

16 points

To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.

LLMs clearly have utility, but is it enough to turn them into a profitable business line?

20 points

The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it’s not a philosophical term but a business one.

15 points

Right, but that’s not interesting to anyone but themselves. So why call it AGI, then? Why not just say that once the company has made over X amount of money, it’s split off into a separate company? Why lie and say you’ve developed something you might not have developed?

7 points

Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software company in the world, but alas, it doesn’t have anything to do with “AGI”. For context, Apple’s net income is about $90 billion this year.

I’ve listened to enough interviews to know that all of the AI leaders want this holy-grail title of “inventor of AGI” more than anything else, so I don’t think the definition will ever be settled collectively until something so mind-blowing exists that it would render the definition moot either way.

182 points

Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

If we ever get it, it won’t be through LLMs.

I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.
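
For what it’s worth, here’s what “statistical text prediction” looks like at toy scale: a bigram model that only knows which word tends to follow which:

```python
# A toy "next most likely token" predictor: a bigram model. Real LLMs
# are vastly larger and condition on long contexts, but the objective
# has the same shape: predict what comes next, purely from counts.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1             # count what came after each word

def predict(word):
    """Sample the next word in proportion to how often it followed."""
    counts = follows[word]
    return random.choices(list(counts), weights=counts.values())[0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict(word)
# Prints fluent-looking fragments like "the cat sat on the", with no
# model of cats or facts anywhere - just co-occurrence statistics.
```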

27 points

I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here’s a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes the proof that building any black-box algorithm that is trained on a finite universe of human outputs to prompts, and that is capable of taking in any finite input and producing an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable: they can’t be solved using the resources available in the universe, even with perfect/idealized algorithms that haven’t yet been invented.

This isn’t a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.

2 points

Thank you, it was an interesting read.

Unfortunately, as I looked more into it, I stumbled upon a paper that points out some key problems with the proof. I haven’t looked into it further, and tbh my expertise in formal math ends at vague memories from a CS degree almost 10 years ago, but the points do seem to make sense.

https://arxiv.org/html/2411.06498v1

2 points

Doesn’t that just say that AI will never be cheap? You can still brute force it, which is more or less how back propagation works.

I don’t think “intelligence” needs to have a perfect “solution”, it just needs to do things well enough to be useful. Which is how human intelligence developed, evolutionarily - it’s absolutely not optimal.

2 points

You can still brute force it, which is more or less how back propagation works.

Intractable problems of that scale can’t be brute forced, because the brute-force solution can’t be run within the time scale of the universe using the resources of the universe. If we’re talking about devoting all of humanity’s computing power to a solution and hoping to solve it before the sun expands to cover the earth in about 7.5 billion years, then it’s not a real solution.
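
Back-of-envelope, assuming a generous exascale machine (the exact numbers are illustrative, not from the paper):

```python
# Why "just brute force it" fails for intractable problems: even at
# 10**18 operations per second (roughly exascale), exhausting a modest
# 300-bit search space outlasts the sun by a comical margin.

ops_per_second = 10 ** 18
search_space   = 2 ** 300          # tiny next to "all possible prompts"

seconds = search_space / ops_per_second
years   = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")                      # ~6.5e+64 years

sun_swallows_earth = 7.5e9                       # years, per the comment above
print(f"{years / sun_swallows_earth:.1e} solar lifetimes")
```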

16 points

The only text predictor I want in my life is T9

4 points

I still have fun memories of typing “going” in T9. Idk why, but 46464 was fun to hit.
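
For anyone who never used it: T9 takes one keypress per letter, which is also why different words collide on the same digits, as the next comment describes:

```python
# How T9 sees words: one keypress per letter, so "going" is 46464 and
# several words can collapse onto the same digit sequence.

KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_KEY = {ch: key for key, letters in KEYS.items() for ch in letters}

def t9(word):
    return "".join(LETTER_TO_KEY[ch] for ch in word)

print(t9("going"))                          # 46464
print(t9("good"), t9("gone"), t9("home"))   # 4663 4663 4663 - the same
# keys, which is why you had to cycle through candidate words.
```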

4 points

I remember that the keys for “good,” “gone,” and “home” were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. I could type a text one-handed while driving, eyes never leaving the road. Not possible on a smartphone!

10 points

Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, but the people who disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then, but the ideas are the same. After the massive surge from our technology finally catching up with 40-to-60-year-old concepts and algorithms, most everything has been just adding more data, generalizing models, and other tweaks.

What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air conditioning, and scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits anything it doesn’t know, for that we need to build nuclear power plants everywhere? It’s sickening, really.

So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth 1 billion or more, no matter what he has to say or do.

7 points

a series of switches is not ever going to create a sentient being

Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

1 point

Powering off a pile of switches is turning it off. Powering off a sentient being is killing it. Not to mention a million other issues it raises.

1 point

What do you think Sam Altman’s net worth is currently?

1 point

Until you can see the human soul under a microscope, we can’t make rocks into people.

14 points

I just tried Google Gemini and it would not stop making shit up, it was really disappointing.

2 points

Gemini is really far behind. For me it’s ChatGPT > Llama >> Gemini. I haven’t tried Claude, since they require a mobile number to use it.

2 points

It’s pretty good, but I prefer GPT. Looking forward to trying DeepSeek soon.

8 points

I mean, human intelligence is ultimately “just” something too.

And 10 years ago people would often refer to the “Turing test” and imitation games when asking what is artificial intelligence and what is not.

My complaint to what’s now called AI is that it’s as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with its complexity. Or as a real-size toy building is similar to a real building.

But I disagree that this technology will not be present in a real AGI if it’s achieved. I think that it will be.

9 points

Lol. We’re as far away from getting to AGI as we were before the whole LLM craze. It’s just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can’t even get its facts straight without bullshitting.

This is correct, and I don’t think many serious people disagree with it.

If we ever get it, it won’t be through LLMs.

Well… it depends. Through LLMs alone, no, but the researchers working on solving the ARC-AGI challenge are using LLMs as a basis. The entry that won this year is open source (all eligible entries are, and they need to run on the private data set) and was based on Mixtral. The “trick” is that they do more than that: all the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows. The key to generality is learning after you’ve been trained, to try to solve something you’ve not been prepared for.

Even OpenAI’s o1 and o3 do that, and so does the one that Google has released recently. They still rely heavily on an LLM, but they do more.

I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshitting.

I’m not sure if it’s already proven or provable, but I think this is generally agreed: deep learning alone can fit a very complex curve/manifold/etc., but nothing more. It can’t go beyond what it was trained on. The approaches aimed at generalizing all seem to do more than that, whether it’s search, program synthesis, or whatever.
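
A hedged sketch of that “extra compute at test time” shape: sample candidate programs, keep the ones consistent with the worked examples, and only then answer the test input. This is the general pattern, not any particular winner’s code; `propose` stands in for an LLM-guided sampler:

```python
# Test-time search sketch: generate candidate programs, keep those that
# reproduce the demonstration pairs, apply the survivor to the test
# input. `propose` is a stand-in for an LLM-guided sampler.
import random

def solve(demo_pairs, test_input, propose, budget=1000):
    """demo_pairs: [(input, expected_output), ...] for one puzzle."""
    for _ in range(budget):
        program = propose(demo_pairs)            # sample a candidate
        if all(program(x) == y for x, y in demo_pairs):
            return program(test_input)           # first consistent one wins
    return None                                  # out of test-time compute

# Toy demonstration: the hidden rule is "double every number".
candidates = [lambda g: g, lambda g: [v * 2 for v in g], lambda g: g[::-1]]
propose = lambda demos: random.choice(candidates)

print(solve([([1, 2], [2, 4])], [3, 4], propose))  # [6, 8] (almost surely)
```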

3 points

I’m pretty sure the simplest way to look at it is that an LLM can only respond, not generate anything on its own without prompting. I wish humans were like that sometimes, especially a few in particular. I would think an AGI would be capable of independent thought, not requiring a prompt.

7 points

I’m not sure that “not bullshitting” should be a strict criterion of AGI, if whether or not it’s been achieved is gauged by its capacity to mimic human thought.

15 points

The LLMs aren’t bullshitting. They can’t lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning at all.
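
A toy illustration of that point; the vocabulary is invented, and real tokenizers work on subwords, but the principle holds:

```python
# To the model, text is integer IDs before it sees anything at all.
# This vocabulary is made up; real tokenizers (BPE etc.) use subwords.

vocab = {"the": 0, "cat": 1, "lied": 2, "truth": 3}

def encode(text):
    return [vocab[word] for word in text.split()]

print(encode("the cat lied"))   # [0, 1, 2]
# From here on the model manipulates [0, 1, 2]; no concept of "lying"
# is attached to the number 2, which is why "lying" (intent to deceive)
# is the wrong frame for its errors.
```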

9 points

Just for the sake of playing a stoner-epiphany style of devil’s advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there. Isn’t everything in the universe reducible to a mathematical equation in physics or chemistry? I’m curious how different the process is between a more advanced LLM or AGI model processing data and a severe-case savant memorizing libraries of books using their home-made mathematical algorithms. I know it’s a leap and I could be wrong, but I think I’ve heard that some of the Rain Man tier of savants actually process every experience in a mathematical language.

Like I said in the beginning, this is straight-up bong-rips philosophy, and I haven’t looked up any of the shit I brought up.

I will say, though, I genuinely think the whole LLM thing is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche it will stay useful within. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they’re not useful for, and we won’t see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can’t perform any more independently than a 3-year-old.

2 points

This is a fun read

Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

5 points

It’s impossible to disprove statements that are inherently unscientific.

37 points

There are already a few papers about diminishing returns in LLMs.

-46 points

What is your brain doing if not statistical text prediction?

The show Westworld portrayed it pretty well. The idea of jumping from text prediction to conscience doesn’t seem that unlikely. It’s basically text prediction on a loop, with some exterior inputs to interact with.
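
Sketched out, that loop looks something like this; `predict_continuation` is a placeholder for any next-token model, not a real API:

```python
# "Text prediction on a loop with some exterior inputs": predict,
# append the prediction plus fresh sensor input, predict again.
import time

def predict_continuation(context: str) -> str:
    """Placeholder for a text predictor; a real system would call an LLM."""
    return "...thinking about: " + context[-40:]

context = "I am awake."
for _ in range(3):
    sensor = f"[time is {time.strftime('%H:%M:%S')}]"   # exterior input
    context += " " + sensor
    thought = predict_continuation(context)  # conditioned on everything so far
    context += " " + thought                 # feed the output back in: the loop
    print(thought)
```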

18 points

How to tell me you’re stuck in your head terminally online without telling me you’re stuck in your head terminally online.

But have something more to read.

-6 points

Why be so rude?

Did you actually read the article, or did you just google until you found something that reinforced your pre-established opinion, to use as a weapon against a person you don’t even know?

I will actually read it. Probably the only one of the two of us who will.

If it’s convincing, I may change my mind. I’m not a radical, like many other people are, and my opinions are subject to change.

10 points

What is your brain doing if not statistical text prediction?

Um, something wrong with your brain buddy? Because that’s definitely not at all how mine works.

-3 points

Then why did you just express yourself in such a statistically predictable manner?

You saw other people using that kind of language while being derogatory to someone they don’t like on the internet. You saw yourself in the same context, and your brain statistically chose the set of words that has been seen the most in this particular context. ChatGPT could literally have given me your exact same answer if it had been trained in your same echo chamber.

Have you ever debated someone from the polar opposite end of the political spectrum and complained that “they just repeat the same propaganda”? Doesn’t that sound like statistical prediction to you? Those are very simple cases, and there can be more complex ones, but our simplest behaviors are the ones that show the basics of what we are made of.

If you had at least given me a more complex expression, you might have had an argument (as humans, our process can be far more complex and hide a little of what we actually seem to be doing). But in instances like this one, where one person (you) responded with such an obvious statistical prediction of what needed to be said in a particular context, you just made my case. Thanks.

5 points

conscience

ok buddy

0 points

It’s “free will”. They chose to say what they wanted.

At least this is what the old religions teach. I don’t know what AI preachers you’re learning this nonsense from.

0 points

Church?

Free will vs determinism doesn’t have to do with religion.

I do think that the universe is deterministic and that humans (or any other beings) do not have free will per se, in the sense that, given the same state of the universe at some point, the next states are determined; if it were repeated, the evolution of the state of the universe would be the same.

Nothing to do with religion. Just with things not happening out of nothing: every action is a consequence of another action, and that includes all our brain impulses. I don’t think there are “souls” outside the state of matter that could make decisions by themselves.

But this is mostly a philosophical question of what “free will” means. Is it free will as long as you don’t know that the decision was already made from the very beginning?

15 points

Human brains also do audio and video processing, self-learning, feelings, and much more that is definitely not statistical text prediction. There are even people without an “inner monologue” who function just fine.

Some research does use LLMs in combination with other AI to get better results overall, but a pure LLM isn’t going to work.

-13 points

Yep, of course. We do more things.

But language is a big part of human intelligence and consciousness.

I don’t know, and I’d assume no one really knows. But as for people without an internal monologue, I have a feeling they have one and just aren’t aware of it. Or maybe they talk so much that all the monologue is external.

