46 points

Translation: “We told everyone we could turn glorified autocomplete into artificial general intelligence and then they gave us a bunch of money for that, so now we actually have to try to deliver something and we’ve got no idea how.”

14 points

How about giving billions to those folks simulating the brains of small worms and fruit flies, so we can have a very slow “brain in a bottle” that will be equally useless.

18 points

You know what? Sure, fuck it, why not? I don’t even have a problem with OpenAI getting billions of dollars to do R&D on LLMs. They might actually turn out to have some practical applications, maybe.

My problem is that OpenAI basically stopped doing real R&D the moment ChatGPT became a product, because now all their money goes into their ridiculous backend server costs and putting increasingly silly layers of lipstick on a pig so that they can get one more round of investment funding.

AI is a really important area of technology to study, and I’m all in favour of giving money to the people actually studying it. But that sure as shit ain’t Sam Altman and his band of carnival barkers.

3 points

Carnival barkers 🤣

4 points

I mean this respectfully. The character Everett True is known as someone who tells the truth even when it’s not popular.

14 points

We’ve known this for a while. LLMs are a dead end. Lots of companies have tried throwing more data at them, but it’s becoming clear that the differences between one model and the next are getting too small to notice, and none of them fix the major underlying issue: chat models keep spreading BS because they can’t differentiate between right and wrong.

4 points

So an infant technology is showing a glimmer of maturation?

1 point

And the thing is, the architecture of LLMs was already a huge breakthrough in the field. Now these companies are basically trying to come up with another one by (and that’s just my guess) throwing tons of cash at it and hoping for the best. I think that’s like trying to come up with a building material that outperforms steel-reinforced concrete in every respect. Just because it was discovered by some guy doesn’t mean multi-billion-dollar companies can force something better with all the money in the world.

7 points

Yeah, well Alibaba nearly (and sometimes) beat GPT-4 with a comparatively microscopic model you can run on a desktop. And released a whole series of them. For free! With a tiny fraction of the GPUs any of the American trainers have.

Bigger is not better, but OpenAI has also just lost their creative edge, and all Altman’s talk about scaling up training with trillions of dollars is a massive con.

o1 is kind of a joke; CoT and reflection strategies have been known for a while. You can do it for free yourself, to an extent, and some models have tried to finetune this in: https://github.com/codelion/optillm

But one sad thing OpenAI has seemingly accomplished is to “salt” the open LLM space. There’s way less hacky experimentation going on than there used to be, which makes me sad, as many of its “old” innovations still run circles around OpenAI.
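For flavor, the “do it yourself” CoT/reflection trick from the comment above is mostly just prompt templating. Here’s a minimal sketch in Python; the prompt wording and the `ask` callback are illustrative assumptions, not optillm’s actual API:

```python
# Minimal sketch of chain-of-thought + self-reflection via plain prompting.
# Prompt wording here is made up for illustration; real setups are fancier.

def cot_prompt(question: str) -> str:
    """Wrap a question so the model reasons step by step first."""
    return (
        f"Question: {question}\n"
        "Think through this step by step, then state your final answer "
        "on a line starting with 'Answer:'."
    )

def reflection_prompt(question: str, draft: str) -> str:
    """Ask the model to critique and revise its own draft answer."""
    return (
        f"Question: {question}\n"
        f"Draft answer:\n{draft}\n"
        "Check the draft for mistakes, then give a corrected final answer "
        "on a line starting with 'Answer:'."
    )

def answer_with_reflection(question: str, ask) -> str:
    """`ask` is any callable that sends a prompt to an LLM and returns text."""
    draft = ask(cot_prompt(question))          # first pass: reason out loud
    revised = ask(reflection_prompt(question, draft))  # second pass: self-check
    # Pull out the text after the last 'Answer:' marker, if present.
    for line in reversed(revised.splitlines()):
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return revised.strip()
```

Two round trips instead of one, so it costs extra tokens, but it’s model-agnostic: any local or hosted LLM can be plugged in as `ask`.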

3 points

“Alibaba (LLM)” … is it this?
Qwen2.5: A Party of Foundation Models!
https://qwenlm.github.io/blog/qwen2.5/

2 points

Yep.

32B fits on a “consumer” 3090, and I use it every day.

72B will fit neatly on 2025 APUs, though we may have an even better update by then.

I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.
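Rough back-of-envelope for why 32B fits a 24 GB 3090 while 72B doesn’t: at roughly 4–5 bits per weight (a typical quantization), weights alone come to under 20 GB for 32B. The bits-per-weight figure below is an assumption, and it ignores KV cache and runtime overhead:

```python
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of a model's weights alone, in gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# 32B at ~4.5 bits/weight -> ~18 GB: fits a 24 GB RTX 3090 with
# some room left for context.
print(round(weight_gb(32, 4.5), 1))   # → 18.0

# 72B at the same quant -> ~40.5 GB: needs multiple GPUs or a
# big-unified-memory machine.
print(round(weight_gb(72, 4.5), 1))   # → 40.5
```

Context length eats into the remaining headroom via the KV cache, which is why heavier quantization or smaller contexts are common on single-GPU setups.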

2 points

BTW, as I wrote that post, Qwen 32B coder came out.

Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.

2 points

Great news 😁🥂, someone should make a new post on this!

17 points

Predictable outcome for anyone not wallowing in wishful belief.
