25 points

Yann and co. just dropped Llama 3.1. Now there’s an open-source model on par with OAI and Anthropic, so who the hell is going to pay these nutjobs for API access when you can get roughly the same quality for free, without the risk of handing your data to a third party?

These chuckle fucks are cooked.

23 points

For “free”, except you need thousands of dollars of hardware upfront, plus a full hardware/software stack to maintain.

This is like saying Azure is cooked because you can rack-mount your own PC.

17 points

OpenAI is losing money on every user and has no moat other than subsidies from VCs, but that’s ok because they’ll make it up in volume.

11 points

That’s mostly true. But if you have a GPU for playing video games on a PC running Linux, you can easily use Ollama to run the 8-billion-parameter Llama 3 locally without any real overhead.
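For anyone curious what “easily” means in practice, here’s a minimal sketch, assuming Ollama is installed and `ollama pull llama3:8b` has already fetched the weights (the model tag and prompt are illustrative, not prescriptive):

```python
import requests

# Ollama serves a local HTTP API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3:8b") -> str:
    """Send one prompt to the locally running model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("Explain why local inference keeps my data off third-party servers."))
```

The endpoint speaks plain HTTP, so nothing here is Python-specific; any language with an HTTP client works the same way.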

8 points

The whole point of using these things (besides helping summon the Acausal Robot God) is for non-technical people to get immediate results without doing any of the hard stuff, such as, I don’t know, personally maintaining and optimizing an LLM server on their Linux gaming(!) rig. And that’s before you notice how slow inference gets as the context window fills up, or how complicated summarization becomes once a document passes a certain length, and so on and so forth.
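To make that last complaint concrete, here’s a rough sketch of the chunk-then-merge dance that long documents force on you, reusing the hypothetical `ask()` helper from the Ollama sketch above (the chunk size and prompts are arbitrary assumptions):

```python
def summarize_long(text: str, chunk_chars: int = 8000) -> str:
    """Naive map-reduce summarization: summarize each chunk, then summarize the summaries."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [ask(f"Summarize this passage:\n\n{chunk}") for chunk in chunks]
    if len(partials) == 1:
        return partials[0]
    # Anything that spans a chunk boundary gets mangled here -- and every extra
    # round trip through the model is another slow, lossy step.
    return ask("Merge these partial summaries into one coherent summary:\n\n"
               + "\n\n".join(partials))
```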

13 points

Just an off-the-cuff prediction: I fully anticipate AI bros are gonna put their full focus on local models post-bubble, for two main reasons:

  1. Power efficiency - whilst local models are hardly power-sippers, they don’t require the planet-killing, money-burning server farms that the likes of ChatGPT depend on (and which have helped define AI’s public image, now that I think about it). As such, they won’t need VC billions to keep them going - just some dipshit with cash to spare and a GPU to abuse (and there’s plenty of those out in the wild).

  2. Freedom/Control - Compared to ChatGPT, DALL-E, et al., which are pretty locked down in an attempt to keep users from embarrassing their parent corps or inviting public scrutiny, any local model will answer whatever dumbshit question you ask or make whatever godawful slop you want, no questions asked, no prompt injection/jailbreaking needed. For the kind of weird TESCREAL nerd that AI attracts, the benefits are somewhat obvious.

7 points

Azure/AWS/other cloud computing services that host these models are absolutely going to continue to make money hand over fist. But if the bottleneck is the infrastructure, then what’s the point of paying an entire team of engineers $650K a year each to recreate a model that’s qualitatively equivalent to an open-source one?

-7 points

Dumbass detected

9 points

Incoming ban from site detected.

15 points

Correct, they’re 👆 here

8 points

What’s your point?

