96 points

Technically correct ™

Before you get your hopes up: Anyone can download it, but very few will be able to actually run it.

23 points

What are the resource requirements for the 405B model? I did some digging but couldn’t find any documentation during my cursory search.

40 points

Typically you need about 1GB graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B parameter model. Ouch.

Edit: you can try quantizing it. This reduces the amount of memory required per parameter to 4 bits, 2 bits or even 1 bit. As you reduce the size, the performance of the model can suffer. So in the extreme case you might be able to run this in under 64GB of graphics RAM.
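To put rough numbers on that rule of thumb, here’s a quick back-of-the-envelope sketch (weights only; the KV cache and activations need extra memory on top of this):

```python
# Weight-only memory estimate for a 405B-parameter model at different precisions.
params = 405e9  # 405 billion parameters

for bits in (16, 8, 4, 2, 1):
    gb = params * bits / 8 / 1e9  # bytes per parameter * parameter count, in GB
    print(f"{bits:>2}-bit: ~{gb:,.0f} GB")

# Prints roughly: 810, 405, 202, 101 and 51 GB.
```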

13 points

Or you could run it via CPU and RAM at a much slower rate.

8 points

At work we have a small cluster totalling around 4 TB of RAM.

It has 4 cooling units, a cubic metre of PSUs, and it must take something like 30 m² of space.

4 points

When the 8-bit quants hit, you could probably lease a 128 GB system on RunPod.

3 points

Can you run this in a distributed manner, like with Kubernetes and lots of smaller machines?

2 points

According to Hugging Face, you can run a 34B model using at most 22.4 GB of VRAM. That fits on an RTX 3090 Ti.

1 point

You mean my 4090 isn’t good enough 🤣😂

1 point

Hmm, I probably have that much distributed across my network… maybe I should look into some way of distributing it across multiple GPUs.

Frak, just counted and I only have 270 GB installed. Approximately 40 GB more if I install some of the deprecated cards in any spare PCIe slots I can find.

12 points

405B ain’t running local unless you’ve got a proper setup that is enterprise grade lol

I think 70B is possible but I haven’t found anyone confirming it yet

Also would like to know the specs from whoever did it

8 points

I’ve run quantized 70B models on CPU with 32 gigs of RAM, but it is very slow.

2 points

I regularly run Llama 3 70B unquantized on two P40s and CPU at like 7 tokens/s. It’s usable but not very fast.

6 points

As a general rule of thumb, you need about 1 GB per 1B parameters, so you’re looking at about 405 GB for the full size of the model.

Quantization can compress it down to 1/2 or 1/4 that, but “makes it stupider” as a result.

12 points

This would probably run on an A6000, right?

Edit: nope, I think I’m off by an order of magnitude

2 points

“an order of magnitude” still feels like an understatement LOL

My 35B models come out at like Morse code speed on my 7800 XT, but at least it does work?

8 points

When the RTX 9090 Ti comes, anyone who can afford it will be able to run it.

3 points

That doesn’t sound like much of a change from the situation right now.

4 points

So does OSM data. Everyone can download the whole earth, but serving it and providing routing/path planning at scale takes a whole other set of skills and resources. It’s a good thing that they are willing to open-source their model in the first place.

19 points

Wake me up when it works offline.

“The Llama 3.1 models are available for download through Meta’s own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time.”

33 points

WAKE UP!

It works offline. When you use it with ollama, you don’t have to register or agree to anything.

Once you have downloaded it, it will keep on working; Meta can’t shut it down.

1 point

Well, yes and no. See the other comment, 64 GB VRAM at the lowest setting.

9 points

Oh, sure. For the 405B model it’s absolutely infeasible to host it yourself. But for the smaller models (70B and 8B), it can work.

I was mostly replying to the part where they claimed meta can take it away from you at any point - which is simply not true.

14 points

It’s available through ollama already. I am running the 8B model on my little server with its 3070 as of right now.

It’s really impressive for an 8B model.

1 point

Intriguing. Is that an 8 GB card? Might have to try this after all.


Yup, 8 GB card.

It’s my old one from the gaming PC after switching to AMD.

It now serves as my little AI hub and Whisper server for Home Assistant.

12 points

I’m running 3.1 8B as we speak via ollama, totally offline, and gave my info to nobody.

https://ollama.com/library/llama3.1
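If you want to script against it instead of using the CLI, here’s a minimal sketch with the ollama Python client (assuming `pip install ollama` and that you’ve already pulled the model with `ollama pull llama3.1:8b`):

```python
# Minimal sketch: chat with a locally running ollama server, fully offline.
import ollama

response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "In one sentence, what does quantization do?"}],
)
print(response["message"]["content"])
```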

4 points

I was able to set up a small one via Open WebUI.

It did ask me to make an account, but I didn’t see any pinging home when I did it.

What am I missing here?

1 point

Through Meta…

That’s where I stop caring.

16 points

Yo, this is big. In both that it is momentous, and holy shit, that’s a lot of parameters. How many GB is this model?? I’d be able to run it if I had a few extra $10k bills lying around to buy the required hardware.

22 points

It’s around 800 GB.

5 points

God damn.

3 points

That’s some thick model

1 point

Time to buy a Threadripper and 800 GB of RAM so that I can run this model at 1 token per hour.

11 points

Kind of petty of Zuck not to roll it out in Europe due to the Digital Services Act… But also kind of weird, since it’s open source? What’s stopping anyone from downloading the model and creating a web UI for European users?

2 points

Did anyone get 70B to run locally?

If so, what hardware specs?

5 points

Afaik you need about 40 GB of VRAM for a 70B model.

3 points

Can’t you offload some of it to RAM?

7 points

Same requirements, but much slower.
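If you go the llama.cpp route (e.g. via the llama-cpp-python bindings), the split is just a knob: you say how many layers to keep in VRAM and the rest of the model sits in system RAM. A minimal sketch, where the GGUF filename and layer count are placeholders you’d adapt to your own setup:

```python
# Minimal sketch with llama-cpp-python: keep some layers in VRAM, the rest in RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-70b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=30,  # layers offloaded to the GPU; remaining layers stay in system RAM
    n_ctx=4096,       # context window size
)

out = llm("Q: Why is CPU offloading slower than pure VRAM inference? A:", max_tokens=64)
print(out["choices"][0]["text"])
```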
