I’d like to self-host a large language model (LLM).
I don’t mind if I need a GPU and all that; at least it will be running on my own hardware, and it will probably even be cheaper than the $20/month everyone is charging.
What LLMs are you self hosting? And what are you using to do it?
My (docker based) configuration:
Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1
Docker: https://docs.docker.com/engine/install/
Nvidia Runtime for docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
Open WebUI: https://docs.openwebui.com/
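For reference, the stack above can be sketched as a docker-compose file. This is a minimal sketch, assuming the standard `ollama/ollama` and `ghcr.io/open-webui/open-webui` images and that the NVIDIA Container Toolkit from the link above is already installed; ports, volume names, and the `count: all` GPU reservation are illustrative choices, not the only way to wire it up.

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia        # requires the NVIDIA Container Toolkit
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                 # web UI at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

After `docker compose up -d`, you can pull a model with something like `docker compose exec ollama ollama pull llama3.1` and then pick it in the web UI.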
I run mistral-nemo locally on my 1070 Ti.
If you don’t need to host but can run locally, GPT4ALL is nice, has several models to download and plug and play with different purposes and descriptions, and doesn’t require a GPU.
I second that. Even my lower-midrange laptop from 3 years ago (8 GB RAM, integrated AMD GPU) can run a few of the smaller LLMs, and it’s true that you don’t even need a GPU, since they can run in RAM. Depending on how much RAM you have and which GPU, you might even find models performing better in RAM than on the GPU. Just keep in mind that when a model says, for example, “8 GB memory required”, you can’t run it with 8 GB of RAM, because your operating system and other applications are also using that RAM. If you have 8 GB of video memory on your GPU, though, you should be golden (I think).
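The rule of thumb above comes down to simple arithmetic: the weights need roughly (parameter count × bytes per parameter), plus some headroom for the KV cache and runtime buffers. A rough sketch, where the 20% overhead figure and the example sizes are assumptions for illustration, not exact requirements:

```python
def approx_model_gib(params_billion: float,
                     bytes_per_param: float,
                     overhead_frac: float = 0.2) -> float:
    """Rough memory needed to load a model: weight size in GiB plus an
    assumed ~20% overhead for KV cache and runtime buffers."""
    weights_gib = params_billion * 1e9 * bytes_per_param / 2**30
    return weights_gib * (1 + overhead_frac)

# An 8B-parameter model quantized to 4 bits (~0.5 bytes/param) fits in
# well under 8 GB, while the same model at fp16 (2 bytes/param) does not:
print(round(approx_model_gib(8, 0.5), 1))   # ~4.5 GiB
print(round(approx_model_gib(8, 2.0), 1))   # ~17.9 GiB
```

So a 4-bit 8B model is comfortable on an 8 GB GPU, but the fp16 version of the same model is not, which is why the quantization level matters as much as the parameter count.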
Ollama with llama3.2, deepcode, and a bunch of others.
Using a GPU, but man, these tools are picky; they mostly want Nvidia GPUs.
Do NOT be afraid to run on the CPU. It’s slow, but for a single user it’s actually mostly fine.