Instructions here: https://github.com/ghobs91/Self-GPT
If you’ve ever wanted a ChatGPT-style assistant but fully self-hosted and open source, Self-GPT is a handy script that bundles Open WebUI (chat interface front end) with Ollama (LLM backend).
- Privacy & Control: Unlike ChatGPT, everything runs locally, so your data stays with you—great for those concerned about data privacy.
- Cost: Once set up, self-hosting avoids monthly subscription fees. You’ll need decent hardware (ideally a GPU), but there’s a range of model sizes to fit different setups.
- Flexibility: Open WebUI and Ollama support multiple models and let you switch between them easily, so you're not locked into one provider (see the sketch after this list).
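
To make the flexibility point concrete, here's a minimal sketch of model switching against Ollama's REST API (it listens on port 11434 by default). The model names are just examples; you'd pull whatever fits your hardware first with `ollama pull <name>`.

```python
# Minimal sketch: same prompt, two different local models via Ollama's
# REST API -- no account or API key needed.
import requests

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Example model names -- substitute whatever you've pulled locally.
print(ask("llama3", "Summarize why self-hosting helps privacy."))
print(ask("mistral", "Summarize why self-hosting helps privacy."))
```

Open WebUI does the same thing through a model dropdown in the chat UI, so you can compare answers from different models without touching the API yourself.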
Wish I could accelerate these models with an Intel Arc card; unfortunately, Ollama seems to support only Nvidia.
They support AMD as well.
https://ollama.com/blog/amd-preview
Also check out this thread:
https://github.com/ollama/ollama/issues/1590
Seems like you can run llama.cpp directly on Intel Arc through Vulkan, but there are still some hurdles for Ollama.
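
For anyone who wants to try the Vulkan route today, here's a rough sketch using the llama-cpp-python bindings rather than the llama.cpp CLI. It assumes the package was built with llama.cpp's Vulkan backend enabled (the `GGML_VULKAN` CMake flag), and the model path is a placeholder for whatever GGUF file you have locally; check the llama-cpp-python docs for the exact build steps on your platform.

```python
# Sketch: run a GGUF model on llama.cpp's Vulkan backend through the
# llama-cpp-python bindings. Assumes the package was installed with
# Vulkan enabled, e.g.:
#   CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    # Placeholder path -- point this at any GGUF model you have.
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU (here: Arc, via Vulkan)
)

out = llm("Q: Why self-host an LLM? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

No guarantees on performance on Arc specifically, but it's a way to exercise the Vulkan path while the Ollama support discussion in that issue plays out.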