I have been using ChatGPT because it was the big name early on, and I have never really looked into any alternatives. With the rapid growth of AI-assisted services, I am curious to hear what others are using.
Asked several to write a C implementation of some basic networking stuff.
ChatGPT: needed to refine my input, but got reasonable output. Complete answers, just compile and run.
Google: the output was just a few snippets, nothing that could be used as-is.
MSFT: terrible output, and (no surprise here) the compiled code crashed with null pointer dereferences etc. The worst answers ever.
For simple problems (programming low-level microcontrollers), my go-to will be ChatGPT every time.
Google should get its act together, Microsoft can exit the stage.
There are no good LLMs.
I’m sure many don’t have the hardware to run models locally, but for most things a local model will probably work just as well as the full-size ones, plus you can modify them and experiment. Start with Ollama as the base to run them, and see what works best. I tend to primarily use the edited, uncensored versions of llama3, like the Neural Daredevil variants.
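If you want to try this, the Ollama CLI makes the first run a one-liner once it is installed (see ollama.com). The base `llama3` tag is real; fine-tuned community models are pulled the same way by whatever tag their publisher lists.

```shell
# pull the base llama3 model and start an interactive chat
ollama run llama3

# download a model without starting a chat
ollama pull llama3

# see which models you already have locally
ollama list
```

Community fine-tunes work identically; you just substitute their published tag for `llama3`.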
But just remember that any model, even the biggest and best, is at its core a predictor. This works great for some uses, not so well for others. Don’t use a screwdriver as a hammer… at least not until they merge them into something that does both well.
No one mentioned Phind or Perplexity, both are niiice.
Llama3 local is pretty good