JARVIS is AI. LLMs are superpowered autocorrect. We don’t have anything close to AI yet.
To answer your question, I like to use this adage: “Technology is neither good nor bad; nor is it neutral.” - Melvin Kranzberg
I also like to tie in: ‘A hammer can be used to build a house or to destroy one. It depends on the user.’
Yes, it would be better, but unless I saw the code, understood it, and verified that it is actually the code running, I would not trust it as much as I would need to trust a system like JARVIS.
Any tool, in human hands, will be used for evil. The problem is humans.
Home Assistant, Whisper, Piper, openWakeWord (set to “jarvis”), Ollama.
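For anyone curious how those pieces fit together: in a Home Assistant voice setup, openWakeWord listens for the wake word, Whisper does speech-to-text, Ollama generates the reply, and Piper speaks it back. Here's a rough Python sketch of that flow. The component functions are stand-in placeholders, not the real APIs; in practice Home Assistant's Assist pipeline wires these stages together for you.

```python
from typing import Optional

def detect_wake_word(audio: bytes) -> bool:
    # Placeholder for openWakeWord scoring an audio frame
    # against the "jarvis" model.
    return b"jarvis" in audio

def transcribe(audio: bytes) -> str:
    # Placeholder for Whisper speech-to-text.
    return audio.decode(errors="ignore")

def generate_reply(prompt: str) -> str:
    # Placeholder for a local Ollama call, e.g.
    # ollama.chat(model="llama3", messages=[...]).
    return f"You said: {prompt}"

def speak(text: str) -> bytes:
    # Placeholder for Piper text-to-speech synthesis.
    return text.encode()

def assist_pipeline(audio: bytes) -> Optional[bytes]:
    # Wake word -> STT -> LLM -> TTS, mirroring the stack above.
    # Returns synthesized audio, or None if the wake word
    # wasn't heard.
    if not detect_wake_word(audio):
        return None
    text = transcribe(audio)
    reply = generate_reply(text)
    return speak(reply)
```

The point of the sketch is just the ordering: nothing downstream runs until the wake word fires, which is also why the wake-word stage is the only part that listens continuously.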
But it’s no Iron Man-level JARVIS.