They’re already deployed and they’re less than helpful, because LLMs are bullshitting machines.
I already use LLMs to problem-solve issues I'm having, and they're typically better than punching questions into Google. I admit I've had an LLM hallucinate once while it was trying to solve a problem for me, but the vast majority of the time it has been quite helpful. That's been my experience at least. YMMV.
If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.
If all you want is something trivial that’s been done by enough people beforehand, it’s no surprise that something approaching correct gets parroted back at you.
That’s 99% of what I’m looking for. If I’m figuring something out by myself, I’m not looking it up on the internet.
I'm an engineer and I've found LLMs great for helping me understand an issue. When you read something online, you have to translate what the author is saying into your own way of thinking, and I've found LLMs are much better at re-framing information to match my inner dialog. I often find them far more useful than Google searches when I'm trying to track down information.
> If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.
I'm specifically claiming that they're bullshit machines, i.e. they generate synthetic text without context or understanding. My experience with search engines and telephone support has been way better than anything an LLM has fed me.
There have already been cases where phone operators were replaced with LLMs, which then gave dangerous advice to anorexic patients.
I understand their limitations, but you’re overselling the negative. They’re fucking awesome for what they can do, but they have drawbacks that you must be aware of. Just as it’s lame to be an AI fanboi, it’s equally lame to be an AI luddite.