This. It's like any tool: it comes down to the skill/knowledge/experience of the user to evaluate the result.
But as soon as management/government start seeing it as a cheat to reduce hiring, it becomes a danger.
I think the issue with this particular tool is it can authoritatively provide incorrect, entirely fabricated information or a gross misinterpretation of factual information.
In any field I’ve worked in, I’ve often had to refer to reference material, as I simply can’t remember everything. I have to use my experience and critical thinking skills to determine whether I’m using the correct material. What I have not had to do is further determine whether my reference material simply made up a convincing, “correct sounding” answer. Yes, there are errors and corrections to material over time, but never has an entire reference been suspect and yet remained in use.
I maintain that AI companies could improve their stuff a huge amount by simply forcing it to prefix “I think” to all statements. It’s sorta like how calculators shouldn’t show more digits than they can confidently produce: if the precision is only 4 decimals, then don’t show 8.
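The calculator analogy can be sketched in a couple of lines. This is a hypothetical `display` helper, just to illustrate the idea of never showing more decimals than you can vouch for:

```python
def display(value: float, confident_decimals: int = 4) -> str:
    # Round the output to the decimals we can actually stand behind,
    # instead of dumping the full float representation.
    return f"{value:.{confident_decimals}f}"

print(display(1 / 3))  # shows 0.3333, not 0.3333333333333333
```

The same principle applied to an AI would mean surfacing the model's uncertainty instead of hiding it behind a confident tone.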
Imagine an AI with a model trained exclusively on a specific set of medical books, the same set of books all doctors already have access to. While there’s still room for error, it would guide the doctor to a very familiar reference. No internet junk, social media, etc.
Exactly as you say. It’s a tool, not a replacement. Certainly not in healthcare anyway.
I would prefer this to no healthcare until it’s too late which seems to be the option in places with free healthcare.