god, so this is actually the best the AI researchers can do with the tools they’ve shit out into the world without giving any thought to failure cases or legal liability (beyond their manager on slackTeams claiming it’s been taken care of)
so fuck it, let’s make the defamation machine a non-optional component of windows. we’ll just make it a P0 when someone who could actually get us in legal trouble complains! everyone else is a P2 that never gets assigned.
> so this is actually the best the AI researchers can do
Highly unlikely. This is what corporations' public-facing products can do.
are there mechanisms known to researchers that Microsoft’s not using that can prevent this type of failure case in an LLM without resorting to whack-a-mole with a regex?
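For context, here's a minimal sketch of the "whack-a-mole with a regex" approach being dismissed: a blocklist pattern that catches one phrasing and misses the next. The pattern and function are illustrative, not anything Microsoft actually ships.

```python
import re

# Toy blocklist of defamatory phrasings. Each new phrasing found in the
# wild means another pattern bolted on: the whack-a-mole problem.
BLOCKLIST = re.compile(r"\b(is a criminal|committed fraud)\b", re.IGNORECASE)

def naive_filter(text: str) -> bool:
    """True if the draft output trips the blocklist."""
    return bool(BLOCKLIST.search(text))

print(naive_filter("John Doe is a criminal."))       # True: caught
print(naive_filter("John Doe has a criminal past.")) # False: trivially bypassed
```

The second call shows why this scales badly: a trivial rephrasing slips straight past the filter.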
To be blunt, LLMs are one of the stupider ways to try and use AI. There is incredible potential in many other applications which don’t attempt to interface with something as irrational and unpredictable as people.
Yeah there’s already a lot of this in play.
You run the same query multiple times through multiple models and do a web search looking for conflicting data.
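That cross-checking idea can be sketched as follows. `ask_model` is a hypothetical stand-in for whatever API each vendor exposes; the point is just that disagreement between models is a cheap signal to escalate or refuse rather than publish.

```python
from collections import Counter

def ask_model(model: str, prompt: str) -> str:
    # Placeholder: in practice this would call each model's real API.
    canned = {
        "model-a": "Paris",
        "model-b": "Paris",
        "model-c": "Lyon",
    }
    return canned[model]

def cross_check(prompt: str, models: list[str]) -> tuple[str, bool]:
    """Return the majority answer and whether all models agreed."""
    answers = [ask_model(m, prompt) for m in models]
    counts = Counter(answers)
    majority, _ = counts.most_common(1)[0]
    unanimous = len(counts) == 1
    return majority, unanimous

answer, unanimous = cross_check("Capital of France?", ["model-a", "model-b", "model-c"])
# `unanimous` is False here, so a cautious system would refuse or re-check
# against a web search instead of showing `answer` as fact.
```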
I’ve had copilot answer a query, then erase the output and tell me it couldn’t answer it after about 5 seconds.
I’ve also seen responses contradict themselves, with later paragraphs saying there are other points of view.
It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.
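A minimal sketch of that summarize-then-gate idea, assuming a crude cue-word scorer in place of what would realistically be a classifier or a second LLM pass:

```python
# Toy negativity check: real systems would use a trained classifier or
# another model call, not a word list.
NEGATIVE_CUES = {"fraud", "criminal", "liar", "convicted"}

def crude_negativity(text: str) -> bool:
    words = {w.strip(".,").lower() for w in text.split()}
    return bool(words & NEGATIVE_CUES)

def gate_output(draft: str) -> str:
    """Suppress the draft if it paints the subject negatively."""
    if crude_negativity(draft):
        return "I can't answer that."
    return draft

print(gate_output("Jane Doe is a respected engineer."))
print(gate_output("Jane Doe is a convicted fraud."))
```

This is also roughly consistent with the erase-after-5-seconds behavior described above: generate first, score second, retract if the score is bad.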
LLMs are (approximately) advanced versions of predictive text; any censorship will make them worse.
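The "advanced predictive text" framing can be made concrete with a bigram model that always picks the most frequent next word. LLMs do essentially this at vastly larger scale, over learned token weights instead of raw counts:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the corpus."""
    nexts = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        nexts[a][b] += 1
    return nexts

def predict(nexts: dict, word: str):
    """Return the most frequent successor of `word`, or None."""
    if word not in nexts:
        return None
    return nexts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat": it follows "the" most often
```

Zeroing out counts you dislike in a model like this doesn't make the remaining predictions smarter, which is the worry about bolted-on censorship.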