Title:
ChatGPT broke the Turing test
Content:
Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test. […]
researchers […] reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time.
A complete contradiction. Trash Nature — it has become nothing but an extremely expensive gossip magazine about science.
PS: The Turing test involves comparing a bot with a human, without knowing which is which. So if more and more bots pass the test, this can be the result either of an increase in the bots’ Artificial Intelligence or of an increase in humans’ Natural Stupidity.
Or it “simply” plays on human biases, which are entirely natural. Seeing faces in anything that somewhat resembles two eyes and a mouth (or sometimes just the eyes and a head-like shape) is pretty hard-wired. We have similar biases with language: if something reads like it was written by a human, we immediately sympathize with it. That is also why these LLMs are so successful and why they cause so many people to fear our AI overlords are right around the corner. Simply because the language is good, we go into “damn, that’s like a human” mode.