AI-powered chatbots now give twice as many false answers as before. The models that "lie" most often come from Inflection, Perplexity, and Meta. That is the finding of a NewsGuard study in which experts tested ten leading assistants.
The researchers presented the chatbots with ten statements known to be false, drawn from politics, business, and healthcare, and asked them to verify the claims. In a third of the cases the questions were posed directly; in the rest, the authors deliberately tried to mislead the bots.
Over the past year, the average share of false information in the answers has nearly doubled, from 18% to 35%. Pi, the bot from the startup Inflection, lied most often, giving wrong answers in 57% of cases.

The Perplexity chatbot's error rate jumped from 0% to 47%, while ChatGPT was wrong 40% of the time. Claude from Anthropic and Gemini from Google lied the least, in 10% and 17% of cases, respectively.
One possible reason for the lies is a change in how artificial intelligence tools are built. Neural networks no longer rely solely on a fixed training database; instead, they pull answers from the internet in real time, and that does not guarantee the information is reliable.
Recall that the CEO of OpenAI has urged people not to trust ChatGPT when making important decisions.