At a time when artificial intelligence has become part of everyday digital life, Russian disinformation narratives appear to have infiltrated popular generative AI tools. A recent audit by NewsGuard, a prominent misinformation watchdog, uncovered a startling trend: top AI chatbots are regurgitating false narratives linked to a Russian state-affiliated disinformation network.
NewsGuard’s audit examined ten widely used chatbots, including heavyweights like OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, Anthropic’s Claude, and the Perplexity search chatbot. The findings are unsettling: the chatbots frequently parroted disinformation originating from a network of fake news sites crafted to resemble credible American outlets. The network is tied to John Mark Dougan, a former Florida sheriff’s deputy now living under asylum in Moscow. Dougan operates a constellation of AI-powered fake news sites with names strikingly similar to those of legitimate news organizations, such as New York News Daily, The Houston Post, and The Chicago Chronicle. These sites churn out content pushing a variety of false narratives, which have found their way into the responses of popular AI chatbots.
During the audit, NewsGuard tested the chatbots on 19 specific false narratives propagated by Dougan’s disinformation network. The results were damning: every one of the chatbots convincingly repeated fabricated narratives pushed by the network, and the falsehoods appeared in roughly a third of the total responses examined. These were not isolated incidents but a consistent pattern. The chatbots even cited Dougan’s fake news websites as credible sources, lending an unnerving layer of legitimacy to the falsehoods.
Among the false claims the chatbots repeated were conspiracy theories about supposed corruption by Ukrainian President Volodymyr Zelensky and unfounded allegations of a murder plot involving the widow of Russian dissident Alexei Navalny. NewsGuard’s methodology was thorough: it tested 570 inputs in total, prompting each of the ten chatbots 57 times, with three prompt framings for each of the 19 narratives. The bots served up disinformation whether they were asked neutrally about a specific conspiracy or explicitly tasked with writing an article on a false, Russian-pushed narrative, which points to a systemic issue in how these AI tools ingest and repeat information.
Notably, NewsGuard did not specify which chatbots performed better or worse at handling the misinformation. The overall implication, however, is clear: these are not ordinary AI hallucinations, in which a model invents facts on its own, but cases of chatbots repeating falsehoods deliberately seeded by a disinformation network. NewsGuard’s findings underscore AI’s troubling new role in perpetuating the misinformation cycle. Users who rely on these chatbots for news and information should be wary, as the odds of encountering and believing false narratives are disturbingly high.
As Steven Brill, co-CEO of NewsGuard, pointed out, the hoaxes and propaganda the chatbots repeated were neither obscure nor the work of an unknown actor. For now, users should exercise caution and skepticism when seeking answers from AI chatbots, especially on controversial issues. The infiltration of AI by disinformation networks is a wake-up call, underscoring the urgent need for better safeguards against the spread of falsehoods in an increasingly AI-driven world.