
March 11

Russian disinformation infiltrates AI chatbots on a large scale

A NewsGuard audit reveals that ten leading AI chatbots repeat disinformation from a Russian propaganda network 33% of the time. By systematically spreading false information, the Kremlin influences how AI systems process and present news.

The Moscow-based disinformation network "Pravda" has developed a sophisticated strategy, according to NewsGuard: rather than directly spreading misleading information to individuals, it targets artificial intelligence. By systematically feeding AI models millions of articles containing pro-Kremlin claims, the network influences how AI systems interpret and distribute news. This could have significant consequences for the reliability of chatbot-generated information.

NewsGuard investigated ten leading AI chatbots, including OpenAI’s ChatGPT-4o, Microsoft Copilot, Google Gemini, and Meta AI. The audit tested how these systems handled fifteen different false narratives originating from a network of 150 Pravda websites. The findings show that AI models repeated Russian disinformation 33% of the time. In 48% of cases, the models successfully debunked the false claims, while in 18% of instances, they provided vague or non-committal responses.

The Pravda network operates through 150 domains, publishing in dozens of languages across 49 countries. Its scale is staggering: in 2024 alone, it published an estimated 3.6 million articles. Many of these sites are tailored to specific regions, such as Ukraine, the European Union, and North America, with misinformation customized to fit local narratives for added credibility. Some domains, like NATO.news-pravda.com and Trump.news-pravda.com, are designed to appear as independent news sources while actually following a centralized propaganda strategy.

One of the key techniques the Pravda network uses to influence AI is advanced search engine optimization (SEO). By ranking highly in search results, Pravda-affiliated websites increase the likelihood that AI models, which often rely on indexed public data, will ingest and incorporate their content. This raises the chances of AI systems presenting disinformation from these sources as fact.

According to the U.S.-based American Sunlight Project (ASP), which also investigated Pravda, this strategy is part of what it terms “LLM grooming”: a deliberate effort to manipulate large language models (LLMs) by flooding the internet with false narratives. This has long-term implications, as AI models trained on these datasets may increasingly integrate and repeat disinformation over time.

These findings raise significant concerns about how AI companies can safeguard their models from state-sponsored disinformation campaigns. NewsGuard warns that the sheer scale of this influence operation makes AI models vulnerable to manipulation and calls for stricter measures to ensure the reliability of AI-generated content.

Read the full report on NewsGuard's website.