
Talking Points: When Chatbots Surface Russian State Media (ISD Investigation)

ISD / AFP / CASM research project


London, October 27, 2025: Large language models (LLMs), often referred to as chatbots, are competing with traditional search engines as users’ preferred way to search for information, particularly news. An investigation released by the Institute for Strategic Dialogue (ISD), an independent research organisation dedicated to safeguarding democracy and human rights, analysed the responses of four popular chatbots (ChatGPT, Gemini, Grok and DeepSeek) to a range of questions in English, Spanish, French, German and Italian on topics related to the Russian invasion of Ukraine. It found that almost one-fifth of responses cited Russian state-attributed sources, many of them sanctioned in the EU.

(ARCHIVE) A visitor watches an AI sign on an animated screen at the Mobile World Congress (MWC) in Barcelona. (AFP Photo / Josep LAGO)

Key Findings

  • ISD tested 300 queries in five languages, and Russian state-attributed content appeared in 18 percent of responses. This included citations of Russian state media, sites tied to Russian intelligence agencies, and sites identified in prior research on chatbot responses as being involved in Russian information operations (a simplified sketch of this kind of test appears after these findings).
  • Almost a quarter of responses to malicious queries designed to elicit pro-Russian views included Kremlin-attributed sources, compared with just over 10 percent of responses to neutral queries. This suggests LLMs can easily be manipulated to reinforce pro-Russia viewpoints rather than promoting verified information from legitimate sources.
  • Among all chatbots, ChatGPT cited the most Russian sources and was most influenced by biased queries. Grok, meanwhile, often linked to Russian-aligned but non–state-affiliated accounts amplifying pro-Kremlin narratives. Individual DeepSeek responses sometimes produced large volumes of state-attributed content, while Google-owned Gemini frequently displayed safety warnings for similar prompts.
  • Some topics surfaced more Russian state-attributed sources than others. For instance, questions about peace talks resulted in twice as many citations of state-attributed sources as questions about Ukrainian refugees. This suggests that LLM safeguards may vary in effectiveness depending on the specific topic.
  • The language used in queries had limited impact on the likelihood of LLMs citing Russian state-attributed sources. While each model responded differently, the sources surfaced to users were roughly similar across the five languages tested. Spanish and Italian queries surfaced Russian state-attributed sources, mostly in English, in 12 of 60 results each, compared with 9 of 60 for German and French, the languages with the lowest rates.
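
For readers who want to reproduce this style of analysis at a small scale, the sketch below shows one plausible way to structure such a test: collect chatbot responses to a set of prompts, extract the URLs cited in each response, and measure the share of responses citing at least one domain from a watchlist of state-attributed outlets. The watchlist, the sample responses and the function names are illustrative placeholders, not the actual ISD/CASM methodology or data.

```python
import re
from urllib.parse import urlparse

# Hypothetical watchlist; the real study used a curated list of Russian state
# media, intelligence-linked sites and known information-operation outlets.
STATE_ATTRIBUTED_DOMAINS = {"example-state-outlet.ru", "example-proxy-site.com"}

URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")


def cited_domains(response_text: str) -> set[str]:
    """Extract the host of every URL cited in a chatbot response."""
    hosts = set()
    for url in URL_PATTERN.findall(response_text):
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host:
            hosts.add(host)
    return hosts


def cites_state_source(response_text: str) -> bool:
    """True if the response cites at least one watchlisted domain."""
    return bool(cited_domains(response_text) & STATE_ATTRIBUTED_DOMAINS)


def citation_rate(responses: list[str]) -> float:
    """Share of responses that cite at least one state-attributed source."""
    if not responses:
        return 0.0
    return sum(cites_state_source(r) for r in responses) / len(responses)


if __name__ == "__main__":
    # Stand-in responses; in practice these would come from each chatbot,
    # for every prompt, in every language tested.
    sample_responses = [
        "Peace talks stalled, per https://example-state-outlet.ru/article reports.",
        "Independent coverage: https://news.example.org/ukraine-refugees",
    ]
    print(f"State-attributed citation rate: {citation_rate(sample_responses):.0%}")
```

In a full study, the same rate would be computed separately per chatbot, per language and per query framing (neutral versus malicious), which is how differences such as the 18 percent overall figure or the gap between neutral and biased prompts can be compared.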

READ THE FULL INVESTIGATION HERE 

This report has been released as part of the research project titled "Exploring the interplay between pro-Kremlin ecosystems and extremist movements in Europe", co-funded by the European Media and Information Fund (EMIF). 
The sole responsibility for any content supported by EMIF lies with the author(s) and it may not necessarily reflect the positions of the EMIF and the Fund Partners, the Calouste Gulbenkian Foundation and the European University Institute.
