AI swallows misinformation and spits it back at us: ‘It’s optimized to give useful information, but not for it to be correct’
ChatGPT and Grok absorb fake news and regurgitate it when asked. Due to their design, these models and their competitors are highly susceptible to disinformation

It’s most evident when it comes to events happening in real time. During the protests in Los Angeles, California Governor Gavin Newsom posted images of soldiers, sent by U.S. President Donald Trump, sleeping on the floor. It was seen as a symbol of poor preparation, casting doubt on the need for the National Guard — a move criticized by the Democratic authorities leading the city and the state. But conspiracy theorists claimed the photos had been generated with AI or were from a different time. Amid the confusion, some users turned to ChatGPT or Grok, X’s AI, to clarify the situation.
Surprisingly, ChatGPT claimed that the images were probably taken during Joe Biden’s inauguration in 2021, while Grok stated they belonged to soldiers during the evacuation of Afghanistan, also in 2021. These claims echoed throughout the tangled web of information surrounding the Los Angeles protests. Social media posts, blog articles, and unreliable news outlets spread hoaxes and unverified data: the AI systems swallowed the disinformation and then repeated it unfiltered. Grok even refused to retract a false response when a user pointed out that it was wrong.
UNACCEPTABLE! This is how Trump is treating troops currently deployed in Los Angeles. Sources said “The troops arrived without federal funding for food, water, fuel, equipment or lodging… This person said state officials and the California National Guard were not to blame” Wow. pic.twitter.com/8rk3lWuNAo
— Harry Sisson (@harryjsisson) June 9, 2025
“These chatbots are optimized to give you useful information, but not for it to be correct,” says Julio Gonzalo, researcher and professor of computer languages and systems at Madrid’s National University of Distance Education (UNED). “They have no real verification mechanism. And if they read something many times, it’s more probable that they will give it back to you as a response.”
One study by NewsGuard, an organization that analyzes misinformation, found that in January, the 10 most popular AI tools repeated false information up to 40% of the time when asked about contested claims related to current events. Among the systems evaluated were ChatGPT, Google Gemini, Grok, Copilot, Meta AI, Anthropic’s Claude, Mistral’s Le Chat and Perplexity.
“We have seen that these chatbots are constantly exposed to a contaminated informational ecosystem in which information on untrustworthy websites takes priority, because they are well positioned in terms of metrics, audience or user participation,” says Chiara Vercellone, NewsGuard senior analyst.
The problem becomes more complex when it comes to breaking news, which tends to involve a certain amount of confusion. “Particularly in moments in which there is not a lot of information about a piece of recent news, or when there are events that take place in locations where there is not a lot of reliable information,” says Vercellone. “These chatbots rely on unreliable information and present it to users.”
When they have to respond to controversial topics where misinformation is widespread, AI systems lose effectiveness. “Models are trained by reading absolutely everything they can find. The process is very costly and has an expiration date,” says Gonzalo. That means that models have only read information up to a certain date, and are unaware of anything published after that.
The UNED researcher explains how AIs obtain information about current events: “Chatbots launch an internet search and read its results to make you a summary. That’s how they are able to manage information in almost real time.” This includes viral content on social media and information that is repeated by various sources. “As tools with which to inform oneself about current events, they are very dangerous. They are going to regurgitate what they’ve read, depending on how you ask them for information,” says Gonzalo.
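A minimal sketch of the retrieval step Gonzalo describes, purely for illustration (the URLs, snippets, and helper names are invented, and this is not any vendor’s actual pipeline): the retrieved text is pasted into the prompt and summarized as-is, with no authority check in between, which is why a viral rumor in the results can surface in the answer.

```python
# Illustrative sketch: how a chatbot can combine a live web search with its
# fixed training data. Retrieved snippets are inserted into the prompt, so
# unreliable pages flow straight into the answer. All names and URLs here
# are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Snippet:
    url: str
    text: str

def build_prompt(question: str, snippets: list[Snippet]) -> str:
    """Assemble the context the model will summarize; nothing here verifies accuracy."""
    context = "\n".join(f"- ({s.url}) {s.text}" for s in snippets)
    return (
        "Answer the question using the search results below.\n"
        f"Search results:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # Hypothetical search results: one reliable report, one viral rumor.
    results = [
        Snippet("https://reliable-news.example",
                "The governor posted photos of Guard troops sleeping on a floor in Los Angeles."),
        Snippet("https://viral-rumor.example",
                "Those photos are actually from Afghanistan in 2021."),
    ]
    print(build_prompt("Are the photos of soldiers sleeping on the floor real?", results))
    # The model summarizes whatever it was handed; if the rumor dominates the
    # results, it is likely to be echoed back in the response.
```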
Content moderation problems
When Elon Musk acquired Twitter, he destroyed the social network’s system for content moderation. It was replaced by “community notes,” which allow users to add context to a post as a way to combat misinformation. Meta announced in January that its platforms Facebook, Instagram and Threads would end third-party fact-checking in favor of a system based on “community notes.”
“Now we’re faced with platforms that have recently removed filters. There is no content moderation, and there is even more misinformation on the internet,” says Carme Colomina, global politics and misinformation researcher at the Barcelona Center for International Affairs, speaking about social media. “The quality of information on the internet is even worse now, which means that the training of AIs is compromised from the beginning.”
When training an AI model, data quality is essential. But these systems are typically fed indiscriminately, with an enormous volume of content, and distinctions are rarely made about which sources they draw on. “Search engines have mechanisms to establish authority, but as of now, language models do not. That makes them more manipulable,” says Gonzalo.
The researcher does say that such rules could be introduced during programming. “You can restrict the language model’s searches to sites that you consider reliable. For example, you can stop them from getting information from Forocoches [a popular Spanish website similar to 8kun]. There’s the potential to restrict the sites they feed off of, although these results are later combined with the model’s internal knowledge.”
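A rough sketch of what such a restriction could look like in practice, assuming a simple allow-list of trusted domains applied to search results before they ever reach the model; the domains and function names below are illustrative, not any real system’s configuration.

```python
# Illustrative sketch of source restriction: search results are filtered
# against an allow-list of trusted domains before being passed to the model.
# The domains listed are example placeholders, not a recommendation.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"apnews.com", "reuters.com", "elpais.com"}  # example allow-list

def is_trusted(url: str) -> bool:
    """Accept a URL if its host is an allow-listed domain or one of its subdomains."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def filter_results(urls: list[str]) -> list[str]:
    """Keep only allow-listed results; everything else is discarded before retrieval."""
    return [u for u in urls if is_trusted(u)]

if __name__ == "__main__":
    candidates = [
        "https://www.reuters.com/world/us/some-report",
        "https://forocoches.com/foro/some-thread",  # excluded, as in Gonzalo's example
    ]
    print(filter_results(candidates))
```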
It’s not a simple job. There are malicious actors who specifically publish false information to contaminate AI systems. This practice already has its own name: LLM (large language model) grooming. “It’s a technique for manipulating chatbot responses. It deliberately introduces misinformation which then becomes training data for AI chatbots,” says Colomina.
NewsGuard has analyzed one of these entities, which goes by the name of Pravda Network, and in a recent report, concluded that the 10 leading AI models absorbed and repeated its misinformation in 24% of all cases. It’s hard to exclude Pravda’s content through restrictions. Its websites publish thousands of articles every week and often repost content that comes from state propaganda sites. In addition, the organization creates new sites constantly, which makes tracking them complicated.
The subject is even more thorny due to the growing popularity of AI. “More and more people use AI chatbots as their preferred search engine and trust the information they give them without reflecting on its reliability or credibility,” says Vercellone, who suggests the classic anti-misinformation tip of verifying the information one receives by using trusted sources.
Colomina identifies yet another troubling aspect of all this: “What most concerns me about this subject is that we have already incorporated AI in our decisions, not just personal ones, but also administrative and political. AI is incorporated at all levels of society. And we are letting it make decisions based on information that we think is neutral, when in reality, it is much more subjective,” she warns.