If you rely on artificial intelligence to stay informed, a cross-border audit suggests you should be cautious. In a study by the European Broadcasting Union and the BBC, leading chatbots consistently failed to produce reliable news summaries: 45% of responses contained at least one significant issue, and 20% contained major accuracy problems such as fabricated facts or outdated material.
Professional journalists in 18 countries, working in 14 languages, analysed thousands of AI-generated responses to questions about current events. The systems examined included ChatGPT, Copilot, Gemini, and Perplexity, tools that are increasingly replacing conventional search and news feeds.
A deep dive into the cross-border AI news audit findings
Every chatbot fell short of fundamental journalistic standards. Reviewers flagged the blurring of fact and opinion, poor sourcing, and outright inaccuracies. Gemini was the least reliable, with 76% of its responses judged to have serious problems, most often inadequate citation and unsubstantiated claims.
What went wrong most often? Recurring failure modes included hallucinations and summaries that confidently presented outdated information as though it were current. Across languages, the errors were systemic rather than isolated, pointing to a fundamental flaw in how these models interpret and condense fast-moving content, not a problem unique to English-language edge cases.
The report's warning is unmistakable: when AI mishandles news this regularly, public trust erodes. According to EBU leadership, the flaws are systemic and global, and they risk undermining audiences' faith in democratic processes and breeding cynicism.
The rise of AI tools and the decline of trust in news
AI chatbots are already becoming a gateway to current events, particularly for younger audiences. According to the Reuters Institute's Digital News Report 2025, fewer than 10% of people overall use AI tools to keep up with the news, but more than one in seven people under 25 do. Here is the disconnect: 75% of American adults say they never get news from chatbots, according to a Pew Research Center survey.
Moreover, people rarely verify what they read, especially when AI is used for search. Research on Google's AI Overviews finds low user confidence and poor click-through to underlying sources. This combination of high convenience and little verification is exactly the setting in which factual errors spread.
As synthetic media spreads, real-world risks grow
These are not academic concerns. Generative video tools such as OpenAI's Sora have shown how convincing fake footage can look, from photorealistic depictions of conflicts that never happened to likenesses of public figures who never consented. Watermarks can be removed, context can be stripped away, and the adage that seeing is believing no longer holds.
Add social media platforms, which are built for engagement rather than accuracy, and the information environment becomes a tinderbox. AI does not polarise people by itself; rather, it amplifies the dynamics that have already fragmented audiences and rewarded sensationalism over careful reporting.
AI news summaries often mislead — here’s why it matters
Large language models are very good at pattern matching, but they are not built for real-time fact-checking. They work by distilling patterns from vast amounts of training data, some of which is out of date, and they struggle to separate new evidence from stale context unless retrieval and sourcing are rigorous. When asked for confident, concise takeaways, models tend to overstate their certainty and understate their caveats, which is the opposite of good journalism.
Sourcing is another weak point. Without citations, readers cannot trace claims back to the source material. The study's reviewers flagged this gap repeatedly; it undermines accountability and makes reliability harder to judge.
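One partial mitigation is to ask for citations explicitly. The sketch below assumes the OpenAI Python SDK; the model name, prompt wording, and example question are illustrative only, and nothing in it guarantees the model's citations are real or current.

```python
# Minimal sketch: asking a chat model to attach a date and a source URL to every
# claim in a news summary. Assumes the OpenAI Python SDK; the model name and
# prompt are illustrative, and any URLs returned still need independent checking.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Summarise the user's news question in at most five bullet points. "
    "For every factual claim, include the publication date and a source URL. "
    "If you cannot name a source or a date, write 'unverified' instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What happened in the latest EU AI Act negotiations?"},
    ],
)

print(response.choices[0].message.content)
```

Even with a prompt like this, a model can invent both the URL and the date, which is precisely why the study's reviewers treated missing or unverifiable citations as serious failures rather than cosmetic ones.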
How to stay informed and verify AI-generated news
Treat AI as a starting point, not the last word. Check timestamps, click through to the original news source, and favour outlets that publish transparent methodology and corrections. Ask AI tools for their sources, then confirm those sources independently, for example with a quick check like the one sketched below.
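As a rough illustration of that "confirm the sources" step, the standard-library sketch below only checks that each cited URL resolves and reports whatever date information the server exposes. The URLs listed are placeholders; this verifies existence and rough freshness, not whether the page actually supports the claim.

```python
# Rough sketch of independently confirming cited sources using only the Python
# standard library: check that each URL resolves and print any server-supplied
# date as a crude freshness signal. This is not fact-checking the content itself.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

cited_urls = [  # URLs copied out of an AI-generated summary (placeholders)
    "https://www.ebu.ch/news",
    "https://www.bbc.co.uk/news",
]

for url in cited_urls:
    req = Request(url, headers={"User-Agent": "source-checker/0.1"})
    try:
        with urlopen(req, timeout=10) as resp:
            date = resp.headers.get("Last-Modified") or resp.headers.get("Date", "unknown")
            print(f"OK   {resp.status}  {date}  {url}")
    except HTTPError as err:
        print(f"HTTP {err.code}  {url}")
    except URLError as err:
        print(f"FAILED  {err.reason}  {url}")
```

A dead link, a redirect to a homepage, or a page far older than the claim it supposedly supports are all signals that the summary deserves closer scrutiny.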
Newsrooms are experimenting with AI tools to streamline workflows, though leading publications and standards bodies insist on human oversight. The Associated Press, for example, cautions against publishing full news articles generated by AI without stringent editorial review. Initiatives such as the Coalition for Content Provenance and Authenticity are pushing for tamper-evident media provenance so that audiences can verify the legitimacy of what they are viewing.
In conclusion, use AI as a starting point for news, not as the facts
AI can help you skim the headlines, but the audit's conclusion is clear: it is an unreliable primary source.
With 45% of the examined responses showing serious problems and 20% containing major accuracy errors, the risk is clear. Treat AI news summaries not as settled fact but as leads to investigate. Read the reporting, click the links, and let verified facts rather than rhetoric guide your understanding.