Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content

Posted by Alex09464367

9 comments
  1. I am not an AI defender or whatever; I use it every day for work and it is useful. But using it blindly is stupid af, especially for news or complex tasks. However, “Largest study of its kind shows news content misrepresented 45% of the time – regardless of language or territory” would still be a valid headline…

  2. Though I use it for some things, I remember when I searched for an episode guide for an old TV show.

    The top two AI search results were completely untrue; one of them was using a Reddit post with zero comments as its source.

    So if it’s bad at getting me a simple list of episodes from a TV show, then I really shouldn’t trust it for anything above that.

  3. I’ve noticed this in AI YouTube videos as well, when the subject is a review or synopsis of some piece of media. The AI will say things like “the One Ring was made by the Elves and stolen by Sauron.” If you didn’t know the story it was talking about, you might not even know that the AI is wrong.

  4. Really cool that as a species we’re fucking the environment for this shit that doesn’t work because every company has decided it’s the future and they need to get in on the ground floor.

  5. I believe this misrepresentation is all due to the so-called alignment guardrails forced into the LLMs by their developers.

    In their effort to ensure that their AI does not offend their target users, prompt engineers use tools that force the model to overcorrect and prioritize a pre-approved, non-controversial narrative over the actual facts.

    For me, the guardrails that “ensure” better answers are what make the AI abandon factual accuracy to be more “woke” or “racist”. This, in a way, is what actively misrepresents the news content.

  6. If only we had some sort of engine on the Internet that we could use to search for information. Maybe it could present us with different pages so we could do our own research instead of asking a fancy dictionary how to complete math problems…

  7. Ever play telephone?

    Event happens, politician lies about it, reporter tells their version of it, media company is paid to reinterpret it, AI is designed to package it for the target, Redditor makes up a title, comments discuss something else entirely, and no one cares to read the source.

    Drifting 45% off topic at each step is how we arrive at things like [Insert any analysis of a Leftist stance vs. the name they call it].

Comments are closed.