In an era where misinformation spreads rapidly and scientific credibility is under constant scrutiny, the choices science journalists make about which studies to cover matter more than ever. Journal reputation, impact factors and perceived prestige often shape these decisions – but at what cost to diversity, accuracy and public trust in science?
To explore how reporters navigate these reputation-driven cues, Technology Networks spoke with Alice Fleerackers, PhD, assistant professor of journalism and civic engagement in the Department of Media Studies at the University of Amsterdam, whose work examines how journalists assess the trustworthiness of scientific journals. Fleerackers sheds light on the hidden dynamics behind science reporting, the risks of over-relying on prestige and how critical research literacy could reshape the future of science communication.
Isabel Ely, PhD (IE):
What inspired you and your collaborators to focus on how science journalists navigate reputation-driven cues like impact factors and journal prestige in their reporting decisions?
Alice Fleerackers, PhD (AF):
One of our earlier studies found that science journalists are sensitive to controversies within the academic world, including debates around impact metrics and their role in research assessment. In that study, some journalists were highly critical of these reputational cues, while others seemed to find them unproblematic. This finding, along with other hints scattered throughout the literature, was part of the inspiration to dig deeper into journalists’ strategies for evaluating a journal’s trustworthiness.
More broadly, this study was part of a larger project aimed at helping readers assess whether journals adhere to scholarly standards – especially as concerns about misinformation and predatory publishing rise. We chose to examine journalists’ assessment strategies because of their impact on what research the public sees. If journalists use problematic criteria to decide which studies to report on, this shapes what science we learn about – as well as what remains hidden from view.
IE:
What do you see as the potential downstream consequences for public understanding of science when media predominantly cover research from legacy journals like Nature?
AF:
These journals publish lots of high-quality, important research that the public should know about. But they also publish problematic studies, like any journal can. Relying on a journal’s reputation, without critically assessing the quality of the research it publishes, means highly flawed studies can make headlines. People and policymakers, in turn, might make life decisions based on unreliable findings.
IE:
Your paper suggests journalists rely heavily on the journal’s reputation – often at the expense of diversity in scientific reporting. Can you elaborate on how this dynamic might particularly marginalize research from newer or Global South journals?
AF:
One of the cues journalists used to evaluate a journal’s trustworthiness was whether there were spelling mistakes, grammar issues or typos. Journalists explained that this was because a “high-quality” journal would carefully edit anything it published. But this isn’t always the case in practice – sometimes, the responsibility for proofreading falls on the authors of the study, not the journal editor.
Since the dominant language of journal publishing remains English, this is a disadvantage for scholars writing in a language that is not their first (or even their second), like many scholars in the Global South. In addition, journalists were often reluctant to report on a journal they’d never encountered before, meaning smaller, newer journals are less likely to get coverage than those that are already well-established.
IE:
You mention that open‑access (OA) journals were sometimes considered less reliable. Could you share more about why that perception persists?
AF:
Within academia, there’s a longstanding misconception that OA journals are predatory, and this misconception seems to have trickled down into how journalists understand predatory publishing. At the heart of the connection between OA and predatory publishing are article processing charges (APCs). Some completely legitimate OA journals charge authors APCs to publish their work (known as “gold OA” or “hybrid OA” journals), but so do predatory publishers. As a result, some scholars – and apparently journalists – believe any journal that asks you to pay to publish could be predatory.
It’s ironic, because some of the journals journalists trusted most, like those in the Nature portfolio, also publish OA research (often for an exorbitant fee). On the flip side, many OA journals don’t charge authors APCs at all – these are known as “diamond OA” journals.
IE:
What recommendations do you have for journalism educators or media outlets to help reporters critically assess unfamiliar journals or lesser-known research?
AF:
IE:
What does your ideal future look like in terms of how scientific findings are selected, reported and received by the public?
AF:
In an ideal world, both journalists and the public would have what I will call “critical research literacy” – not just knowledge about scientific facts, but deep, reflexive knowledge about how science works. This includes understanding the value and logic of the scientific system, but also its pitfalls: that peer review is an imperfect quality control mechanism, that scientists face intense pressure to publish, and that this, along with biases and self-interest, can impact the quality and integrity of scientific work, just as in any other profession.
Science is an incredible way of gaining knowledge. However, to benefit from the wealth of scientific knowledge available to us, we must understand its flaws and limitations. In journalistic terms, this means moving from a “trust, but verify” mindset to a “verify, then trust” approach.