
Posts on social-media platform X that are critical of scientific research can act as early warning signs of problematic articles, according to two large studies. The findings reflect how post-publication commentary can help to identify errors or fraudulent results, say scientists.
Previous research has shown that potentially problematic articles get substantial attention on social media before being formally retracted1, and receive more attention than do similar articles that do not get retracted2. Er-Te Zheng, a PhD student studying computational social science at the University of Sheffield, UK, wanted to find out whether social-media platforms such as X could be used to identify integrity issues in articles.
Zheng and his colleagues examined3 thousands of tweets that referenced articles that were later retracted, as well as articles that were not. The team defined critical tweets as posts containing sarcasm, criticism, accusations or doubts about an article. Of the 604 studies that went on to be retracted, 8.3% had at least one critical post on X before retraction, compared with only 1.5% of articles that were not retracted. The researchers say that nearly 1 in 12 of the retracted articles could therefore have been flagged to publishers for greater scrutiny on the basis of social-media posts.
Red flags
Another analysis4 looked at whether the sentiment of tweets or inclusion of retraction-related terms influenced how quickly a paper was retracted. Hajar Sotudeh, who studies academic publishing at Shiraz University in Iran, and her colleagues analysed 1,200 retracted papers that were published between 2019 and 2022, and 16,500 posts on X about those papers. The researchers found that negative sentiment in tweets and the inclusion of any of 95 ‘red flag’ words — such as fraud, retract, hoax or flawed — were associated with an increased risk that the paper would be retracted.
Sotudeh and her team found that papers mentioned in tweets containing red-flag words were retracted faster than papers whose posts did not include these terms. Longer tweets were also associated with faster retraction than were shorter posts, although the effect was weaker when red-flag words were present. The team notes, however, that its analysis cannot establish causality between negative X posts and retractions, and existing studies have not shown a causal relationship between criticism on social media and publishers’ decisions to retract.
These studies show that social-media platforms are an important venue for post-publication commentary, says Virginia Barbour, co-chair of the Declaration on Research Assessment initiative, which aims to reform how scientific outputs are evaluated, and a researcher in academic publishing at the Queensland University of Technology in Brisbane, Australia. Such commentary is needed because conventional peer review can miss errors or fraud, she says.
Sotudeh says that researchers and publishers are paying more attention to online commentary, but there is not yet any guidance on how publishers can translate social-media critiques into evidence-based editorial decisions. That is a problem, she says, because posts calling for retraction can reflect a person’s biases rather than a scientific justification. “If publishers are to give more weight to such discourse, they will need systems for screening, verifying, and contextualizing claims before taking action,” Sotudeh adds.