2021 disinfo research highlights

  1. This is basically a list of links to 2021 disinformation research papers that the EU East StratCom people consider particularly significant, with summaries of what each one found.

    What I’d call the highlights of the highlights:

    “Superspreaders” — a few people who have a very large impact — are a significant factor:

    >Another interesting piece by Kai-Cheng Yang, Francesco Pierri and Pik-Mai Hui compared the spread of low-credibility content on Facebook and Twitter. What both platforms have in common is the presence of “superspreaders” – a minority of influential users who generate the majority of content (including toxic). On both platforms there is evidence of coordinated, inauthentic behaviour.

    Conspiracy theory believers get attracted to other conspiracy theories:

    >Once down the rabbit hole, it’s hard to get out, as belief in one conspiracy theory often leads to belief in another, and so it goes.

    It’s not so much that people are politicized as that they’re simply too ignorant of the areas in question to evaluate the truth of what they’re reading:

    >They provide evidence contradicting the common narrative that partisanship and politically motivated reasoning explain why people fall for mis- and disinformation. Rather, they link poor truth discernment to a lack of reasoning and relevant knowledge.

    Debunking does not make people “dig their heels in”:

    >Many of you have probably heard the statement that fact-checking can backfire and make people dig deeper into their beliefs. Hence, it should be avoided. Most stories that claim this and refer to a source, if at all, are pointing to one of the two studies conducted about a decade ago. However, almost none of the later studies have managed to repeat those results.

    It’s more effective to explain that something is false after someone has heard it than before:

    >The authors found that providing fact-checks after headlines (i.e. debunking) improved subsequent truth discernment more than providing the same information during (i.e. labelling) or before (i.e. prebunking) exposure.

    Simple image memes are a significant factor:

    >Regarding deepfakes, Michael Yankoski, Walter Scheirer and Tim Weninger advise us to focus on popular, not perfect, fakes in their research piece on meme warfare. Although a possible threat, they argue that sophisticated faked content isn’t the most pressing problem. Instead, the challenge lies in detecting and understanding much more crudely produced and widely available content: memes.
