Online disinformation

Video Script

Combating disinformation – moving beyond takedowns and legal interventions 

By now, you probably have a good sense of how complex disinformation is: from how we define it, to how it’s regulated, and most of all, what we can – and should – do about it. In the blog post, we looked at the broad range of categories that fall under the umbrella term ‘disinformation’. This includes who is responsible for creating and spreading it – whether it’s government actors, private individuals, or automated bots – and the medium they use, for instance disinformation campaigns on Facebook and Twitter or through so-called ‘peer-to-peer’ networks, like WhatsApp. We also talked about the variation in the targets of disinformation, and in the harms caused to individuals and, sometimes, to broader society. Finally, the blog post touched on how disinformation impacts freedom of expression, and some key rights issues that must be borne in mind when it comes to how it’s regulated. 

The infographic built on this regulation piece, outlining the differentiated responses to disinformation so far from the private sector, civil society, states, and regional organisations. These efforts range from self-regulation by platforms, through to imposed regulation by states and regional bodies like the European Union. 

In this video, we’re going to round out this knowledge package by going beyond legal efforts to rein in disinformation and looking at two broader approaches – technological developments and educational campaigns – that counter its harmful effects without blocking or removing speech. The first is technological solutions that work at the source: solutions that aim to slow the spread or limit the reach of harmful speech, such as disinformation. The second is media and information literacy initiatives, which target the audience and seek to limit the harms and disruption that disinformation causes. We’ll look at each of these solutions in turn. 

In Technology We Trust? 

A knee-jerk response to harmful speech – like hate speech and disinformation – is simply to remove it from platforms altogether and, in some cases, to suspend or ban the individuals responsible for sowing and spreading it. But this raises all kinds of concerns for the right to freedom of expression and other rights: who gets to decide what constitutes harmful speech? Does the assessment depend on the broader social and political context? What if the platforms get it wrong, or take it too far? For instance, during the pandemic, YouTube removed a video posted by a professor of medical and scientific research at Stanford University. In the video, the professor examined data relating to COVID-19, questioned the need for ongoing lockdowns and urged a more targeted response to protect the most vulnerable. YouTube removed the video on the basis that it contained ‘medical misinformation’ – a decision that drew significant criticism for taking down legitimate medical information and critical commentary by an expert in the field.  

There are all kinds of issues with takedowns, whether platforms carry them out of their own volition, to avoid liability or political or societal backlash, or under state regulation that may capture legitimate speech – a criticism often made of Germany’s NetzDG legislation, which we touched on in the infographic. Of course, takedowns may also follow legitimate court orders, issued through a process that considers the right to freedom of expression and whether the restriction is proportionate in a democratic society. But even then, there are other ways to limit the reach of disinformation without banning or restricting the speech itself. We’re going to look at two examples. 

(i) Adding Friction 

The first is to add so-called ‘friction’ to slow the spread of disinformation.1 This can take different forms, such as warnings and notifications before individuals can share certain content, or limiting the number of times an item can be shared by a single account.   

Adding warnings and notifications has been shown to limit the spread of harmful content. I’ll give you an example. A researcher at Princeton University conducted a large-scale study of a Reddit community with 13 million subscribers, to see whether displaying community rules could reduce concerns about harassment and influence the behaviour of participants on the forum. The subreddit at the centre of the study hosted discussions about peer-reviewed journal articles and live Q&As with scientists – but it was also a hotbed of harassment. Commenters mocked Professor Stephen Hawking’s medical condition during a Q&A in 2015, and a discussion about research on obesity in women resulted in the removal of almost 1,500 of 2,200 comments. The study assessed the impact of displaying community rules at the top of a discussion.

The notice explained the kinds of comments that weren’t allowed – from memes to abusive comments – and advised that a community of 1,200 moderators encourages respectful discussion. 

The experiment found that displaying the community rules not only influenced who joined discussions, but also how they behaved. The study concluded that “in online discussions, where unruly, harassing behavior is common, displaying community rules could reduce concerns about harassment that prevent people from joining while also influencing the behavior of those who do participate”.2   

I know what you’re probably thinking – this might not work so well for disinformation, right? Well, there are other ways to add friction that may be even more relevant here. In India, for example, when concerns were raised that WhatsApp was being used to circulate misinformation leading to real-world violence, Facebook (which owns WhatsApp) limited the number of users to whom a news item could be forwarded.3  
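To make the idea of friction concrete, here is a minimal sketch, in Python and purely for illustration, of how a messaging app might cap forwards and warn users about widely shared messages. Every name and threshold here – Message, forward_message, MAX_FORWARDS, VIRAL_THRESHOLD – is a hypothetical stand-in, not WhatsApp’s actual implementation.

```python
from dataclasses import dataclass

MAX_FORWARDS = 5       # hypothetical cap on chats a single forward can reach
VIRAL_THRESHOLD = 25   # hypothetical point at which a "frequently forwarded" warning appears


@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many times this message has already been forwarded


def forward_message(msg: Message, target_chats: list[str]) -> list[str]:
    """Apply two kinds of 'friction' before a forward goes through."""
    # Friction 1: cap the number of chats a single forward can reach.
    if len(target_chats) > MAX_FORWARDS:
        target_chats = target_chats[:MAX_FORWARDS]
        print(f"You can only forward this message to {MAX_FORWARDS} chats at a time.")

    # Friction 2: warn the sender when the content has already spread widely.
    if msg.forward_count >= VIRAL_THRESHOLD:
        print("This message has been forwarded many times. Are you sure you want to share it?")

    msg.forward_count += len(target_chats)
    return target_chats


if __name__ == "__main__":
    chain_letter = Message("Breaking: miracle cure found!", forward_count=40)
    forward_message(chain_letter, [f"chat_{i}" for i in range(8)])
```

Neither check blocks the message outright; the point of friction is to slow sharing and prompt a second thought, not to remove speech.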

(ii) Dissuading Sharing 

Short of limiting the number of people who can receive posts or messages, there are ways to dissuade individuals from sharing harmful content. One example was the #WeCounterHate project. AI would flag content as potential hate speech and, where a human moderator confirmed the flag, a reply would be generated alerting the poster – and anyone who saw the tweet – that the hate speech was being countered, urging them to think twice before retweeting it, and advising that every retweet would result in a donation to a non-profit fighting for equality.4 The project’s pilot phase showed a lot of promise: the retweet rate of hate speech fell by more than 65%, and one in five authors of hateful tweets deleted them.  
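The workflow behind that kind of counter-speech – a classifier flags a post, a human confirms the flag, and only then is a counter-message generated – can be sketched in a few lines. The sketch below is illustrative only: the function names (classify_hate_speech, human_moderator_confirms, counter_hate) and the keyword-based ‘classifier’ are hypothetical placeholders, not #WeCounterHate’s actual system.

```python
from typing import Optional

# Toy stand-in for an ML classifier; a real system would call a trained model or moderation API.
HATE_TERMS = {"<slur>", "<abusive phrase>"}


def classify_hate_speech(post: str) -> float:
    """Return a crude 'hate score' between 0 and 1 based on a keyword list."""
    hits = sum(term in post.lower() for term in HATE_TERMS)
    return min(1.0, float(hits))


def human_moderator_confirms(post: str) -> bool:
    """Placeholder for the human-in-the-loop review step described in the script."""
    return True  # in practice, a person reviews each flagged post


def counter_hate(post: str, flag_threshold: float = 0.5) -> Optional[str]:
    """Flag a post, have a human confirm the flag, then generate the counter-reply."""
    if classify_hate_speech(post) < flag_threshold:
        return None  # the classifier does not flag the post; nothing is sent
    if not human_moderator_confirms(post):
        return None  # the human moderator overrules the classifier
    # The reply is visible to the poster and to anyone reading the thread.
    return ("This tweet is being countered: every retweet now triggers a donation "
            "to a non-profit fighting for equality. Think twice before sharing it.")
```

Keeping a human moderator between the classifier and the public reply is what distinguishes this approach from automated takedowns: the speech stays up, and only confirmed hate speech attracts a counter-message.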

In Education We Trust? 

One of the other ways to combat the harms of disinformation is through media and information literacy initiatives. This could include tips on how to identify so-called ‘fake news’, or explanations about how platforms use algorithms to promote certain content online and curate our news feeds, in part based on previous likes and dislikes. Unlike measures which remove content online, media and information literacy initiatives focus on how to better prepare users to discern what’s true and what’s false; what’s authentic and what’s not. This piece of the puzzle should not be overlooked: the internet is a bottomless pool of information, but it’s where many of us now go to get our news and learn about the world around us. Ensuring that we are better consumers of news may be an important means of countering the harms of disinformation and restricting or limiting its spread. Moreover, some studies have shown that increased media and information literacy leads to decreased sharing of inaccurate stories or ‘fake news’.5  

A large-scale study looked at the effectiveness of digital media literacy in the US and India, and found that “relatively short, scalable interventions could be effective in fighting misinformation around the world”.6 

Other studies have shown more mixed results: research in the US, for instance, suggests that digital literacy may help people distinguish accurate content from misinformation, but is less effective when it comes to deterring them from sharing misinformation online.7  

One reason for these differing results may be what is meant by “digital literacy”. The studies that showed mixed results, for instance, defined media literacy as “familiarity with basic concepts related to the internet and social media” – quite a low bar. By contrast, civil society organisations in many countries, throughout Europe and beyond, have launched digital literacy campaigns in schools, starting from primary school, with the goal of producing “active, responsible citizens and voters”.8 Some media and information literacy initiatives specifically target disinformation. Take, for instance, ‘Lie Detectors’, an ‘independent and award-winning Media Literacy organisation’ which works to ‘equip young people and teachers to tell apart fact from falsehood and opinion online’.9 Journalists deliver sessions explaining how journalism works and walking children through the basics of fact-checking and media bias. In Finland, students are tasked with creating their own fake news campaign to better understand how and why disinformation is created and shared.10 

Conclusion 

Given the complexity of disinformation, there’s unlikely to be a ‘silver bullet’ which can stop or limit its spread, or avoid all of the potential harms it causes. But as we’ve shown, there are different tools in the toolkit beyond removals and takedowns which do not pose the same risks to long-standing rights like freedom of expression. These include adding friction into online discourse to make people think twice before sharing disinformation, and incorporating media and information literacy initiatives in schools and in broader public education efforts. After all, in technology and education we trust.  


  1. See Molly Land and Rebecca Hamilton, ‘Beyond Takedown: Expanding the Toolkit for Responding to Online Hate’ in Propaganda, War Crimes Trials and International Law: From Cognition to Criminality 143 (Routledge 2020) at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3514234; NYU Stern Center for Business and Human Rights, ‘Harmful Content: The Role of Internet Platform Companies in Fighting Terrorist Incitement and Politically Motivated Disinformation’ (New York 2017) at https://issuu.com/nyusterncenterforbusinessandhumanri/docs/final.harmful_content._the_role_of_?e=31640827/54951655
  2. J. Nathan Matias, ‘Preventing harassment and increasing group participation through social norms in 2,190 online science discussions,’ Proceedings of the National Academy of Sciences of the United States of America (PNAS), 14 May 2019, at https://www.pnas.org/content/116/20/9785. See also Land & Hamilton, pp 9-10.
  3. Land & Hamilton, p 10; V. Ananth, ‘WhatsApp Races Against Time to Fix Fake News Mess Ahead of 2019 General Elections,’ The Economic Times (24 July 2018) at https://economictimes.indiatimes.com/tech/internet/whatsapp-races-against-time-to-fix-fake-news-mess-ahead-of-2019-general-elections/articleshow/65112280.cms/.
  4. See https://www.forbes.com/sites/afdhelaziz/2019/12/25/the-power-of-purpose-how-we-counter-hate-used-artificial-intelligence-to-battle-hate-speech-online/; Land & Hamilton, p 13.
  5. https://www.tandfonline.com/doi/full/10.1080/23311983.2022.2037229; https://thehill.com/changing-america/enrichment/education/598795-media-literacy-is-desperately-needed-in-classrooms/
  6. Andrew M Guess et al, “A digital media literacy intervention increases discernment between mainstream and false news in the United States and India” (PNAS, 7 July 2020) at https://www.pnas.org/content/117/27/15536.
  7. Harvard Kennedy School, Misinformation Review: Digital literacy is associated with more discerning accuracy judgments but not sharing intentions (6 December 2021) at https://misinforeview.hks.harvard.edu/article/digital-literacy-is-associated-with-more-discerning-accuracy-judgments-but-not-sharing-intentions/; Sarah Brown, “Study: Digital literacy doesn’t stop the spread of misinformation” (MIT Management Sloan School, 5 January 2022) at https://mitsloan.mit.edu/ideas-made-to-matter/study-digital-literacy-doesnt-stop-spread-misinformation.
  8. The Guardian, “How Finland starts its fight against fake news in primary schools” (29 January 2020) at https://www.theguardian.com/world/2020/jan/28/fact-from-fiction-finlands-new-lessons-in-combating-fake-news; MediaSmarts, Canada’s Centre for Digital and Media Literacy, at https://mediasmarts.ca/. See also Land & Hamilton, pp 10-11.
  9. See p 23 of https://www.etwinning.net/downloads/BOOK2021_eTwinning_INTERACTIF.pdf
  10. https://www.theguardian.com/world/2020/jan/28/fact-from-fiction-finlands-new-lessons-in-combating-fake-news