Safeguarding freedom of expression amidst disinformation on online platforms
Katie Pentney & Menno Muller
Disinformation – or so-called ‘fake news’ – seemingly surrounds us: from politicians spreading it, to newscasters reporting on it; from pollsters asking us about it, to social media sites warning us against it. In the last few years, it’s been hard to shake the feeling that disinformation is a pervasive problem without an easy solution.
It’s unsurprising, then, that combatting disinformation has become an increasing pressure point for states and regional organisations like the EU. Concerns have grown about the extent of disinformation in the online ecosystem, the disruption it causes to public discourse, and its spill-over effects in the real world. In addition, both the spread of disinformation and the efforts taken to date to address it have set off alarm bells because of their impacts on long-recognised rights, including the right to freedom of expression, the right to non-discrimination, and the right to an effective remedy.
Disinformation is a much-talked-about, but somewhat nebulous, phenomenon. This blogpost goes back to basics to address what disinformation is and why it’s a concern from a human rights perspective. For more on what’s being done to address it by states and regional organisations, platforms, civil society, the media and users, check out this infographic.
Defining disinformation
Disinformation is not a new problem – it dates back to ancient Rome – but the scale and reach of online disinformation, the myriad actors involved and the complexities of any attempt to combat it are decidedly modern challenges. To understand disinformation, and to come up with effective responses to it, it is first necessary to name and define the problem.
To start with, what exactly is meant by ‘disinformation’, particularly in the online realm? The definition is constantly evolving, but it commonly refers to false information that is knowingly created and shared in order to cause harm (drawn from this Council of Europe report on ‘information disorder’ by Claire Wardle and Hossein Derakhshan). Three distinct requirements must be met for this threshold to be reached: (i) the information must be false; (ii) the distributor of the information must know it to be false; and (iii) the intention in sharing it must be to cause ‘harm’. Other definitions are more limited: for instance, the one used by the European Commission requires that the false information be created and shared for ‘economic gain or to intentionally deceive the public’ (see the European Commission’s Communication; see also this Report by the European Regulators Group for Audiovisual Media Services). These three requirements also distinguish disinformation from other types of ‘information disorder’, like misinformation (where false information is shared but no harm is intended) and malinformation (where the information is genuine but used in such a way as to cause harm – for instance, selectively or in a misleading fashion).
A (now infamous) example illustrates the interplay of the three elements of disinformation – falsity, knowledge and intention to cause harm. Following the 2020 US presidential election, outgoing President Trump (and his team) alleged voter fraud and questioned the integrity (and veracity) of the electoral result on a near-daily basis until President-elect Biden’s inauguration, despite knowing the claims were false. The harm resulting from these claims was immediate – a violent insurrection at the US Capitol – but it may also prove long-lasting.
This is but one example of many: disinformation about COVID-19, climate change and the ongoing war in Ukraine has proliferated online in recent years, with devastating (and sometimes deadly) consequences. It is little wonder, then, that states and regional organisations like the EU and the Council of Europe have tried to intervene. But there is significant variation in the kinds of disinformation that pervade online forums, which is one of the reasons that online disinformation has proven so difficult to combat. The variation can be categorised along the following lines:
- the actors responsible: disinformation is sown and spread by a range of actors, from government actors to private individuals; from foreign disinformation campaigns to automated ‘bots’;
- the medium used: disinformation may be spread on large social media platforms, like Facebook, Twitter, YouTube and Reddit, or on peer-to-peer communications networks like WhatsApp;
- the targets: the targets of disinformation may be individualized or localized (for instance, disinformation about a particular individual or event) or broad and diffuse (targeting minority groups or women or particular states, for instance); and
- the harms caused: the harms caused fall along a spectrum. Of least concern is disinformation without real-world impact (that is, disinformation which was neither believed nor acted on); at the opposite end – and thus of most concern – is disinformation that harms individual rights or democratic values, such as the rule of law.
The spread of disinformation and the measures taken to address it by states, regional organisations, platforms, civil society, the media and users raise several human rights challenges, in particular in relation to freedom of expression.
What’s all the fuss about?
States, regional organisations and platforms have increasingly sought to implement measures to counter the (potentially) harmful effects of online disinformation. These efforts have varied: national governments have enacted laws to address disinformation during elections; the EU has adopted the Digital Services Act and sought to combat the spread of disinformation through its portal, ‘EUvsDisinfo’; and platforms like Instagram and Twitter have issued content warnings and de-platformed users for violating their terms of service.
This is a marked departure from these actors’ earlier, largely ‘hands-off’ approach to online disinformation: platforms, for one, initially did little to stem the tide of disinformation and other kinds of harmful speech (like so-called ‘hate speech’) – a position sharply criticised in the wake of Facebook’s role in the genocide in Myanmar. But the proliferation of harmful content online – including disinformation – and platforms’ increasing efforts to curb it through so-called ‘content moderation’ posed problems, too. In particular, measures to curb harmful speech online – whether imposed by states or undertaken by platforms of their own volition – largely centred on removals and takedowns of content, as well as suspensions and user bans. These can be serious interferences with individuals’ right to express themselves, and the lack of transparency or remedy in cases of algorithmic or human error in moderating content has led to calls for better oversight by states and greater transparency from platforms (see, for instance, the Santa Clara Principles).
Part of the challenge in addressing disinformation is that unlike unlawful speech – such as terrorist content, child pornography or some particularly serious forms of hate speech, which can be legitimately restricted – disinformation usually falls into the category of ‘lawful but awful’ speech. We may not like it, and it may be deeply problematic for public discourse; but any restrictions on individuals’ lawful speech must be narrowly circumscribed for two important and interrelated reasons.
First, as previous blog posts in this series have explained, freedom of expression – as protected under Article 10 of the European Convention on Human Rights (ECHR) – does not merely protect information and ideas we find palatable, but also those that ‘offend, shock or disturb’ (Handyside v. United Kingdom at [49]). While disinformation about electoral results, COVID vaccinations or the truthfulness and trustworthiness of the media may fall into one (or all) of these categories, traditional interpretations of freedom of expression suggest that it protects individuals’ right to impart such disinformation, and the public’s right to receive it. Indeed, McGonagle notes that ‘the protection afforded by Article 10 ECHR is not limited to truthful information’ (p. 208).
Second, we are naturally wary of states – or others in positions of power and authority – limiting the information that can come into the public domain, including when such restrictions arise from their own beliefs about what information is ‘true’ and what is ‘false’. One of the main rationales for protecting freedom of expression is the assumption that ‘truth will most likely surface when all opinions may freely be expressed, when there is an open and unregulated market for the trade in ideas’ (Schauer, p. 16). This too – notwithstanding many historical examples of falsity trumping truth – seems to argue against the regulation of (dis)information, or at least to call into question how to do so in a way that duly respects freedom of expression.
This is not to suggest that freedom of expression is an all-out bar to any efforts to combat disinformation. To the contrary, freedom of expression protects not only individuals’ right to impart and receive information; it also protects the public’s right to be informed (Sunday Times (no. 1) v. United Kingdom at [66]). This strikes at the heart of another of the main rationales for freedom of expression: ensuring that the citizenry is informed, and armed with the information necessary to hold government to account. These mechanisms for accountability – an informed electorate, free and fair elections, a Fourth Estate which can report on government (in)action – may all be affected by disinformation to greater or lesser extents, as the riot at the US Capitol laid bare. It is unsurprising, then, that increasing attention is being paid to the short- and longer-term effects of disinformation in eroding democracy and cementing polarisation. As has been noted, ‘If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent – empowering authorities along the way’ (p. 1786). The Russian invasion of Ukraine casts in stark relief the consequences of this decay in truth and trust. In light of the scale and reach of disinformation, and its potential effects on our rights and freedoms, these broader societal impacts deserve significant attention and reflection.
From fuss to muss
Disinformation may not be a new problem, but the rise of online platforms has allowed it to grow in scale and scope to unprecedented levels, posing a threat both to individual rights and to society as a whole. The range of actors, media and targets involved makes it unlikely that any single solution will prove a ‘silver bullet’. That is not to say nothing can be done; rather, multi-dimensional and varied approaches should be pursued. Indeed, many efforts are already underway: different actors – from states to civil society, platforms to social media users – are approaching the problem in different ways (as set out in this infographic). A mutually supporting combination of these approaches might curb the spread of disinformation and limit its real-world harms while ensuring that fundamental rights are respected. This will require a concerted effort that keeps respect for the right to freedom of expression as its cornerstone, balancing it against sometimes competing rights such as the protection against discrimination. If the internet is to remain a reliable source of information and our public discourse is to retain its vibrancy, that effort is undoubtedly necessary and worthwhile.