@phdthesis{Leerssen2023a,
title = {Seeing what others are seeing: Studies in the regulation of transparency for social media recommender systems},
author = {Leerssen, P.},
url = {https://dare.uva.nl/search?identifier=18c6e9a0-1530-4e70-b9a6-35fb37873d13},
year = {2023},
date = {2023-04-21},
abstract = {This dissertation asks how the law can shed light on social media recommender systems: the algorithmic tools which platforms use to rank and curate online content. Recommender systems fulfil an important gatekeeping function in social media governance, but their actions are poorly understood. Legal reforms are now underway in EU law to impose transparency rules on social media recommenders, and the goal of this dissertation is to interrogate the accountability relations implied by this regulatory project. What kinds of information is the law demanding about social media recommender systems? Who is included in these new models of accountability, and who is excluded?
This dissertation critiques a dominant paradigm in recent law and policy focused on algorithmic explanations. Building on insights from critical transparency studies and algorithm studies, it argues that disclosure regulation should move from algorithmic transparency toward platform observability: approaching recommenders not as discrete algorithmic artifacts but as complex sociotechnical systems shaped in important ways by their users and operators. Before any attempt to ‘open the black box’ of algorithmic machine learning, therefore, regulating for observability invites us to ask how recommenders find uptake in practice; to demand basic data on recommender inputs, outputs and interventions; to ask what is being recommended, sooner than why.
Several avenues for observability regulation are explored, including platform ad archives; notices for visibility restrictions (or ‘shadow bans’); and researcher APIs. Through solutions such as these, which render visible recommender outcomes, this dissertation outlines a vision for a more democratic media governance---one which supports informed and inclusive deliberation about, across and within social media’s personalised publics.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
@article{Leerssen2023b,
title = {An end to shadow banning? Transparency rights in the Digital Services Act between content moderation and curation},
author = {Leerssen, P.},
url = {https://www.ivir.nl/endtoshadowbanning/},
doi = {10.1016/j.clsr.2023.105790},
year = {2023},
date = {2023-04-11},
urldate = {2023-04-11},
journal = {Computer Law \& Security Review},
volume = {48},
abstract = {This paper offers a legal perspective on the phenomenon of shadow banning: content moderation sanctions which are undetectable to those affected. Drawing on recent social science research, it connects current concerns about shadow banning to novel visibility management techniques in content moderation, such as delisting and demotion. Conventional moderation techniques such as outright content removal or account suspension can be observed by those affected, but these new visibility remedies often cannot. This lends newfound significance to the legal question of moderation transparency rights. The EU Digital Services Act (DSA) is analysed in this light, as the first major legislation to regulate transparency of visibility remedies. In effect, its due process framework prohibits shadow banning with only limited exceptions. In doing so, the DSA surfaces tensions between two competing models for content moderation: as rule-bound administration or as adversarial security conflict. I discuss possible interpretations and trade-offs for this regime, and then turn to a more fundamental problem: how to define visibility reduction as a category of content moderation actions. The concept of visibility reduction or ‘demotion’ is central to both the shadow banning imaginary and to the DSA's safeguards, but its meaning is far from straightforward. Responding to claims that demotion is entirely relative, and therefore not actionable as a category of content moderation sanctions, I show how visibility reduction can still be regulated when defined as ex post adjustments to engagement-based relevance scores. Still, regulating demotion in this way will not cover all exercises of ranking power, since it manifests not only in individual cases of moderation but also through structural acts of content curation; not just by reducing visibility, but by producing visibility.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Leerssen2022,
title = {Annotatie bij Rb Noord-Holland 6 oktober 2021 (Kamerlid / LinkedIn Ierland \& LinkedIn Nederland)},
author = {Leerssen, P.},
url = {https://www.ivir.nl/annotatie_computerrecht_2022_97/},
year = {2022},
date = {2022-06-16},
journal = {Computerrecht},
number = {97},
issue = {3},
pages = {228-230},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Leerssen2021,
title = {Platform ad archives in Article 30 DSA},
author = {Leerssen, P.},
url = {https://dsa-observatory.eu/2021/05/25/platform-ad-archives-in-article-30-dsa/},
year = {2021},
date = {2021-05-25},
journal = {DSA Observatory blog},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@techreport{Cwajg2020,
title = {Transparency Rules in Online Political Advertising: Mapping Global Law and Policy},
author = {Menezes Cwajg, C. and Ausloos, J. and Leerssen, P.},
url = {https://www.ivir.nl/publicaties/download/TransparencyRulesOnlinePoliticalAds2020.pdf},
year = {2020},
date = {2020-10-13},
abstract = {In response to the rise of online political microtargeting, governments across the globe are launching transparency initiatives. Most of these aim to shed light on who is buying targeted political ads, and how they are targeted. The present Report offers a comprehensive mapping exercise of this new field of regulation, analysing new laws, proposed or enacted, that impose transparency rules on online political microtargeting.
The Report consists of two components: a global overview, and a detailed case study of the United States. The first section begins with a geographical overview, showing where and what initiatives were proposed and enacted, looking in particular at Canada, France, Ireland, Singapore and the United States. It then unpacks these initiatives in greater detail by outlining what requirements they impose in terms of disclosure content, scope of application, and format. The second section of the Report then zooms in on the United States, outlining the various initiatives that have been proposed and enacted at state level.},
note = {This report has been prepared by Carolina Menezes Cwajg. It was written under the academic guidance of Dr. Jef Ausloos and Paddy Leerssen, at IViR and the Information, Communication \& the Data Society (ICDS) Initiative at the University of Amsterdam.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
@techreport{Ausloos2020b,
title = {Operationalizing Research Access in Platform Governance: What to learn from other industries?},
author = {Ausloos, J. and Leerssen, P. and Thije, P. ten},
url = {https://www.ivir.nl/publicaties/download/GoverningPlatforms_IViR_study_June2020-AlgorithmWatch-2020-06-24.pdf},
year = {2020},
date = {2020-06-25},
abstract = {A new study published by AlgorithmWatch, in cooperation with the European Policy Centre and the University of Amsterdam’s Institute for Information Law, shows that the GDPR needn’t stand in the way of meaningful research access to platform data, and looks to the health and environmental sectors for best practices in privacy-respecting data sharing frameworks.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
@techreport{vanHoboken2020b,
title = {Het juridisch kader voor de verspreiding van desinformatie via internetdiensten en de regulering van politieke advertenties},
author = {van Hoboken, J. and Appelman, N. and Fahy, R. and Leerssen, P. and McGonagle, T. and van Eijk, N. and Helberger, N.},
url = {https://www.ivir.nl/publicaties/download/Rapport_desinformatie_december2019.pdf
https://www.ivir.nl/publicaties/download/Kamerbrief_desinformatie.pdf},
year = {2020},
date = {2020-05-14},
abstract = {The study, commissioned by the Ministry of the Interior and Kingdom Relations, analyses the legal framework applicable to the dissemination of disinformation through online services. The report provides an extensive overview of the relevant European and Dutch norms and makes recommendations for improving this legal framework. It also includes an analysis of the relevant legal frameworks in the U.S., the U.K., France, Germany, Canada and Sweden.
The report shows how freedom of expression runs as a common thread through the legal framework. This fundamental right forms both the outer limit for regulation and a basis for new measures, for example to protect pluralism. The legal framework applicable to disinformation proves to be very broad: it comprises several levels of regulation, shifts depending on the specific context, and includes many existing norms regulating specific types of disinformation. Oversight of this framework, moreover, is rather fragmented. On the basis of this analysis, the report arrives at a number of recommendations, concerning among other things the use of disinformation as a policy term, the handling of tensions between the various policy levels, the regulation of internet intermediaries through transparency obligations, and cooperation between the various supervisory authorities.
This final report was preceded by an interim report, published in late 2019, which focused on the relationship between disinformation and online political advertising. Both studies are part of the research project ‘Digital Transition of Decision-Making’ at the Faculty of Law of the University of Amsterdam, which addresses questions related to artificial intelligence and public values, data governance, and online platforms.},
note = {Report for the Ministry of the Interior and Kingdom Relations, Amsterdam, December 2019},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
@techreport{vanHoboken2020c,
title = {The legal framework on the dissemination of disinformation through Internet services and the regulation of political advertising},
author = {van Hoboken, J. and Appelman, N. and Fahy, R. and Leerssen, P. and McGonagle, T. and van Eijk, N. and Helberger, N.},
url = {https://www.ivir.nl/publicaties/download/Report_Disinformation_Dec2019-1.pdf},
year = {2020},
date = {2020-05-14},
abstract = {The study, commissioned by the Dutch government, focusses on the legal framework governing the dissemination of disinformation, in particular through Internet services. The study provides an extensive overview of relevant European and Dutch legal norms relating to the spread of online disinformation, and recommendations are given on how to improve this framework. Additionally, the study includes an analysis of the relevant legal framework in 6 different countries (U.K., U.S., France, Germany, Sweden and Canada).
The report makes clear how freedom of expression runs as a central theme through the legal framework, both forming the outer limit of possible regulation and providing a legal basis for creating new regulation (e.g. protecting pluralism). The legal framework governing online disinformation is shown to be very broad, encompassing different levels of regulation, shifting depending on the context, and already covering many different types of disinformation. Furthermore, oversight appears fragmented, with many different supervisory authorities involved but limited cooperation. Based on this analysis, the report offers several recommendations, such as using disinformation as a policy term rather than a legal term, negotiating the tensions between the different policy levels, regulating internet intermediaries (including through transparency obligations), and increasing cooperation between the relevant supervisory authorities.
Previously, the interim report focussing on political advertising was published in late 2019. Both these studies have been carried out in the context of the research initiative on the Digital Transition of Decision-Making at the Faculty of Law of the University of Amsterdam, focussing on questions related to AI and public values, data governance and online platforms.},
note = {A report for the Ministry of the Interior and Kingdom Relations, Amsterdam, December 2019},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
@techreport{vanHoboken2019c,
title = {De verspreiding van desinformatie via internetdiensten en de regulering van politieke advertenties},
author = {van Hoboken, J. and Appelman, N. and Fahy, R. and Leerssen, P. and McGonagle, T. and van Eijk, N. and Helberger, N.},
url = {https://www.ivir.nl/publicaties/download/verspreiding_desinformatie_internetdiensten_tussenrapportage.pdf},
year = {2019},
date = {2019-10-31},
abstract = {Report commissioned by the Ministry of the Interior and Kingdom Relations; annex to Kamerstuk 2019-2020, 30821, nr. 91, Tweede Kamer.},
note = {Interim report, October 2019},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
@article{Leerssen2019b,
title = {Platform ad archives: promises and pitfalls},
author = {Leerssen, P. and Ausloos, J. and Zarouali, B. and Helberger, N. and Vreese, C.H. de},
url = {https://policyreview.info/articles/analysis/platform-ad-archives-promises-and-pitfalls},
doi = {10.14763/2019.4.1421},
year = {2019},
date = {2019-10-10},
journal = {Internet Policy Review},
volume = {8},
number = {4},
abstract = {This paper discusses the new phenomenon of platform ad archives. Over the past year, leading social media platforms have installed publicly accessible databases documenting their political advertisements, and several countries have moved to regulate them. If designed and implemented properly, ad archives can correct for structural informational asymmetries in the online advertising industry, and thereby improve accountability through litigation and through publicity. However, present implementations leave much to be desired. We discuss key criticisms, suggest several improvements and identify areas for future research and debate.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Helberger2019,
title = {Germany proposes Europe's first diversity rules for social media platforms},
author = {Helberger, N. and Leerssen, P. and Drunen, M. van},
url = {https://blogs.lse.ac.uk/mediapolicyproject/2019/05/29/germany-proposes-europes-first-diversity-rules-for-social-media-platforms/},
year = {2019},
date = {2019-06-06},
journal = {LSE Media Policy Project Blog},
volume = {2019},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Tworek2019,
title = {An Analysis of Germany's NetzDG Law},
author = {Tworek, H. and Leerssen, P.},
url = {https://www.ivir.nl/publicaties/download/NetzDG_Tworek_Leerssen_April_2019.pdf},
year = {2019},
date = {2019-04-18},
note = {First working paper of the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@techreport{McGonagle2016c,
title = {Compilation of selected best practices for the implementation of Recommendation CM/Rec(2016)4 \& Proposals for further follow-up activities},
author = {McGonagle, T. and Leerssen, P. and Maalderink, R.},
url = {https://rm.coe.int/selected-best-practices-for-implementation-of-cm-rec-2016-4/1680726c3b},
year = {2016},
date = {2016-08-11},
abstract = {Report commissioned by the Council of Europe, 11 August 2016, 24 pp.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
@article{Leerssen2019,
title = {Cut Out By The Middle Man: The Free Speech Implications Of Social Network Blocking and Banning In The EU},
author = {Leerssen, P.},
url = {https://www.jipitec.eu/issues/jipitec-6-2-2015/4271/leerssen.pdf},
year = {2015},
date = {2015-10-10},
journal = {JIPITEC},
volume = {6},
number = {2},
abstract = {This article examines social network users’ legal defences against content removal under the EU and ECHR frameworks, and their implications for the effective exercise of free speech online. A review of the Terms of Use and content moderation policies of two major social network services, Facebook and Twitter, shows that end users are unlikely to have a contractual defence against content removal. Under the EU and ECHR frameworks, they may demand the observance of free speech principles in state-issued blocking orders and their implementation by intermediaries, but cannot invoke this ‘fair balance’ test against the voluntary removal decisions by the social network service. Drawing on practical examples, this article explores the threat to free speech created by this lack of accountability: Firstly, a shift from legislative regulation and formal injunctions to public-private collaborations allows state authorities to influence these ostensibly voluntary policies, thereby circumventing constitutional safeguards. Secondly, even absent state interference, the commercial incentives of social media cannot be guaranteed to coincide with democratic ideals. In light of the blurring of public and private functions in the regulation of social media expression, this article calls for the increased accountability of the social media services towards end users regarding the observance of free speech principles.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}