Fundamental Rights in Out-of-Court Dispute Settlement under the Digital Services Act

Ruschemeier, H. & Quintais, J.
2025

Abstract

This paper argues that certified out-of-court dispute settlement (ODS) bodies under Article 21 of the Digital Services Act (DSA) should apply a structured fundamental rights review to platform content moderation, operationalised through the concept of case salience. Situating ODS within the DSA's broader regulatory architecture, particularly Articles 14(4), 17, and 20, the paper contends that Article 21 provides the procedural complement to Article 14(4)'s substantive duty to enforce terms of service "diligently, objectively and proportionately, with due regard to fundamental rights." Rather than extending the Charter of Fundamental Rights of the European Union (CFR) horizontally in a direct sense, ODS bodies give effect to Charter-conforming statutory obligations owed by platforms, interpreted in light of Article 52(1) CFR. Drawing on jurisprudence from the Court of Justice of the European Union (CJEU), the European Court of Human Rights (ECtHR), and national courts, the paper shows how freedom of expression and information interacts with countervailing rights, such as the freedom to conduct a business, privacy and data protection, and human dignity, in the context of online moderation. It proposes an intensity-of-review model: a deeper, merits-based proportionality analysis for high-impact cases (e.g. political speech, account suspensions, issues of systemic relevance), and a lighter, procedural-sufficiency check for routine disputes. The paper emphasises that ODS remains non-judicial and operates without prejudice to Article 47 CFR and the availability of national court remedies. Over time, reasoned ODS decisions could evolve into a body of soft law, enhancing consistency and transparency in platform accountability. Ultimately, ODS bodies under the DSA represent a novel experiment in multi-actor rights protection. Their success will depend on whether they can reconcile accessibility, efficiency, and rights-based rigour, ensuring that content moderation in Europe evolves in line with the constitutional values of the Charter.

Content moderation, Digital Services Act (DSA), Fundamental rights

Trading nuance for scale? Platform observability and content governance under the DSA

Papaevangelou, C. & Votta, F.
Internet Policy Review, vol. 14, iss. : 3, 2025

Abstract

The Digital Services Act (DSA) marks a paradigmatic shift in platform governance, introducing mechanisms like the Statements of Reasons (SoRs) database to foster transparency and observability of platforms’ content moderation practices. This study investigates the DSA Transparency Database as a regulatory mechanism for enabling observability, focusing on the automation and territorial application of content moderation across the EU/EEA. By analysing 439 million SoRs from eight Very Large Online Platforms (VLOPs), we find that the vast majority of content moderation decisions are enforced automatically and uniformly across the EU/EEA. We also identify significant discrepancies in content moderation strategies across VLOPs, with TikTok, YouTube and X exhibiting the most distinct practices, which are further analysed in the paper. Our findings reveal a strong correlation between automation and the speed of content moderation, as well as between automation and the territorial scope of decisions. We also highlight several limitations of the database, notably the lack of language-specific data and inconsistencies in how SoRs are reported by VLOPs. We conclude that despite such shortcomings, the DSA and its Transparency Database may enable a wider constellation of stakeholders to participate in platform governance, paving the way for more meaningful platform observability.

Content moderation, Digital Services Act (DSA), platform governance, Transparency

Trust and Safety: What’s in a name?

Abstract

Trust and Safety (T&S) teams often carry a vision that sets them apart from other units within the tech industry. Using Giddens' structuration theory and Kroeger's take on facework as a guiding lens, we try to understand whether T&S can serve as a bridge between platform logic and public interest, between self-regulation and state regulation, and between harm mitigation and accountability. Drawing on insights from semi-structured interviews with T&S professionals, we arrive at two main observations. First, institutional "facework" is largely absent in practice: T&S staff lack the visibility, resources, and authority to enact their role meaningfully. Second, many companies are deprioritizing T&S. If taken seriously, however, T&S must be embedded in product design, business models, and institutional accountability. If the focus of these departments becomes performative legal compliance and the outsourcing of activities to offshore locations and machines, an opportunity to protect users and democratic discourse may be lost.

Content moderation, governance, Platforms, Social media, trust

Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus

Abstract

In recent years, social media services have emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they play a central role in shaping what their users may know and believe in, and what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of various tools, such as AI, has brought further complexity to how these societal conversations are conducted online. On the one hand, platforms have come to rely heavily on automated tools and algorithmic agents to identify various forms of speech, some of it flagged for further human review, the rest filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems produces a flood of new speech on social media platforms. Content moderation and filtering, one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible and most readily understandable activity that could protect users from the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack, are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what the discourse reveals to us, and we use this knowledge as power to produce the world around us and render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which have reduced the cost of information to zero, challenges the old institutions, rules, procedures, discourses, and subsequent knowledge and power structures. In this paper, we first explore the practical realities of content moderation based on an expert interview study with Trust and Safety professionals and a supporting document analysis of data published through the DSA Transparency Database. We then reconstruct these empirical insights as an analytical model, a discourse apparatus stack, within the Foucauldian framework. This helps to identify the real systemic challenges that content moderation faces but fails to address.

Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust

Generative AI, Copyright and the AI Act

Computer Law & Security Review, vol. 56, no. 106107, 2025

Abstract

This paper provides a critical analysis of the Artificial Intelligence (AI) Act's implications for the European Union (EU) copyright acquis, aiming to clarify the complex relationship between AI regulation and copyright law while identifying areas of legal ambiguity and gaps that may influence future policymaking. The discussion begins with an overview of fundamental copyright concerns related to generative AI, focusing on issues that arise during the input, model, and output stages, and how these concerns intersect with the text and data mining (TDM) exceptions under the Copyright in the Digital Single Market Directive (CDSMD). The paper then explores the AI Act's structure and key definitions relevant to copyright law. The core analysis addresses the AI Act's impact on copyright, including the role of TDM in AI model training, the copyright obligations imposed by the Act, requirements for respecting copyright law—particularly TDM opt-outs—and the extraterritorial implications of these provisions. It also examines transparency obligations, compliance mechanisms, and the enforcement framework. The paper further critiques the current regime's inadequacies, particularly concerning the fair remuneration of creators, and evaluates potential improvements such as collective licensing and bargaining. It also assesses legislative reform proposals, such as statutory licensing and AI output levies, and concludes with reflections on future directions for integrating AI governance with copyright protection.

AI Act, Content moderation, Copyright, Digital Services Act (DSA), Generative AI, Text and Data Mining (TDM), Transparency

“Must-carry”, Special Treatment and Freedom of Expression on Online Platforms: A European Story

Kuczerawy, A. & Quintais, J.
2024

Abstract

This paper examines the role of "must-carry" obligations in the regulation of online platforms, arguing that these obligations are better understood as special treatment rules rather than direct analogues of traditional broadcasting regulation. By analysing the development of such rules within the European Union, particularly through the Digital Services Act (DSA) and the European Media Freedom Act (EMFA), the paper explores how these provisions aim to safeguard freedom of expression, ensure access to trustworthy information, enhance media pluralism, and regulate platform behaviour. The analysis extends to national-level laws and court decisions in Germany, the Netherlands, the United Kingdom, and Poland, illustrating how these countries have grappled with similar challenges in applying and contextualizing special treatment rules. Through a detailed examination of these frameworks, the paper critiques the risks of these rules, including their potential to entrench power imbalances, amplify state narratives, and complicate efforts to counter disinformation. Additionally, the paper highlights the broader implications of granting privileged status to legacy media and political actors, questioning whether such measures align with democratic principles and the rule of law. Ultimately, the paper argues that while these rules may offer a response to platform dominance, their implementation risks undermining the equality of speech and shifting the focus of freedom of expression toward a privilege for select groups. The paper is currently under peer review, so please contact the authors for a copy of the preprint. We'll upload it again once the review is complete.

Content moderation, Digital Services Act (DSA), EU law, European Media Freedom Act, must carry, platform regulation

What can a media privilege look like? Unpacking three versions in the EMFA

Journal of Media Law, 2024

Abstract

The media privilege has been one of the most controversial aspects of the proposed European Media Freedom Act (EMFA). However, it is important not to assess the drawbacks of the media privilege in isolation, but in relation to the other available alternatives. In this comment, we lay out and critique how the European Parliament and Council build on the Commission’s proposal for a media privilege in the EMFA. We focus on three key questions: how is media content treated differently, who qualifies as media, and who decides who qualifies as media?

Content moderation, Media law, Platforms

How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization

Senftleben, M., Quintais, J. & Meiring, A.
Berkeley Technology Law Journal, vol. 38, iss. 3, pp. 933-1010, 2024

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content

Copyright Content Moderation in the European Union: State of the Art, Ways Forward and Policy Recommendations

Quintais, J., Katzenbach, C., Schwemer, S., Dergacheva, D., Riis, T., Mezei, P., Harkai, I. & Magalhães, J.C.
IIC, vol. 55, pp. 157-177, 2024

Abstract

This Opinion describes and summarises the results of the interdisciplinary research carried out by the authors during the course of a three-year project on intermediaries’ practices regarding copyright content moderation. This research includes the mapping of the EU legal framework and intermediaries’ practices regarding copyright content moderation, the evaluation and measuring of the impact of moderation practices and technologies on access and diversity, and a set of policy recommendations. Our recommendations touch on the following topics: the definition of “online content-sharing service provider”; the recognition and operationalisation of user rights; the complementary nature of complaint and redress safeguards; the scope of permissible preventive filtering; the clarification of the relationship between Art. 17 of the new Copyright Directive and the Digital Services Act; monetisation and restrictive content moderation actions; recommender systems and copyright content moderation; transparency and data access for researchers; trade secret protection and transparency of content moderation systems; the relationship between the copyright acquis, the Digital Services Act and the upcoming Artificial Intelligence Act; and human competences in copyright content moderation.

Content moderation, Copyright, Digital Services Act (DSA), Digital Single Market, intermediaries, Platforms

Using Terms and Conditions to apply Fundamental Rights to Content Moderation

German Law Journal, 2023

Abstract

Large online platforms provide an unprecedented means for exercising freedom of expression online and wield enormous power over public participation in the online democratic space. However, it is increasingly clear that their systems, where (automated) content moderation decisions are taken based on a platform's terms and conditions (T&Cs), are fundamentally broken. Content moderation systems have been said to undermine freedom of expression, especially where important public interest speech ends up suppressed, such as speech by minority and marginalized groups. Indeed, these content moderation systems have been criticized for their overly vague rules of operation, inconsistent enforcement, and an overdependence on automation. Therefore, in order to better protect freedom of expression online, international human rights bodies and civil society organizations have argued that platforms "should incorporate directly" principles of fundamental rights law into their T&Cs. Under EU law, and apart from a rule in the Terrorist Content Regulation, platforms had until recently no explicit obligation to incorporate fundamental rights into their T&Cs. However, an important provision in the Digital Services Act (DSA) will change this. Crucially, Article 14 DSA lays down new rules on how platforms can enforce their T&Cs, including that platforms must have "due regard" to the "fundamental rights" of users under the EU Charter of Fundamental Rights. In this article, we critically examine the topic of enforceability of fundamental rights via T&Cs through the prism of Article 14 DSA. We ask whether this provision requires platforms to apply EU fundamental rights law and to what extent this may curb the power of Big Tech over online speech. We conclude that Article 14 will make it possible, in principle, to establish the indirect horizontal effect of fundamental rights in the relationship between online platforms and their users. But in order for the application and enforcement of T&Cs to take due regard of fundamental rights, Article 14 must be operationalized within the framework of the international and European fundamental rights standards. If this is possible, Article 14 may fulfil its revolutionary potential.

Content moderation, Digital Services Act (DSA), Freedom of expression, Online platforms, platform regulation, terms and conditions
