Trading nuance for scale? Platform observability and content governance under the DSA

Papaevangelou, C. & Votta, F.
Internet Policy Review, vol. 14, iss. 3, 2025

Abstract

The Digital Services Act (DSA) marks a paradigmatic shift in platform governance, introducing mechanisms like the Statements of Reasons (SoRs) database to foster transparency and observability of platforms’ content moderation practices. This study investigates the DSA Transparency Database as a regulatory mechanism for enabling observability, focusing on the automation and territorial application of content moderation across the EU/EEA. By analysing 439 million SoRs from eight Very Large Online Platforms (VLOPs), we find that the vast majority of content moderation decisions are enforced automatically and uniformly across the EU/EEA. We also identify significant discrepancies in content moderation strategies across VLOPs, with TikTok, YouTube and X exhibiting the most distinct practices, which are further analysed in the paper. Our findings reveal strong correlations between automation and the speed of content moderation, and between automation and the territorial scope of decisions. We also highlight several limitations of the database, notably the lack of language-specific data and inconsistencies in how SoRs are reported by VLOPs. We conclude that despite such shortcomings, the DSA and its Transparency Database may enable a wider constellation of stakeholders to participate in platform governance, paving the way for more meaningful platform observability.

Content moderation, Digital Services Act (DSA), platform governance, Transparency

Procedural Justice and Judicial AI: Substantiating Explainability Rights with the Values of Contestation

Metikoš, L. & Domselaar, I. van
2025

Abstract

The advent of opaque assistive AI in courtrooms has raised concerns about the contestability of these systems and their impact on procedural justice. The right to an explanation under the GDPR and the AI Act could address the inscrutability of judicial AI for litigants. To substantiate this right in the domain of justice, we examine utilitarian, rights-based (including dignitarian and Dworkinian approaches), and relational theories of procedural justice. These theories reveal diverse perspectives on contestation that can help shape explainability rights in the context of judicial AI, and they respectively highlight different values of litigant contestation: it has instrumental value in error correction, and intrinsic value in respecting litigants' dignity, either as rational autonomous agents or as socio-relational beings. These insights help us answer three central and practical questions on how the right to an explanation should be operationalized to enable litigant contestation: should explanations be general or specific; to what extent do explanations need to be faithful to the system's actual behavior, or is a plausible approximation enough; and should more interpretable systems be used, even at the cost of accuracy? These questions are not strictly legal or technical in nature, but also rely on normative considerations. The practical operationalization of explainability will therefore differ between different valuations of litigant contestation of judicial AI.

Artificial intelligence, digital justice, Transparency

Online Behavioural Advertising, Consumer Empowerment and Fair Competition: Are the DSA Transparency Obligations the Right Answer?

Izyumenko, E., Senftleben, M., Schutte, N., Smit, E.G., Noort, G. van & Velzen, L. van
Journal of European Consumer and Market Law (EuCML), vol. 14, iss. 2, pp. 46-59, 2025

Competition law, Consumer law, Digital Services Act (DSA), online behavioural advertising, Transparency

Generative AI, Copyright and the AI Act

Computer Law & Security Review, vol. 56, art. no. 106107, 2025

Abstract

This paper provides a critical analysis of the Artificial Intelligence (AI) Act's implications for the European Union (EU) copyright acquis, aiming to clarify the complex relationship between AI regulation and copyright law while identifying areas of legal ambiguity and gaps that may influence future policymaking. The discussion begins with an overview of fundamental copyright concerns related to generative AI, focusing on issues that arise during the input, model, and output stages, and how these concerns intersect with the text and data mining (TDM) exceptions under the Copyright in the Digital Single Market Directive (CDSMD). The paper then explores the AI Act's structure and key definitions relevant to copyright law. The core analysis addresses the AI Act's impact on copyright, including the role of TDM in AI model training, the copyright obligations imposed by the Act, requirements for respecting copyright law—particularly TDM opt-outs—and the extraterritorial implications of these provisions. It also examines transparency obligations, compliance mechanisms, and the enforcement framework. The paper further critiques the current regime's inadequacies, particularly concerning the fair remuneration of creators, and evaluates potential improvements such as collective licensing and bargaining. It also assesses legislative reform proposals, such as statutory licensing and AI output levies, and concludes with reflections on future directions for integrating AI governance with copyright protection.

AI Act, Content moderation, Copyright, Digital Services Act (DSA), Generative AI, Text and Data Mining (TDM), Transparency

How to design data access for researchers: A legal and software development perspective

Drunen, M. van & Noroozian, A.
Computer Law & Security Review, vol. 52, 2024

Abstract

Public scrutiny of platforms has been limited by a lack of transparency. In response, EU law increasingly requires platforms to provide data to researchers. The Digital Services Act and the proposed Regulation on the Transparency and Targeting of Political Advertising in particular require platforms to provide access to data through ad libraries and in response to data access requests. However, these obligations leave platforms considerable discretion to determine how access to data is provided. As the history of platforms’ self-regulated data access projects shows, the technical choices involved in designing data access significantly affect how researchers can use the provided data to scrutinise platforms. Ignoring the way data access is designed therefore creates a danger that platforms’ ability to limit research into their services simply shifts from controlling what data is available to researchers to controlling how data access is provided. This article explores how the Digital Services Act and the proposed Political Advertising Regulation should be used to control the operationalisation of data access obligations that enable researchers to scrutinise platforms. It argues that the operationalisation of data access regimes should not only be seen as a legal problem, but also as a software design problem. To that end, it explores how software development principles may inform the operationalisation of data access obligations. The article closes by exploring the legal mechanisms available in the Digital Services Act and the proposed Political Advertising Regulation to exercise control over the design of data access regimes, and makes five recommendations for ways in which these mechanisms should be used to enable research into platforms.

data access, Digital Services Act (DSA), Platforms, Transparency

Personal Data Stores and the GDPR’s lawful grounds for processing personal data

Janssen, H., Cobbe, J., Norval, C. & Singh, J.
2019

Abstract

Personal Data Stores (‘PDSs’) entail users having a (physical or virtual) device within which they themselves can, in theory, capture, aggregate, and control the access to and the transfer of personal data. Their aim is to empower users in relation to their personal data, strengthening their opportunities for data protection and privacy, and/or facilitating trade and monetisation. As PDS technologies develop, it is important to consider their role in relation to issues of data protection. The General Data Protection Regulation requires that the processing of user data be predicated on one of its defined lawful bases; the Regulation does not favour any one basis over another. We explore how PDS architectures relate to these lawful bases, and observe that they tend to favour the bases that require direct user involvement. This paper considers issues that the envisaged architectural choices surrounding the lawful grounds may entail.

Data protection, decentralisation, lawful grounds for processing, personal data stores, Privacy, Transparency

Shielding citizens? Understanding the impact of political advertisement transparency information

Dobber, T., Kruikemeier, S., Helberger, N. & Goodman, E.
New Media & Society, 2023

Abstract

Online targeted advertising leverages an information asymmetry between the advertiser and the recipient. Policymakers in the European Union and the United States aim to decrease this asymmetry by requiring transparency information alongside political advertisements, in the hope of activating citizens’ persuasion knowledge. However, the proposed regulations all present different directions with regard to the required content of transparency information. Consequently, not all proposed interventions will be (equally) effective. Moreover, there is a chance that transparency information has additional consequences, such as increasing privacy concerns or decreasing advertising effectiveness. Using an online experiment (N = 1331), this study addresses these challenges and finds that two regulatory interventions (DSA and HAA) increase persuasion knowledge, while the chance of raising privacy concerns or lowering advertisement effectiveness is present but slim. Results suggest transparency information interventions have some promise, but at the same time underline the limitations of user-facing transparency interventions.

information disclosures, online advertising, persuasion knowledge, political attitudes, Privacy, Transparency

An end to shadow banning? Transparency rights in the Digital Services Act between content moderation and curation

Computer Law & Security Review, vol. 48, 2023

Abstract

This paper offers a legal perspective on the phenomenon of shadow banning: content moderation sanctions which are undetectable to those affected. Drawing on recent social science research, it connects current concerns about shadow banning to novel visibility management techniques in content moderation, such as delisting and demotion. Conventional moderation techniques such as outright content removal or account suspension can be observed by those affected, but these new visibility remedies often cannot. This lends newfound significance to the legal question of moderation transparency rights. The EU Digital Services Act (DSA) is analysed in this light, as the first major legislation to regulate transparency of visibility remedies. In effect, its due process framework prohibits shadow banning with only limited exceptions. In doing so, the DSA surfaces tensions between two competing models for content moderation: as rule-bound administration or as adversarial security conflict. I discuss possible interpretations and trade-offs for this regime, and then turn to a more fundamental problem: how to define visibility reduction as a category of content moderation actions. The concept of visibility reduction or ‘demotions’ is central to both the shadow banning imaginary and to the DSA's safeguards, but its meaning is far from straightforward. Responding to claims that demotion is entirely relative, and therefore not actionable as a category of content moderation sanctions, I show how visibility reduction can still be regulated when defined as ex post adjustments to engagement-based relevance scores. Still, regulating demotion in this way will not cover all exercises of ranking power, since it manifests not only in individual cases of moderation but also through structural acts of content curation; not just by reducing visibility, but by producing visibility.

content curation, Content moderation, Digital Services Act (DSA), Online platforms, Transparency

Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer

2022

Abstract

Free trade agreements are increasingly used to construct an additional layer of protection for source code of software. This comes in the shape of a new prohibition for governments to require access to, or transfer of, source code of software, subject to certain exceptions. A clause on software source code is also part and parcel of an ambitious set of new rules on trade-related aspects of electronic commerce currently negotiated by 86 members of the World Trade Organization. Our understanding to date of how such a commitment inside trade law impacts on governments’ right to regulate digital technologies, and of the policy space that is allowed under trade law, is limited. Access to software source code is, for example, necessary to meet regulatory and judicial needs in order to ensure that digital technologies are in conformity with individuals’ human rights and societal values. This article will analyze the implications of such a source code clause for current and future digital policies by governments that aim to ensure transparency, fairness and accountability of computer and machine learning algorithms.

accountability, algorithms, application programming interfaces, auditability, Digital trade, fairness, source code, Transparency

Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows

Big Data and Global Trade Law, Cambridge University Press, 2021

Abstract

Human rights do remain valid currency in how we approach planetary-scale computation and accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which are unravelling faster than fitting transnational governance institutions can be reconstructed. The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how respect for human rights ties in with national constitutionalism, which is increasingly challenged by the transnational dynamic of digital-era transactions. Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.

Artificial intelligence, EU law, Human rights, Transparency, WTO law
