Comparing the Right to an Explanation of Judicial AI by Function: Studies on the EU, Brazil, and China

Metikoš, L., Iglesias Keller, C., Qiao, C. & Helberger, N.
pp: 31, 2025

Abstract

Courts across the world are increasingly adopting AI to automate various tasks. But the opacity of judicial AI systems can hinder the ability of litigants to contest vital pieces of evidence and legal observations. One proposed remedy for the inscrutability of judicial AI has been the right to an explanation. This paper provides an analysis of the scope and contents of a right to an explanation of judicial AI in the EU, Brazil, and China. We argue that such a right needs to take into account that judicial AI can perform widely different functions. We provide a classification of these functions, ranging from ancillary to impactful tasks. We subsequently compare, by function, how judicial AI would need to be explained under due process standards, data protection law, and AI regulation in the EU, Brazil, and China. We find that due process standards provide a broad normative basis for a derived right to an explanation. But these standards do not sufficiently clarify the scope and content of such a right. Data protection law and AI regulations contain more explicitly formulated rights to an explanation that also apply to certain judicial AI systems. Nevertheless, they often exclude impactful functions of judicial AI from their scope. Within these laws there is also a lack of guidance as to what explainability substantively entails. Ultimately, this patchwork of legal frameworks suggests that the protection of litigant contestation is still incomplete.

Artificial intelligence, digital justice, right to an explanation


Article 3: The Untapped Legal Basis for Europe’s Public AI Ambitions

Kluwer Copyright Blog, 2025

Artificial intelligence, CDSM Directive, Copyright, exceptions and limitations, Text and Data Mining (TDM)


A Procedural Sedative: The GDPR’s Right to an Explanation

Data, Cybersecurity and Privacy (DCSP), iss. 18-19, pp: 24-26, 2025

Abstract

What remedies do you have when AI errs, discriminates, or harms you in some other way? How can we hold organizations accountable when they cause people harm during the development, distribution, or use of AI? Arguably, the first step is understanding how the system in question works. To this end, the right to an explanation, provided in EU law under the GDPR and the AI Act, is one of the most important remedies individuals have to contest AI.

AI Act, Artificial intelligence, GDPR


Win-Win: How to Remove Copyright Obstacles to AI Training While Ensuring Author Remuneration (and Why the AI Act Fails to do the Magic)

Chicago-Kent Law Review, vol. 100, iss. 1, pp: 7-55, 2025

Abstract

In the debate on AI training and copyright, the focus is often on the use of protected works during the AI training phase (input perspective). To reconcile training objectives with authors' fair remuneration interest, however, it is advisable to adopt an output perspective and focus on literary and artistic productions generated by fully-trained AI systems that are offered in the marketplace. By implementing output-based remuneration systems, lawmakers can establish a legal framework that supports the development of unbiased, high-quality AI models while, at the same time, ensuring that authors receive fair remuneration for the use of literary and artistic works for AI training purposes – remuneration that softens displacement effects in the market for literary and artistic creations, where human authors face a shrinking market share and loss of income. Instead of imposing payment obligations and administrative burdens on AI developers during the AI training phase, output-based remuneration systems offer the chance of giving AI trainers far-reaching freedom. Without exposing AI developers to heavy administrative and financial burdens, lawmakers can permit the use of the full spectrum of human literary and artistic resources. Once fully developed AI systems are brought to the market, however, providers of these systems are obliged to compensate authors for the unbridled freedom to use human creations during the AI training phase and for displacement effects caused by AI systems that are capable of mimicking human literary and artistic works. As the analysis shows, the input-based remuneration approach in the EU – with rights reservations and complex transparency rules blocking access to AI training resources – is likely to reduce the attractiveness of the EU as a region for AI development.
Moreover, the regulatory barriers posed by EU copyright law and the AI Act may marginalize the messages and values conveyed by European cultural expressions in AI training datasets and AI output. Considering the legal and practical difficulties resulting from the EU approach, lawmakers in other regions should refrain from following the EU model. As an alternative, they should explore output-based remuneration mechanisms. In contrast to the burdensome EU system that requires the payment of remuneration for access to human AI training resources, an output-based approach does not weaken the position of the domestic high-tech sector: AI developers are free to use human creations as training material. Once fully developed AI systems are offered in the marketplace, all providers of AI systems capable of producing literary and artistic output are subject to the same payment obligation and remuneration scheme – regardless of whether they are local or foreign companies. The advantages of this alternative approach are evident. By offering broad freedom to use human creations for AI training, an output-based approach is conducive to AI development. It also removes the risk of marginalizing the messages and values conveyed by a country’s literary and artistic expressions.

Artificial intelligence, Copyright, remuneration


Copyright and the Expression Engine: Idea and Expression in AI-Assisted Creations

Chicago-Kent Law Review, vol. 100, iss. 1, pp: 251-264, 2025

Artificial intelligence, Copyright


Public consultation on the legislative proposal for a Cloud and AI Development Act

2025

Abstract

I welcome the opportunity to respond to the European Commission's public consultation on the proposed Cloud and AI Development Act. I make this submission in my capacity as an academic with expertise in European Union (EU) data centre sustainability policy. In my submission I address the following issues: 1. the EU's sustainability commitments in relation to digitalisation and, in particular, data centres; 2. the requirements for data centre sector sustainability; and 3. the case for public value-oriented digital infrastructure.

Artificial intelligence


A nightmare to control: Legal and organizational challenges around the procurement of journalistic AI from external technology providers

Piasecki, S. & Helberger, N.
The Information Society, vol. 41, iss. 3, pp: 173-194, 2025

Abstract

Little research has explored the process of procuring AI systems in the media from the perspective of contractual terms and conditions. Its importance is underscored by the emerging regulatory framework coming from Brussels that embraces private ordering through mechanisms such as negotiations, instructions, and standardization. This article addresses the following research questions: How are journalistic AI procurement processes perceived by professionals? What are the practical and legal obstacles experienced in negotiating procurement conditions? How can media organizations' contractual negotiation power be improved? The study is grounded in 12 semi-structured interviews with members of media organizations (lawyers, technologists, and managers) and an analysis of 16 terms and conditions of companies providing AI systems. Based on its findings, it proposes a contractual counter-power for (especially smaller and local) media actors interested in using journalistic AI.

Artificial intelligence, Journalism


Editorial: Interdisciplinary Perspectives on the (Un)fairness of Artificial Intelligence

Starke, C., Blanke, T., Helberger, N., Smets, S. & Vreese, C.H. de
Minds and Machines, vol. 35, num: 22, 2025

Artificial intelligence


Dangerous Criminals and Beautiful Prostitutes? Investigating Harmful Representations in Dutch Language Models

Lin, Z., Trogrlić, G., Vreese, C.H. de & Helberger, N.
FAccT '25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, pp: 1005-1014, 2025

Abstract

While language-based AI is becoming increasingly popular, ensuring that these systems are socially responsible is essential. Despite their growing impact, large language models (LLMs), the engines of many language-driven applications, remain largely a black box. Concerns about LLMs reinforcing harmful representations are shared by academia, industry, and the public. In professional contexts, researchers rely on LLMs for computational tasks such as text classification and contextual prediction, during which the risk of perpetuating biases cannot be overlooked. In a broader society where LLM-powered tools are widely accessible, interacting with biased models can shape public perceptions and behaviors, potentially reinforcing problematic social issues over time. This study investigates harmful representations in LLMs, focusing on ethnicity and gender in the Dutch context. Through template-based sentence construction and model probing, we identified potentially harmful representations using both automated and manual content analysis at the lexical and sentence levels, combining quantitative measurements with qualitative insights. Our findings have important ethical, legal, and political implications, challenging the acceptability of such harmful representations and emphasizing the need for effective mitigation strategies. Warning: This paper contains examples of language that some people may find offensive or upsetting.

Artificial intelligence, language models


Opinion: AI only becomes too smart if we play dumb ourselves

Trouw, 2025

Abstract

Anyone who thinks that ChatGPT can replace a therapist underestimates our complex social reality, argues philosopher Marijn Sax.

Artificial intelligence
