A Procedural Sedative: The GDPR’s Right to an Explanation

Data, Cybersecurity and Privacy (DCSP), iss. 18 & 19, pp. 24-26, 2025

Abstract

What remedies do you have when AI errs, when it discriminates, or harms you in some other way? How can we hold organizations accountable when they cause people harm during the development, distribution, or use of AI? Arguably, the first step is understanding how the system in question works. To this end, the right to an explanation, provided in EU law under the GDPR and the AI Act, is one of the most important remedies individuals have to contest AI.

AI Act, Artificial intelligence, GDPR

Win-Win: How to Remove Copyright Obstacles to AI Training While Ensuring Author Remuneration (and Why the AI Act Fails to do the Magic)

Chicago-Kent Law Review, vol. 100, iss. 1, pp. 7-55

Abstract

In the debate on AI training and copyright, the focus is often on the use of protected works during the AI training phase (input perspective). To reconcile training objectives with authors' fair remuneration interests, however, it is advisable to adopt an output perspective and focus on literary and artistic productions generated by fully trained AI systems that are offered in the marketplace. By implementing output-based remuneration systems, lawmakers can establish a legal framework that supports the development of unbiased, high-quality AI models while, at the same time, ensuring that authors receive fair remuneration for the use of literary and artistic works for AI training purposes, a remuneration that softens displacement effects in the market for literary and artistic creations, where human authors face a shrinking market share and loss of income. Instead of imposing payment obligations and administrative burdens on AI developers during the AI training phase, output-based remuneration systems offer the chance of giving AI trainers far-reaching freedom. Without exposing AI developers to heavy administrative and financial burdens, lawmakers can permit the use of the full spectrum of human literary and artistic resources. Once fully developed AI systems are brought to the market, however, providers of these systems are obliged to compensate authors for the unbridled freedom to use human creations during the AI training phase and for the displacement effects caused by AI systems capable of mimicking human literary and artistic works. As the analysis shows, the input-based remuneration approach in the EU, with rights reservations and complex transparency rules blocking access to AI training resources, is likely to reduce the attractiveness of the EU as a region for AI development. Moreover, the regulatory barriers posed by EU copyright law and the AI Act may marginalize the messages and values conveyed by European cultural expressions in AI training datasets and AI output. Considering the legal and practical difficulties resulting from the EU approach, lawmakers in other regions should refrain from following the EU model. As an alternative, they should explore output-based remuneration mechanisms. In contrast to the burdensome EU system, which requires the payment of remuneration for access to human AI training resources, an output-based approach does not weaken the position of the domestic high-tech sector: AI developers are free to use human creations as training material. Once fully developed AI systems are offered in the marketplace, all providers of AI systems capable of producing literary and artistic output are subject to the same payment obligation and remuneration scheme, regardless of whether they are local or foreign companies. The advantages of this alternative approach are evident. By offering broad freedom to use human creations for AI training, an output-based approach is conducive to AI development. It also eliminates the risk of marginalizing the messages and values conveyed by a country’s literary and artistic expressions.

Artificial intelligence, Copyright, remuneration

Copyright and the Expression Engine: Idea and Expression in AI-Assisted Creations

Chicago-Kent Law Review, vol. 100, iss. 1, pp. 251-264, 2025

Artificial intelligence, Copyright

Public consultation on the legislative proposal for a Cloud and AI Development Act

2025

Abstract

I welcome the opportunity to respond to the European Commission's public consultation on the proposed Cloud and AI Development Act. I make this submission in my capacity as an academic with expertise in European Union (EU) data centre sustainability policy. In my submission I address the following issues: 1. the EU's sustainability commitments in relation to digitalisation and, in particular, data centres; 2. the requirements for data centre sector sustainability; and 3. the case for public value-oriented digital infrastructure.

Artificial intelligence

A nightmare to control: Legal and organizational challenges around the procurement of journalistic AI from external technology providers

Piasecki, S. & Helberger, N.
The Information Society, vol. 41, iss. 3, pp. 173-194, 2025

Abstract

Little research has explored the process of procuring AI systems in the media from the perspective of contractual terms and conditions. Its importance is underscored by the emerging regulatory framework coming from Brussels, which embraces private ordering through mechanisms such as negotiations, instructions, and standardization. This article addresses the following research questions: How are journalistic AI procurement processes perceived by professionals? What are the practical and legal obstacles experienced in negotiating procurement conditions? How can media organizations’ contractual negotiation power be improved? The study is grounded in 12 semi-structured interviews with members of media organizations (lawyers, technologists, and managers) and an analysis of 16 sets of terms and conditions of companies providing AI systems. Based on its findings, it proposes a contractual counter-power for (especially smaller and local) media actors interested in using journalistic AI.

Artificial intelligence, Journalism

Editorial: Interdisciplinary Perspectives on the (Un)fairness of Artificial Intelligence

Starke, C., Blanke, T., Helberger, N., Smets, S. & Vreese, C.H. de
Minds and Machines, vol. 35, no. 22, 2025

Artificial intelligence

Dangerous Criminals and Beautiful Prostitutes? Investigating Harmful Representations in Dutch Language Models

Lin, Z., Trogrlić, G., Vreese, C.H. de & Helberger, N.
FAccT '25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, pp. 1005-1014, 2025

Abstract

While language-based AI is becoming increasingly popular, ensuring that these systems are socially responsible is essential. Despite their growing impact, large language models (LLMs), the engines of many language-driven applications, remain largely a black box. Concerns about LLMs reinforcing harmful representations are shared by academia, industry, and the public. In professional contexts, researchers rely on LLMs for computational tasks such as text classification and contextual prediction, during which the risk of perpetuating biases cannot be overlooked. In a broader society where LLM-powered tools are widely accessible, interacting with biased models can shape public perceptions and behaviors, potentially reinforcing problematic social issues over time. This study investigates harmful representations in LLMs, focusing on ethnicity and gender in the Dutch context. Through template-based sentence construction and model probing, we identified potentially harmful representations using both automated and manual content analysis at the lexical and sentence levels, combining quantitative measurements with qualitative insights. Our findings have important ethical, legal, and political implications, challenging the acceptability of such harmful representations and emphasizing the need for effective mitigation strategies. Warning: This paper contains examples of language that some people may find offensive or upsetting.

Artificial intelligence, language models

Opinie: AI wordt alleen té slim, als we onszelf dom voordoen [Opinion: AI only becomes too smart if we play dumb ourselves]

Trouw, 2025

Abstract

Anyone who thinks ChatGPT can replace a therapist underestimates our complex social reality, argues philosopher Marijn Sax.

Artificial intelligence

AI governance in the spotlight: an empirical analysis of Dutch political parties’ strategies for the 2023 elections

Morosoli, S., Kieslich, K., Resendez, V. & Drunen, M. van
Journal of Information Technology & Politics, 2025

Abstract

AI-based technologies are having an increasing impact on society, which raises the question of how this technology will be addressed politically. Political actors have a dual role to play here: they can provide investment to enhance the development and subsequent adoption of these systems, while also bearing the responsibility of safeguarding citizens from harm. The degree of politicization of the topic, i.e. whether it is part of the public and political debate, has an immense influence on the political approach taken to tackle the issue: the more a topic is politicized, the more urgency political parties experience to develop concrete governance approaches. Yet existing research has not analyzed party programs in terms of discourse around artificial intelligence and policy recommendations. This study focuses on the Netherlands and explores how Dutch political parties discuss AI in their programs for the 2023 election. We conducted a manual content analysis of all party manifestos for the 2023 elections. Our analysis shows that most parties do not place much emphasis on AI, and where they do, most policy proposals react to issues of the past rather than taking a prospective governance approach.

Artificial intelligence, governance, Politics

Do AI models dream of dolphins in Lake Balaton?

Kluwer Copyright Blog, 2025

Artificial intelligence, Copyright
