Procedural Justice and Judicial AI: Substantiating Explainability Rights with the Values of Contestation

Metikoš, L. & Domselaar, I. van
2025

Abstract

The advent of opaque assistive AI in courtrooms has raised concerns about the contestability of these systems and their impact on procedural justice. The right to an explanation under the GDPR and the AI Act could address the inscrutability of judicial AI for litigants. To substantiate this right in the domain of justice, we examine utilitarian, rights-based (including dignitarian and Dworkinian approaches), and relational theories of procedural justice. These theories reveal diverse perspectives on contestation that can help shape explainability rights in the context of judicial AI. Respectively, they highlight different values of litigant contestation: instrumental value in error correction, and intrinsic value in respecting litigants' dignity, whether as rational autonomous agents or as socio-relational beings. These insights help us answer three central and practical questions on how the right to an explanation should be operationalized to enable litigant contestation: should explanations be general or specific; to what extent must explanations be faithful to the system's actual behavior, or may they merely provide a plausible approximation; and should more interpretable systems be used, even at the cost of accuracy? These questions are not strictly legal or technical in nature but also rest on normative considerations. The practical operationalization of explainability will therefore differ across different valuations of litigant contestation of judicial AI.

Artificial intelligence, digital justice, Transparency

Copyright Liability and Generative AI: What’s the Way Forward?

Nordic Intellectual Property Law Review, iss. 1, pp: 92-115, 2025

Abstract

The intersection of copyright liability and generative AI has become one of the most complex and debated issues in the field of copyright law. AI systems have advanced significantly, allowing the creation of fantastic new content, but they are also capable of producing outputs that evoke, adapt, or recreate content protected by copyright law, sparking several infringement proceedings against AI companies, particularly in the US. With this rapid evolution comes the need to re-examine existing legal frameworks and theories. In this contribution, I focus on liability challenges at the output stage of AI content generation and share some insights from Sweden, before considering possible paths forward.

Artificial intelligence, Copyright, Generative AI, liability

Dun & Bradstreet: A Pyrrhic Victory for the Contestation of AI under the GDPR

The Law, Ethics & Policy of AI Blog, 2025

Abstract

The CJEU’s ruling in Dun & Bradstreet clarifies how the GDPR’s ‘right to an explanation’ should enable individuals to contest AI-based decision-making. It holds that explanations must be understandable while respecting trade secrets and privacy concerns in a balanced manner. However, the Court excludes the disclosure of in-depth technical information and introduces a burdensome balancing procedure. These requirements both strengthen and weaken the ability of individuals to independently assess impactful AI systems, leading to a Pyrrhic victory for contestation.

Artificial intelligence, GDPR, Privacy

Copyright Data Improvement for AI Licensing – The Role of Content Moderation and Text and Data Mining Rules

In: A Research Agenda for EU Copyright Law, E. Bonadio & C. Sganga (eds.), Edward Elgar Publishing, 2025, pp: 105-128, ISBN: 9781803927312

Artificial intelligence, Content moderation, Copyright, Text and Data Mining (TDM)

Towards Planet Proof Computing: Law and Policy of Data Centre Sustainability in the European Union

Commins, J. & Irion, K.
Technology and Regulation, vol. 25, pp: 1-36, 2025

Abstract

Our society’s growing reliance on digital technologies such as AI incurs an ever-growing ecological footprint. The EU regulation of the data centre sector aims to achieve climate-neutral, energy-efficient and sustainable data centres by no later than 2030. This article unpacks the EU law and policy aimed at improving energy efficiency, recycling equipment and increasing reporting and transparency obligations. In 2025 the Commission will present a report based on information reported by data centre operators and, in light of the new evidence, review its policy. Further regulation should aim to translate reporting requirements into binding sustainability targets that contain rebound effects of the data centre industry while strengthening its public value orientation.

Artificial intelligence, digitalisation, EU law

The rise of technology courts, or: How technology companies re-invent adjudication for a digital world

Computer Law & Security Review, vol. 56, num: 106118, 2025

Abstract

The article “The Rise of Technology Courts” explores the evolving role of courts in the digital world, where technological advancements and artificial intelligence (AI) are transforming traditional adjudication processes. It argues that traditional courts are undergoing a significant transition due to digitization and the increasing influence of technology companies. The paper frames this transformation through the concept of the “sphere of the digital,” which explains how digital technology and AI redefine societal expectations of what courts should be and how they function. The article highlights that technology is not only changing the materiality of courts—moving from physical buildings to digital portals—but also affecting their symbolic function as public institutions. It discusses the emergence of AI-powered judicial services, online dispute resolution (ODR), and technology-driven alternative adjudication bodies like the Meta Oversight Board. These developments challenge the traditional notions of judicial authority, jurisdiction, and legal expertise. The paper concludes that while these technology-driven solutions offer increased efficiency and accessibility, they also raise fundamental questions about the legitimacy, transparency, and independence of adjudicatory bodies. As technology companies continue to shape digital justice, the article also argues that there are lessons to learn for the role and structure of traditional courts to ensure that human rights and public values are upheld.

Artificial intelligence, big tech, digital transformation, digitisation, justice, values

The paradox of lawful text and data mining? Some experiences from the research sector and where we (should) go from here

GRUR International, vol. 74, iss. : 4, pp: 307-319, 2025

Abstract

Scientific research can be tricky business. This paper critically explores the 'lawful access' requirement in European copyright law which applies to text and data mining (TDM) carried out for the purpose of scientific research. Whereas TDM is essential for data analysis, artificial intelligence (AI) and innovation, the paper argues that the 'lawful access' requirement in Article 3 CDSM Directive may actually restrict research by complicating the applicability of the TDM provision or even rendering it inoperable. Although the requirement is intended to ensure that researchers act in good faith before deploying TDM tools for purposes such as machine learning, it forces them to ask for permission to access data, for example by taking out a subscription to a service, and for that reason provides the opportunity for copyright holders to apply all sorts of commercial strategies to set the legal and technological parameters of access and potentially even circumvent the mandatory character of the provision. The paper concludes by drawing on insights from the recent European Commission study 'Improving access to and reuse of research results, publications and data for scientific purposes', which offer essential perspectives for the future of TDM, and by suggesting a number of paths forward that EU Member States can already take to support a more predictable and reliable legal regime for scientific TDM, and potentially code mining, to foster innovation.

Artificial intelligence, CDSM Directive, Copyright, Text and Data Mining (TDM)

Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus

Abstract

In recent years, social media services have emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they play a central role in shaping what their users may know and believe, and what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of tools such as generative AI has brought further complexity to how these societal conversations are conducted online. On the one hand, platforms have started to rely heavily on automated tools and algorithmic agents to identify various forms of speech, some flagged for further human review, others filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems produces a flood of new speech on social media platforms. Content moderation and filtering, as one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible and understandable activity that could protect users from the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack, are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what the discourse reveals to us, and we use this knowledge as power to produce the world around us and render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which have reduced the cost of information to zero, challenges the old institutions, rules, procedures, discourses, and subsequent knowledge and power structures.
In this paper, we first explore the practical realities of content moderation based on an expert interview study with Trust and Safety professionals, and a supporting document analysis, based on data published through the DSA Transparency Database. We reconstruct these empirical insights as an analytical model – a discourse apparatus stack – in the Foucauldian framework. This helps to identify the real systemic challenges content moderation faces, but fails to address.

Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust

Copyright Liability and Generative AI: What’s the Way Forward?

Abstract

This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. 
While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases like C-160/15 GS Media, C-527/15 Filmspeler, or C-610/15 Ziggo raise the question of whether AI providers could also be held primarily liable for what their users do. In this respect, the analysis considers both the right of communication to the public and the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than premature legislative reform. Litigation will give courts an opportunity to systematize existing rules and liability theories and provide essential guidance for balancing copyright protection with innovation.

Artificial intelligence, Copyright, liability

Generative AI and Creative Commons Licences – The Application of Share Alike Obligations to Trained Models, Curated Datasets and AI Output

JIPITEC, vol. 15, iss. 3, 2024

Abstract

This article maps the impact of Share Alike (SA) obligations and copyleft licensing on machine learning, AI training, and AI-generated content. It focuses on the SA component found in some of the Creative Commons (CC) licences, distilling its essential features and layering them onto machine learning and content generation workflows. Based on our analysis, there are three fundamental challenges related to the life cycle of these licences: tracing and establishing copyright-relevant uses during the development phase (training), the interplay of licensing conditions with copyright exceptions and the identification of copyright-protected traces in AI output. Significant problems can arise from several concepts in CC licensing agreements (‘adapted material’ and ‘technical modification’) that could serve as a basis for applying SA conditions to trained models, curated datasets and AI output that can be traced back to CC material used for training purposes. Seeking to transpose Share Alike and copyleft approaches to the world of generative AI, the CC community can only choose between two policy approaches. On the one hand, it can uphold the supremacy of copyright exceptions. In countries and regions that exempt machine-learning processes from the control of copyright holders, this approach leads to far-reaching freedom to use CC resources for AI training purposes. At the same time, it marginalises SA obligations. On the other hand, the CC community can use copyright strategically to extend SA obligations to AI training results and AI output. To achieve this goal, it is necessary to use rights reservation mechanisms, such as the opt-out system available in EU copyright law, and subject the use of CC material in AI training to SA conditions. Following this approach, a tailor-made licence solution can grant AI developers broad freedom to use CC works for training purposes. 
In exchange for the training permission, however, AI developers would have to accept the obligation to pass on – via a whole chain of contractual obligations – SA conditions to recipients of trained models and end users generating AI output.

Artificial intelligence, Copyright, creative commons, Licensing, machine learning
