Opinion: AI only becomes 'too smart' if we play dumb ourselves

Trouw, 2025

Abstract

Anyone who thinks ChatGPT can replace a therapist underestimates our complex social reality, argues philosopher Marijn Sax.

AI


Do AI models dream of dolphins in Lake Balaton?

Kluwer Copyright Blog, 2025

AI, copyright


Procedural Justice and Judicial AI: Substantiating Explainability Rights with the Values of Contestation

Metikoš, L. & Domselaar, I. van
2025

Abstract

The advent of opaque assistive AI in courtrooms has raised concerns about the contestability of these systems, and their impact on procedural justice. The right to an explanation under the GDPR and the AI Act could address the inscrutability of judicial AI for litigants. To substantiate this right in the domain of justice, we examine utilitarian, rights-based (including dignitarian and Dworkinian approaches), and relational theories of procedural justice. These theories reveal diverse perspectives on contestation, which can help shape explainability rights in the context of judicial AI. These theories respectively highlight different values of litigant contestation: it has instrumental value in error correction, and intrinsic value in respecting litigants' dignity, either as rational autonomous agents or as socio-relational beings. These insights help us answer three central and practical questions on how the right to an explanation should be operationalized to enable litigant contestation: should explanations be general or specific, to what extent do explanations need to be faithful to the system's actual behavior or merely provide a plausible approximation, and should more interpretable systems be used, even at the cost of accuracy? These questions are not strictly legal or technical in nature, but also rely on normative considerations. The practical operationalization of explainability will therefore differ between different valuations of litigant contestation of judicial AI.

AI, digital justice, transparency


Dun & Bradstreet: A Pyrrhic Victory for the Contestation of AI under the GDPR

The Law, Ethics & Policy of AI Blog, 2025

Abstract

The CJEU’s ruling in Dun & Bradstreet clarifies how the GDPR’s ‘right to an explanation’ should enable individuals to contest AI-based decision-making. It states that explanations need to be understandable while also respecting trade secrets and privacy concerns in a balanced manner. However, the Court excludes the disclosure of in-depth technical information and also introduces a burdensome balancing procedure. These requirements both strengthen and weaken the ability of individuals to independently assess impactful AI systems, leading to a pyrrhic victory for contestation.

AI, GDPR, privacy


The rise of technology courts, or: How technology companies re-invent adjudication for a digital world

Computer Law & Security Review, vol. 56, art. 106118, 2025

Abstract

The article “The Rise of Technology Courts” explores the evolving role of courts in the digital world, where technological advancements and artificial intelligence (AI) are transforming traditional adjudication processes. It argues that traditional courts are undergoing a significant transition due to digitization and the increasing influence of technology companies. The paper frames this transformation through the concept of the “sphere of the digital,” which explains how digital technology and AI redefine societal expectations of what courts should be and how they function. The article highlights that technology is not only changing the materiality of courts—moving from physical buildings to digital portals—but also affecting their symbolic function as public institutions. It discusses the emergence of AI-powered judicial services, online dispute resolution (ODR), and technology-driven alternative adjudication bodies like the Meta Oversight Board. These developments challenge the traditional notions of judicial authority, jurisdiction, and legal expertise. The paper concludes that while these technology-driven solutions offer increased efficiency and accessibility, they also raise fundamental questions about the legitimacy, transparency, and independence of adjudicatory bodies. As technology companies continue to shape digital justice, the article also argues that there are lessons to learn for the role and structure of traditional courts to ensure that human rights and public values are upheld.

AI, big tech, digital transformation, digitisation, justice, values


Generative AI and Creative Commons Licences – The Application of Share Alike Obligations to Trained Models, Curated Datasets and AI Output

JIPITEC, vol. 15, iss. 3, 2024

Abstract

This article maps the impact of Share Alike (SA) obligations and copyleft licensing on machine learning, AI training, and AI-generated content. It focuses on the SA component found in some of the Creative Commons (CC) licences, distilling its essential features and layering them onto machine learning and content generation workflows. Based on our analysis, there are three fundamental challenges related to the life cycle of these licences: tracing and establishing copyright-relevant uses during the development phase (training), the interplay of licensing conditions with copyright exceptions and the identification of copyright-protected traces in AI output. Significant problems can arise from several concepts in CC licensing agreements (‘adapted material’ and ‘technical modification’) that could serve as a basis for applying SA conditions to trained models, curated datasets and AI output that can be traced back to CC material used for training purposes. Seeking to transpose Share Alike and copyleft approaches to the world of generative AI, the CC community can only choose between two policy approaches. On the one hand, it can uphold the supremacy of copyright exceptions. In countries and regions that exempt machine-learning processes from the control of copyright holders, this approach leads to far-reaching freedom to use CC resources for AI training purposes. At the same time, it marginalises SA obligations. On the other hand, the CC community can use copyright strategically to extend SA obligations to AI training results and AI output. To achieve this goal, it is necessary to use rights reservation mechanisms, such as the opt-out system available in EU copyright law, and subject the use of CC material in AI training to SA conditions. Following this approach, a tailor-made licence solution can grant AI developers broad freedom to use CC works for training purposes. In exchange for the training permission, however, AI developers would have to accept the obligation to pass on – via a whole chain of contractual obligations – SA conditions to recipients of trained models and end users generating AI output.

AI, copyright, Creative Commons, licensing, machine learning


Prompts between form and content: the first case law on generative AI and the 'work'

Auteursrecht, iss. 3, pp. 129-134, 2024

Abstract

Can the use of generative AI systems produce a copyright-protected work? Two years after the introduction of Dall-E and ChatGPT, a body of case law is beginning to take shape. The central question is whether directing such systems by means of prompts (instructions) is sufficient to qualify the output as a ‘work’. Drawing on the earliest case law from the United States, China and Europe, this article examines this difficult question in depth.

AI, copyright


Trademark Law, AI-driven Behavioral Advertising, and the Digital Services Act: Toward Source and Parameter Transparency for Consumers, Brand Owners and Competitors

Research Handbook on Intellectual Property and Artificial Intelligence, Edward Elgar Publishing, 2022, pp: 309-324, ISBN: 9781800881891

Abstract

In its Proposal for a Digital Services Act (“DSA”), the European Commission highlighted the need for new transparency obligations to arrive at accountable digital services, ensure a fair environment for economic operators and empower consumers. However, the proposed new rules seem to focus on transparency measures for consumers. According to the DSA Proposal, platforms, such as online marketplaces, must ensure that platform users receive information enabling them to understand when and on whose behalf an advertisement is displayed, and which parameters are used to direct advertising to them, including explanations of the logic underlying systems for targeted advertising. Statements addressing the interests of trademark owners and trademark policy are sought in vain. Against this background, the analysis sheds light on AI-driven behavioural advertising practices and the policy considerations underlying the proposed new transparency obligations. In the light of the debate on trademark protection in keyword advertising cases, it will show that not only consumers but also trademark owners have a legitimate interest in receiving information on the parameters that are used to target consumers. The discussion will lead to the insight that lessons from the keyword advertising debate can play an important role in the transparency discourse because they broaden the spectrum of policy rationales and guidelines for new transparency rules. In addition to the current focus on consumer empowerment, the enhancement of information on alternative offers in the marketplace and the strengthening of trust in AI-driven, personalized advertising enter the picture. On balance, there are good reasons to broaden the scope of the DSA initiative and ensure access to transparency information for consumers and trademark owners alike.

AI, trademark law


The commodification of trust

Blockchain & Society Policy Research Lab Research Nodes, num: 1, 2021

Abstract

Fundamental, wide-ranging, and highly consequential transformations are taking place in interpersonal and systemic trust relations due to the rapid adoption of complex, planetary-scale digital technological innovations. Trust is remediated by planetary-scale techno-social systems, which leads to the privatization of trust production in society and the ultimate commodification of trust itself. Modern societies rely on communal, public and private logics of trust production. Communal logics produce trust by the group for the group, and are based on familial, ethnic, religious or tribal relations, professional associations, epistemic or value communities, and groups with a shared location or a shared past. Public trust logics developed in the context of the modern state, and produce trust as a free public service: abstract, institutionalized frameworks and institutions, such as the press, public education, science, and various arms of the bureaucratic state, create familiarity, control, and insurance in social, political, and economic relations. Finally, private trust producers sell confidence as a product: lawyers, accountants, credit rating agencies, insurers, but also commercial brands offer trust for a fee. With the emergence of the internet and digitization, a new class of private trust producers has emerged. Online reputation management services, distributed ledgers, and AI-based predictive systems are widely adopted technological infrastructures designed to facilitate trust-necessitating social and economic interactions by controlling the past, the present and the future, respectively. These systems enjoy immense economic success, and they are adopted en masse by individuals and institutional actors alike.
The emergence of these private, technical means of trust production paves the way towards the wide-scale commodification of trust, where trust is produced as a commercial activity, conducted by private parties, for economic gain, often far removed from the loci where trust-necessitating social interactions take place. The remediation and consequent privatization and commodification of trust production have a number of potentially adverse social effects: they may decontextualize trust relationships, remove trust from local social, cultural and relational contexts, and change the calculus of interpersonal trust relations. Perhaps more importantly, as more and more social and economic relations become conditional upon having access to, and good standing in, private trust infrastructures, commodification turns trust into a question of continuous labor, or of devastating exclusion. Invoking Karl Polanyi’s work on fictitious commodities, I argue that the privatization and commodification of trust may have a catastrophic impact on the most fundamental layers of the social fabric.

AI, blockchains, commodification, information law, Karl Polanyi, reputation, trust, trust production


News Recommenders and Cooperative Explainability: Confronting the contextual complexity in AI explanations

AI, news recommenders, technology and law
