Publications
Top Keywords
- Art. 10 EVRM (25)
- Art. 17 CDSM Directive (13)
- Artificial intelligence (76)
- Big data (12)
- Constitutional and administrative law (11)
- Consumer law (11)
- Content moderation (22)
- Copyright (201)
- Cybersecurity (10)
- Data protection (29)
- Data protection law (11)
- Digital Services Act (DSA) (31)
- Digital Single Market (13)
- EU (19)
- EU law (25)
- Europe (12)
- European Union (10)
- Fake news (14)
- Freedom of expression (49)
- Fundamental rights (18)
- GDPR (22)
- Human rights (31)
- Intellectual property (30)
- Internet (24)
- Journalism (15)
- Kluwer Information Law Series (43)
- Licensing (14)
- Media law (28)
- Online platforms (20)
- Patent law (20)
- Personal data (35)
- Platforms (24)
- Privacy (327)
- Regulation (12)
- Social media (11)
- Software (10)
- Surveillance (11)
- Text and Data Mining (TDM) (21)
- Trademark law (15)
- Transparency (19)
Editorial: Interdisciplinary Perspectives on the (Un)fairness of Artificial Intelligence
Artificial intelligence
Dangerous Criminals and Beautiful Prostitutes? Investigating Harmful Representations in Dutch Language Models
Abstract
While language-based AI is becoming increasingly popular, ensuring that these systems are socially responsible is essential. Despite their growing impact, large language models (LLMs), the engines of many language-driven applications, remain largely a black box. Concerns about LLMs reinforcing harmful representations are shared by academia, industry, and the public. In professional contexts, researchers rely on LLMs for computational tasks such as text classification and contextual prediction, during which the risk of perpetuating biases cannot be overlooked. In a broader society where LLM-powered tools are widely accessible, interacting with biased models can shape public perceptions and behaviors, potentially reinforcing problematic social issues over time. This study investigates harmful representations in LLMs, focusing on ethnicity and gender in the Dutch context. Through template-based sentence construction and model probing, we identified potentially harmful representations using both automated and manual content analysis at the lexical and sentence levels, combining quantitative measurements with qualitative insights. Our findings have important ethical, legal, and political implications, challenging the acceptability of such harmful representations and emphasizing the need for effective mitigation strategies. Warning: This paper contains examples of language that some people may find offensive or upsetting.
Artificial intelligence, language models
CommonsDB feasibility study, part 1
Abstract
This is the first of two parts of a feasibility study for a public registry of public domain and openly licensed works. This registry – called CommonsDB – is currently being developed by a consortium consisting of Open Future, Liccium, the Institute for Information Law, Europeana Foundation, and Wikimedia Sweden as part of a European Commission-funded pilot project running from 1 February 2025 to 31 July 2026.
Copyright, the AI Act and Extraterritoriality
Abstract
The Lisbon Council launched Copyright, the AI Act and Extraterritoriality, a timely new policy brief authored by João Pedro Quintais, associate professor, Institute for Information Law, University of Amsterdam. As the European Commission gears up for a 2026 review of the Directive on Copyright in the Digital Single Market and the Code of Practice for general-purpose artificial intelligence (AI), the publication offers a legally grounded overview of copyright issues across the AI lifecycle – from training data to outputs – and an analysis of how the European AI Act interacts with copyright law.
AI Act, Copyright, extraterritoriality
Annotatie bij Hoge Raad 8 november 2024 (Anne Frank Fonds / Anne Frank Stichting)
Abstract
Publication of a scholarly edition of Anne Frank's diary on a website with geoblocking measures targeting the Netherlands. The Hoge Raad refers preliminary questions to the Court of Justice of the EU: does the possibility of circumventing geoblocking by using a VPN or similar service entail a communication to the public in the Netherlands?
Copyright, Geoblocking
Annotatie bij Hof van Justitie van de EU 24 oktober 2024 (Kwantum / Vitra)
Abstract
Copyright protection of works of applied art that are not protected by copyright in their country of origin. The material reciprocity test of Art. 2(7) of the Berne Convention may not be applied by the Dutch courts, because EU law, and in particular the Copyright Directive 2001/29/EC, does not provide for a limitation of the protection of works of applied art originating in countries outside the EU.
Copyright
The Regulation of Disinformation Under the Digital Services Act
Abstract
This article critically examines the regulation of disinformation under the EU's Digital Services Act (DSA). It begins by analysing how the DSA applies to disinformation, discussing, on the one hand, how the DSA facilitates the removal of illegal disinformation and, on the other, how it can protect users' freedom of expression against the removal of certain content classified as disinformation. The article then moves to the DSA's special risk-based rules, which apply to Very Large Online Platforms in relation to the mitigation of systemic risks relating to disinformation, and which are to be enforced by the European Commission. We analyse recent regulatory action by the Commission in tackling disinformation within its DSA competencies, and assess these actions from a fundamental rights perspective, focusing on freedom of expression as guaranteed under the EU Charter of Fundamental Rights and the European Convention on Human Rights.
Digital Services Act (DSA), disinformation, Freedom of expression, Online platforms
The Governance of the European Digital Identity Framework Through the Lens of Institutional Mimesis
Abstract
The European Commission's decision to expand its 2014 Regulation on electronic identification and trust services toward wallet-based digital identities marked a significant shift in the governance of users' digital identities. The intersection between private digital services, public prerogatives, and individual self-determination raises questions of data governance, notably power conflicts over control and usage. This study investigates the governance of the European Digital Identity Framework using institutional isomorphism to understand how EU policy-making evolves and gains legitimacy by mimicking successful regulatory models like the GDPR. Our analysis shows that the narrowly defined scope of power for supervisory bodies allows greater discretion for Member States, which could make the system vulnerable to abuse. Additionally, the lack of organizational independence among these bodies further complicates governance arrangements.
More Than Justifications: An Analysis of Information Needs in Explanations and Motivations to Disable Personalization
Abstract
There is consensus that algorithmic news recommenders should be explainable to inform news readers of potential risks. However, debates continue over which information users need and which stakeholders should have access to this information. As the debate continues, researchers also call for more user control over algorithmic news recommender systems, for example, by turning off personalized recommendations. Despite this call, it is unclear to what extent news readers would use such a feature. To add nuance to the discussion, we analyzed 586 responses to two open-ended questions: i) which information needs contribute to trustworthiness perceptions of news recommendations, and ii) whether people want the ability to turn off personalization. Our results indicate that most participants found knowing the sources of news items important for trusting a recommendation system. Additionally, more than half of the participants were inclined to disable personalization. The most common reasons to turn off personalization included concerns about bias or filter bubbles and a preference for consuming generalized news. These findings suggest that news readers have different information needs for explanations when interacting with an algorithmic news recommender, and that many news readers prefer to disable personalized news recommendations.
control, Digital Services Act (DSA), news recommenders, Personalisation, trust