
Upcoming events in 2025:
6 February: IViR Lecture Series, UK copyright law’s approach to issues generated by AI
14 February: Save the date! Mediaforum Symposium (student edition)
10-11 April: AlgoSoc International Conference 2025


The University of Amsterdam is providing funding and support for new or early-stage creative media projects that unearth, explore, and critique the digital economy in India from the perspectives of the labour that supports and sustains it.
Following the successful launch of the Information Law Series Archive in September 2024, ten more volumes have now been made freely available online on the IViR website.
Have you recently completed your Master’s in Law, or are you about to? Do you have a demonstrable affinity with information law topics such as intellectual property, platform regulation, data protection, data governance, freedom of expression, or AI? Do you enjoy doing research on current questions in these domains? Then we have the perfect starter position for you!
@report{nokey,
title = {Copyright and Generative AI: Opinion of the European Copyright Society},
author = {Dusollier, S. and Kretschmer, M. and Margoni, T. and Mezei, P. and Quintais, J. and Rognstad, O.A.},
url = {https://europeancopyrightsociety.org/wp-content/uploads/2025/02/ecs_opinion_genai_january2025.pdf},
year = {2025},
date = {2025-02-07},
keywords = {Copyright},
}
@workingpaper{nokey,
title = {Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus},
author = {Weigl, L. and Bodó, B.},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5116478},
year = {2025},
date = {2025-01-30},
abstract = {In recent years, social media services have emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they play a central role in shaping what their users may know and believe, and what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of tools such as AI has brought further complexity to how these societal conversations are conducted online.
On the one hand, platforms have come to rely heavily on automated tools and algorithmic agents to identify various forms of speech, some of it flagged for further human review, the rest filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems produces a flood of new speech on social media platforms.
Content moderation and filtering, as one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible and understandable activity protecting users from the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack, are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what the discourse reveals to us, and we use this knowledge as power to produce the world around us and render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which have reduced the cost of information to zero, challenges the old institutions, rules, procedures, discourses, and subsequent knowledge and power structures.
In this paper, we first explore the practical realities of content moderation through an expert interview study with Trust and Safety professionals and a supporting document analysis based on data published through the DSA Transparency Database. We then reconstruct these empirical insights as an analytical model – a discourse apparatus stack – in the Foucauldian framework. This helps identify the real systemic challenges that content moderation faces but fails to address.},
keywords = {Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust},
}
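The AI-human moderation stack this abstract describes (automated detection, with some items filtered outright and others routed to human reviewers) boils down, mechanically, to score-and-threshold triage. The minimal sketch below illustrates only that pattern; the thresholds, the score_fn classifier, and the Decision type are hypothetical illustrations, not the authors' model.

# Minimal sketch of AI-human moderation triage: a classifier scores each post,
# high-confidence violations are filtered automatically, uncertain cases are
# queued for human review, and the rest are allowed. All thresholds and the
# scoring function are illustrative assumptions, not any platform's real values.
from dataclasses import dataclass

AUTO_REMOVE = 0.95   # assumed confidence above which content is filtered outright
HUMAN_REVIEW = 0.60  # assumed confidence above which a human moderator is looped in

@dataclass
class Decision:
    action: str   # "remove", "review", or "allow"
    score: float

def triage(post: str, score_fn) -> Decision:
    """Route a post based on a model's estimated violation probability.

    score_fn stands in for any classifier returning P(violation) in [0, 1].
    """
    score = score_fn(post)
    if score >= AUTO_REMOVE:
        return Decision("remove", score)   # automated filtering
    if score >= HUMAN_REVIEW:
        return Decision("review", score)   # flagged for human review
    return Decision("allow", score)

# Toy usage with a dummy scorer; a real system would call a trained model.
if __name__ == "__main__":
    dummy_scorer = lambda text: 0.97 if "spam" in text else 0.1
    print(triage("buy spam now", dummy_scorer))  # Decision(action='remove', score=0.97)

Real deployments tune such thresholds per policy area and jurisdiction; the sketch fixes only the control flow, which is the part of the stack the paper interrogates.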
@article{nokey,
title = {Copyright Liability and Generative AI: What’s the Way Forward?},
author = {Szkalej, K.},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5117603},
year = {2025},
date = {2025-01-10},
abstract = {This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases such as C-160/15 GS Media, C-527/15 Filmspeler, and C-610/15 Ziggo require consideration of whether AI providers could also be held primarily liable for what users do. In this respect, the analysis considers both the right of communication to the public and the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than through premature legislative reform. Litigation will give courts an opportunity to systematize existing rules and liability theories and to provide essential guidance for balancing copyright protection with innovation.},
keywords = {Artificial intelligence, Copyright, liability},
}
@article{nokey,
title = {Generative AI, Copyright and the AI Act},
author = {Quintais, J.},
url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4912701},
doi = {https://doi.org/10.1016/j.clsr.2025.106107},
year = {2025},
date = {2025-01-30},
journal = {Computer Law & Security Review},
volume = {56},
number = {106107},
abstract = {This paper provides a critical analysis of the Artificial Intelligence (AI) Act's implications for the European Union (EU) copyright acquis, aiming to clarify the complex relationship between AI regulation and copyright law while identifying areas of legal ambiguity and gaps that may influence future policymaking. The discussion begins with an overview of fundamental copyright concerns related to generative AI, focusing on issues that arise during the input, model, and output stages, and how these concerns intersect with the text and data mining (TDM) exceptions under the Copyright in the Digital Single Market Directive (CDSMD).
The paper then explores the AI Act's structure and key definitions relevant to copyright law. The core analysis addresses the AI Act's impact on copyright, including the role of TDM in AI model training, the copyright obligations imposed by the Act, requirements for respecting copyright law, particularly TDM opt-outs, and the extraterritorial implications of these provisions. It also examines transparency obligations, compliance mechanisms, and the enforcement framework. The paper further critiques the current regime's inadequacies, particularly concerning the fair remuneration of creators, and evaluates potential improvements such as collective licensing and bargaining. It also assesses legislative reform proposals, such as statutory licensing and AI output levies, and concludes with reflections on future directions for integrating AI governance with copyright protection.},
keywords = {AI Act, Content moderation, Copyright, DSA, Generative AI, text and data mining, Transparency},
}
@article{nokey,
title = {Shifting Battlegrounds: Corporate Political Activity in the EU General Data Protection Regulation},
author = {Ocelík, V. and Kolk, A. and Irion, K.},
doi = {https://doi.org/10.1177/00076503241306958},
year = {2025},
date = {2025-01-20},
journal = {Business & Society},
abstract = {Scholarship on corporate political activity (CPA) has remained largely silent on the substance of the information strategies that firms use to influence policymakers. To address this gap, our study is situated in the European Union (EU), where political scientists have noted that information strategies are central to lobbying success; the EU also provides a context of global norm-setting activity, especially with its General Data Protection Regulation (GDPR). Aided by recent advances in unsupervised machine learning, we performed a structural topic model analysis of the entire set of lobby documents submitted during two GDPR consultations, obtained via a so-called Freedom of Information request. Our analysis of the substance of information strategies reveals that the two policy phases constitute “shifting battlegrounds”: firms first seek to influence what is included in and excluded from the legislation, after which they engage the more specific interests of other stakeholders. Our main theoretical contribution is the identification of two distinct information strategies. Furthermore, we point to the need for more attention to institutional procedures and to the role of other stakeholders’ lobbying activities in CPA research.},
}
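The structural topic model analysis described in the abstract above follows a recognizable workflow: assemble the corpus of lobby submissions, build a document-term matrix, fit a topic model, and inspect the top terms per topic to interpret each "battleground". The sketch below illustrates that workflow in Python, using scikit-learn's LDA as a stand-in for a structural topic model (an STM additionally conditions topics on document covariates such as the consultation phase); the lobby_docs corpus path, the pruning thresholds, and the 20-topic count are illustrative assumptions, not the paper's settings.

# Minimal sketch of a topic-model pass over consultation documents.
# LDA here is a stand-in for the structural topic model used in the paper;
# file paths, pruning thresholds, and the topic count are assumptions.
from pathlib import Path

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus layout: one plain-text file per lobby submission.
docs = [p.read_text(encoding="utf-8") for p in sorted(Path("lobby_docs").glob("*.txt"))]

# Document-term matrix; prune very rare and very common terms.
vectorizer = CountVectorizer(stop_words="english", min_df=2, max_df=0.9)
dtm = vectorizer.fit_transform(docs)

# Fit a 20-topic model (the topic count is an assumption, not the paper's choice).
lda = LatentDirichletAllocation(n_components=20, random_state=0)
doc_topics = lda.fit_transform(dtm)  # rows: documents, columns: topic proportions

# Print the top terms per topic to see what each lobbying theme is about.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"Topic {k}: {', '.join(top)}")

Comparing the per-document topic proportions across the two consultation phases is what would surface the "shifting battlegrounds" pattern the authors report; the sketch stops at topic inspection.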