Copyright Liability and Generative AI: What’s the Way Forward?

Szkalej, K.

Abstract

This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. 
While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases like C-160/15 GS Media, C-527/15 Filmspeler, or C-610/15 Ziggo require a consideration of the question whether AI providers indeed could also be held primarily liable for what users do. In this respect, the analysis considers both the right of communication to the public as well as the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than premature legislative reform. It will provide an opportunity for courts to systematize existing rules and liability theories and provide essential guidance for balancing copyright protection with innovation.

Artificial intelligence, Copyright, liability

Bibtex

@article{nokey,
  title = {Copyright Liability and Generative AI: What’s the Way Forward?},
  author = {Szkalej, K.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5117603},
  year = {2025},
  date = {2025-01-10},
  abstract = {This paper examines the intricate relationship between copyright liability and generative AI, focusing on legal challenges at the output stage of AI content generation. As AI technology advances, questions regarding copyright infringement and attribution of liability have become increasingly pressing and complex, requiring a revision of existing rules and theories. The paper navigates the European copyright framework and offers insights from Swedish copyright law on unharmonized aspects of liability, reviewing key case law from the Court of Justice of the European Union and Swedish courts. Considering the liability of AI users first, the paper emphasizes that while copyright exceptions are relevant in the discussion, national liability rules nuance a liability risk assessment above and beyond the potential applicability of a copyright exception. The analysis centers in particular on the reversed burden of proof introduced by the Swedish Supreme Court in NJA 1994 s 74 (Smultronmålet / Wild strawberries case) and the parameters of permissible transformative or derivative use (adaptations of all sorts), especially the level of similarity allowed between a pre-existing and transformative work, examining in particular NJA 2017 s 75 (Svenska syndabockar / Swedish scapegoats). Moreover, the paper engages in a discussion over the harmonization of transformative use and the exclusive right of adaptation through the right of reproduction in Article 2 InfoSoc Directive. Secondly, the paper examines copyright liability of AI system providers when their technology is used to generate infringing content. While secondary liability remains unharmonized in the EU, thus requiring consideration of national conceptions of such liability and available defences, expansive interpretations of primary liability by the Court of Justice in cases like C-160/15 GS Media, C-527/15 Filmspeler, or C-610/15 Ziggo require a consideration of the question whether AI providers indeed could also be held primarily liable for what users do. In this respect, the analysis considers both the right of communication to the public as well as the right of reproduction. The paper concludes with a forward-looking perspective, arguing in light of available litigation tactics that clarity must emerge through litigation rather than premature legislative reform. It will provide an opportunity for courts to systematize existing rules and liability theories and provide essential guidance for balancing copyright protection with innovation.},
  keywords = {Artificial intelligence, Copyright, liability},
}

How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization

Senftleben, M., Quintais, J. & Meiring, A.
Berkeley Technology Law Journal, vol. 38, iss. 3, pp: 933-1010, 2024

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. 
Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content

Bibtex

@article{nokey,
  title = {How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization},
  author = {Senftleben, M. and Quintais, J. and Meiring, A.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150},
  year = {2024},
  date = {2024-01-23},
  journal = {Berkeley Technology Law Journal},
  volume = {38},
  issue = {3},
  pages = {933-1010},
  abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.},
  keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content},
}

Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act

Senftleben, M., Quintais, J. & Meiring, A.

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. 
Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content

Bibtex

@online{nokey,
  title = {Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act},
  author = {Senftleben, M. and Quintais, J. and Meiring, A.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150},
  abstract = {With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.},
  keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content},
}

The Chilling Effect of Liability for Online Reader Comments

Fahy, R.
European Human Rights Law Review, vol. 2017, num: 4, pp: 387-393, 2017

Abstract

This article assesses how the European Court of Human Rights has responded to the argument that holding online news media liable for reader comments has a chilling effect on freedom of expression. The article demonstrates how the Court first responded by dismissing the argument, and focused on the apparent lack of evidence for any such chilling effect. The article then argues that the Court has moved away from its initial rejection, and now accepts that a potential chilling effect, even without evidence, is integral to deciding whether online news media should be liable for reader comments. Finally, the article argues that this latter view is consistent with the Court’s precedent in other areas of freedom of expression law where a similar chilling effect may also arise.

chilling effect, defamation, electronic publishing, Freedom of expression, frontpage, Human rights, liability, online reader comments

Bibtex

@article{Fahy2017b,
  title = {The Chilling Effect of Liability for Online Reader Comments},
  author = {Fahy, R.},
  url = {https://www.ivir.nl/publicaties/download/EHRLR_2017_4.pdf},
  year = {2017},
  date = {2017-08-24},
  journal = {European Human Rights Law Review},
  volume = {2017},
  number = {4},
  pages = {387-393},
  abstract = {This article assesses how the European Court of Human Rights has responded to the argument that holding online news media liable for reader comments has a chilling effect on freedom of expression. The article demonstrates how the Court first responded by dismissing the argument, and focused on the apparent lack of evidence for any such chilling effect. The article then argues that the Court has moved away from its initial rejection, and now accepts that a potential chilling effect, even without evidence, is integral to deciding whether online news media should be liable for reader comments. Finally, the article argues that this latter view is consistent with the Court’s precedent in other areas of freedom of expression law where a similar chilling effect may also arise.},
  keywords = {chilling effect, defamation, electronic publishing, Freedom of expression, frontpage, Human rights, liability, online reader comments},
}

Welcome to the Jungle: the Liability of Internet Intermediaries for Privacy Violations in Europe

van der Sloot, B.
JIPITEC, num: 3, pp: 211-228, 2016

Abstract

In Europe, roughly three regimes apply to the liability of Internet intermediaries for privacy violations conducted by users through their network. These are: the e-Commerce Directive, which, under certain conditions, excludes them from liability; the Data Protection Directive, which imposes a number of duties and responsibilities on providers processing personal data; and the freedom of expression, contained inter alia in the ECHR, which, under certain conditions, grants Internet providers several privileges and freedoms. Each doctrine has its own field of application, but they also have partial overlap. In practice, this creates legal inequality and uncertainty, especially with regard to providers that host online platforms and process User Generated Content.

Data protection, ECHR, Freedom of expression, Grondrechten, intermediaries, liability, Privacy

Bibtex

@article{nokey,
  title = {Welcome to the Jungle: the Liability of Internet Intermediaries for Privacy Violations in Europe},
  author = {van der Sloot, B.},
  url = {http://www.ivir.nl/publicaties/download/1720.pdf},
  year = {2016},
  date = {2016-01-19},
  journal = {JIPITEC},
  number = {3},
  pages = {211-228},
  abstract = {In Europe, roughly three regimes apply to the liability of Internet intermediaries for privacy violations conducted by users through their network. These are: the e-Commerce Directive, which, under certain conditions, excludes them from liability; the Data Protection Directive, which imposes a number of duties and responsibilities on providers processing personal data; and the freedom of expression, contained inter alia in the ECHR, which, under certain conditions, grants Internet providers several privileges and freedoms. Each doctrine has its own field of application, but they also have partial overlap. In practice, this creates legal inequality and uncertainty, especially with regard to providers that host online platforms and process User Generated Content.},
  keywords = {Data protection, ECHR, Freedom of expression, Grondrechten, intermediaries, liability, Privacy},
}