Generative AI, Copyright and the AI Act

Abstract

This paper examines the copyright-relevant rules of the recently published Artificial Intelligence (AI) Act for the EU copyright acquis. The aim of the paper is to provide a critical overview of the relationship between the AI Act and EU copyright law, while highlighting potential gray areas and blind spots for legal interpretation and future policy-making. The paper proceeds as follows. After a short introduction, Section 2 outlines the basic copyright issues of generative AI and the relevant copyright acquis rules that interface with the AI Act. It mentions potential copyright issues with the input or training stage, the model, and outputs. The AI Act rules are mostly relevant for the training of AI models, and the Regulation primarily interfaces with the text and data mining (TDM) exceptions in Articles 3 and 4 of the Copyright in the Digital Single Market Directive (CDSMD). Section 3 then briefly explains the AI Act’s structure and core definitions as they pertain to copyright law. Section 4 is the heart of the paper. It covers in some detail the interface between the AI Act and EU copyright law, namely: the clarification that TDM is involved in training AI models (4.1); the outline of the key copyright obligations in the AI Act (4.2); the obligation to put in place policies to respect copyright law, especially regarding TDM opt-outs (4.3); the projected extraterritorial effect of such obligations (4.4); the transparency obligations (4.5); how the AI Act envisions compliance with such obligations (4.6); and potential enforcement and remedies (4.7). Section 5 offers some concluding remarks, focusing on the inadequacy of the current regime to address one of its main concerns: the fair remuneration of authors and performers.

AI Act, Content moderation, Copyright, DSA, Generative AI, text and data mining, Transparency

Bibtex

@workingpaper{nokey,
  title = {Generative AI, Copyright and the AI Act},
  author = {Quintais, J.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4912701},
  year = {2024},
  date = {2024-08-01},
  abstract = {This paper examines the copyright-relevant rules of the recently published Artificial Intelligence (AI) Act for the EU copyright acquis. The aim of the paper is to provide a critical overview of the relationship between the AI Act and EU copyright law, while highlighting potential gray areas and blind spots for legal interpretation and future policy-making. The paper proceeds as follows. After a short introduction, Section 2 outlines the basic copyright issues of generative AI and the relevant copyright acquis rules that interface with the AI Act. It mentions potential copyright issues with the input or training stage, the model, and outputs. The AI Act rules are mostly relevant for the training of AI models, and the Regulation primarily interfaces with the text and data mining (TDM) exceptions in Articles 3 and 4 of the Copyright in the Digital Single Market Directive (CDSMD). Section 3 then briefly explains the AI Act's structure and core definitions as they pertain to copyright law. Section 4 is the heart of the paper. It covers in some detail the interface between the AI Act and EU copyright law, namely: the clarification that TDM is involved in training AI models (4.1); the outline of the key copyright obligations in the AI Act (4.2); the obligation to put in place policies to respect copyright law, especially regarding TDM opt-outs (4.3); the projected extraterritorial effect of such obligations (4.4); the transparency obligations (4.5); how the AI Act envisions compliance with such obligations (4.6); and potential enforcement and remedies (4.7). Section 5 offers some concluding remarks, focusing on the inadequacy of the current regime to address one of its main concerns: the fair remuneration of authors and performers.},
  keywords = {AI Act, Content moderation, Copyright, DSA, Generative AI, text and data mining, Transparency},
}

Anticipating impacts: using large‑scale scenario‑writing to explore diverse implications of generative AI in the news environment

Kieslich, K., Diakopoulos, N. & Helberger, N.
AI and Ethics, 2024

Abstract

The tremendous rise of generative AI has reached every part of society—including the news environment. There are many concerns about the individual and societal impact of the increasing use of generative AI, including issues such as disinformation and misinformation, discrimination, and the promotion of social tensions. However, research on anticipating the impact of generative AI is still in its infancy and mostly limited to the views of technology developers and/or researchers. In this paper, we aim to broaden the perspective and capture the expectations of three stakeholder groups (news consumers; technology developers; content creators) about the potential negative impacts of generative AI, as well as mitigation strategies to address these. Methodologically, we apply scenario-writing and use participatory foresight in the context of a survey (n=119) to delve into cognitively diverse imaginations of the future. We qualitatively analyze the scenarios using thematic analysis to systematically map potential impacts of generative AI on the news environment, potential mitigation strategies, and the role of stakeholders in causing and mitigating these impacts. In addition, we measure respondents' opinions on a specific mitigation strategy, namely transparency obligations as suggested in Article 52 of the draft EU AI Act. We compare the results across different stakeholder groups and elaborate on different expected impacts across these groups. We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.

Generative AI

Bibtex

@article{nokey,
  title = {Anticipating impacts: using large‑scale scenario‑writing to explore diverse implications of generative AI in the news environment},
  author = {Kieslich, K. and Diakopoulos, N. and Helberger, N.},
  doi = {https://doi.org/10.1007/s43681-024-00497-4},
  year = {2024},
  date = {2024-05-27},
  journal = {AI and Ethics},
  abstract = {The tremendous rise of generative AI has reached every part of society—including the news environment. There are many concerns about the individual and societal impact of the increasing use of generative AI, including issues such as disinformation and misinformation, discrimination, and the promotion of social tensions. However, research on anticipating the impact of generative AI is still in its infancy and mostly limited to the views of technology developers and/or researchers. In this paper, we aim to broaden the perspective and capture the expectations of three stakeholder groups (news consumers; technology developers; content creators) about the potential negative impacts of generative AI, as well as mitigation strategies to address these. Methodologically, we apply scenario-writing and use participatory foresight in the context of a survey (n=119) to delve into cognitively diverse imaginations of the future. We qualitatively analyze the scenarios using thematic analysis to systematically map potential impacts of generative AI on the news environment, potential mitigation strategies, and the role of stakeholders in causing and mitigating these impacts. In addition, we measure respondents' opinions on a specific mitigation strategy, namely transparency obligations as suggested in Article 52 of the draft EU AI Act. We compare the results across different stakeholder groups and elaborate on different expected impacts across these groups. We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.},
  keywords = {Generative AI},
}