Machine readable or not? – notes on the hearing in LAION e.v. vs Kneschke

Keller, P.
Kluwer Copyright Blog, 2024

Artificial intelligence, Germany, text and data mining

Bibtex

@online{nokey,
  title = {Machine readable or not? – notes on the hearing in LAION e.v. vs Kneschke},
  author = {Keller, P.},
  url = {https://copyrightblog.kluweriplaw.com/2024/07/22/machine-readable-or-not-notes-on-the-hearing-in-laion-e-v-vs-kneschke/},
  year = {2024},
  date = {2024-07-22},
  journal = {Kluwer Copyright Blog},
  keywords = {Artificial intelligence, Germany, text and data mining},
}

How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization

Senftleben, M., Quintais, J. & Meiring, A.
Berkeley Technology Law Journal, vol. 38, iss. 3, pp. 933-1010, 2024

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content

Bibtex

@article{nokey,
  title = {How the EU Outsources the Task of Human Rights Protection to Platforms and Users: The Case of UGC Monetization},
  author = {Senftleben, M. and Quintais, J. and Meiring, A.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150},
  year = {2024},
  date = {2024-01-23},
  journal = {Berkeley Technology Law Journal},
  volume = {38},
  issue = {3},
  pages = {933-1010},
  keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, proportionality, user-generated content},
}

EU copyright law round up – fourth trimester of 2023

Trapova, A. & Quintais, J.
Kluwer Copyright Blog, 2024

Artificial intelligence, Copyright, EU

Bibtex

@online{nokey,
  title = {EU copyright law round up – fourth trimester of 2023},
  author = {Trapova, A. and Quintais, J.},
  url = {https://copyrightblog.kluweriplaw.com/2024/01/04/eu-copyright-law-round-up-fourth-trimester-of-2023/},
  year = {2024},
  date = {2024-01-04},
  journal = {Kluwer Copyright Blog},
  keywords = {Artificial intelligence, Copyright, EU},
}

Artificiële Intelligentie: waar is de werkelijkheid gebleven? [Artificial Intelligence: where has reality gone?]

Dommering, E.
Computerrecht, iss. 6, no. 258, pp. 476-483, 2023

Abstract

Much commotion has arisen over the (overly) rapid deployment of AI in society. This article examines what AI (in particular ChatGPT) is. It then shows where the introduction of AI already creates immediate friction in the areas of copyright, privacy, freedom of expression, public decision-making and competition law. It goes on to consider whether the EU's AI Regulation will provide the answer, concluding that it does so only to a very limited extent. Protection will therefore have to come from norms within the individual subfields. Finally, the article formulates four principles that can form an AI 'meta-framework' in each subfield against which an AI product should be assessed.

Artificial intelligence

Bibtex

@article{nokey,
  title = {Artificiële Intelligentie: waar is de werkelijkheid gebleven?},
  author = {Dommering, E.},
  url = {https://www.ivir.nl/publications/artificiele-intelligentie-waar-is-de-werkelijkheid-gebleven/ai-computerrecht-2023/},
  year = {2023},
  date = {2023-12-05},
  journal = {Computerrecht},
  issue = {6},
  number = {258},
  pages = {476-483},
  keywords = {Artificial intelligence},
}

An Interdisciplinary Toolbox for Researching the AI-Act

Metikoš, L.
Verfassungsblog, 2023

Artificial intelligence

Bibtex

@online{nokey,
  title = {An Interdisciplinary Toolbox for Researching the AI-Act},
  author = {Metikoš, L.},
  url = {https://verfassungsblog.de/an-interdisciplinary-toolbox-for-researching-the-ai-act/},
  doi = {10.17176/20230908-062850-0},
  year = {2023},
  date = {2023-09-08},
  journal = {Verfassungsblog},
  keywords = {Artificial intelligence},
}

Generative AI, Copyright and the AI Act

Quintais, J.
Kluwer Copyright Blog, 2023

Abstract

Generative AI is one of the hot topics in copyright law today. In the EU, a crucial legal issue is whether using in-copyright works to train generative AI models is copyright infringement or falls under existing text and data mining (TDM) exceptions in the Copyright in the Digital Single Market (CDSM) Directive. In particular, Article 4 CDSM Directive contains a so-called “commercial” TDM exception, which provides an “opt-out” mechanism for rights holders. This opt-out can be exercised for instance via technological tools but relies significantly on the public availability of training datasets. This has led to increasing calls for transparency requirements. In response to these calls, the European Parliament is considering adding to its compromise version of the AI Act two specific obligations with copyright implications on providers of generative AI models: on (1) transparency and disclosure; and (2) on safeguards for AI-generated content moderation. There is room for improvement on both.
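
To make the opt-out mechanism concrete, below is a minimal Python sketch (not taken from the post itself) of how a TDM crawler might check two machine-readable reservation signals before using a work: a robots.txt rule and the draft W3C TDM Reservation Protocol ("tdm-reservation") HTTP header. The crawler token "ExampleAIBot" and the target URL are hypothetical placeholders, and whether any particular signal satisfies the Article 4(3) CDSM machine-readability requirement remains an open legal question.

# Minimal sketch, not the post's method: probing two machine-readable
# TDM opt-out signals before crawling. "ExampleAIBot" is a hypothetical
# crawler token; example.com is a placeholder target.
import urllib.robotparser
import urllib.request

TARGET = "https://example.com/"

# Signal 1: a robots.txt rule disallowing our crawler token.
robots = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
robots.read()  # a missing robots.txt is treated as "allow all"
allowed = robots.can_fetch("ExampleAIBot", TARGET)

# Signal 2: the TDM Reservation Protocol header; "tdm-reservation: 1"
# signals that text-and-data-mining rights are reserved for the resource.
request = urllib.request.Request(TARGET, method="HEAD")
with urllib.request.urlopen(request) as response:
    reserved = response.headers.get("tdm-reservation") == "1"

if (not allowed) or reserved:
    print("Machine-readable opt-out detected: exclude this work from TDM.")
else:
    print("No machine-readable opt-out detected.")

Because these signals live on the source site rather than in a scraped dataset, verifying compliance after the fact requires knowing what the training data contained, which is the transparency problem the abstract highlights.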

Artificial intelligence, Copyright

Bibtex

@online{nokey,
  title = {Generative AI, Copyright and the AI Act},
  author = {Quintais, J.},
  url = {https://copyrightblog.kluweriplaw.com/2023/05/09/generative-ai-copyright-and-the-ai-act/},
  year = {2023},
  date = {2023-05-09},
  journal = {Kluwer Copyright Blog},
  keywords = {Artificial intelligence, Copyright},
}

A Primer and FAQ on Copyright Law and Generative AI for News Media

Quintais, J. & Diakopoulos, N.
2023

Artificial intelligence, Copyright, Media law, news

Bibtex

@online{nokey,
  title = {A Primer and FAQ on Copyright Law and Generative AI for News Media},
  author = {Quintais, J. and Diakopoulos, N.},
  url = {https://generative-ai-newsroom.com/a-primer-and-faq-on-copyright-law-and-generative-ai-for-news-media-f1349f514883},
  year = {2023},
  date = {2023-04-26},
  keywords = {Artificial intelligence, Copyright, Media law, news},
}

Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act

Senftleben, M., Quintais, J. & Meiring, A.

Abstract

With the shift from the traditional safe harbor for hosting to statutory content filtering and licensing obligations, EU copyright law has substantially curtailed the freedom of users to upload and share their content creations. Seeking to avoid overbroad inroads into freedom of expression, EU law obliges online platforms and the creative industry to take into account human rights when coordinating their content filtering actions. Platforms must also establish complaint and redress procedures for users. The European Commission will initiate stakeholder dialogues to identify best practices. These “safety valves” in the legislative package, however, are mere fig leaves. Instead of safeguarding human rights, the EU legislator outsources human rights obligations to the platform industry. At the same time, the burden of policing content moderation systems is imposed on users who are unlikely to bring complaints in each individual case. The new legislative design in the EU will thus “conceal” human rights violations instead of bringing them to light. Nonetheless, the DSA rests on the same – highly problematic – approach. Against this background, the paper discusses the weakening – and potential loss – of fundamental freedoms as a result of the departure from the traditional notice-and-takedown approach. Adding a new element to the ongoing debate on content licensing and filtering, the analysis will devote particular attention to the fact that EU law, for the most part, has left untouched the private power of platforms to determine the “house rules” governing the most popular copyright-owner reaction to detected matches between protected works and content uploads: the (algorithmic) monetization of that content. Addressing the “legal vacuum” in the field of content monetization, the analysis explores outsourcing and concealment risks in this unregulated space. Focusing on large-scale platforms for user-generated content, such as YouTube, Instagram and TikTok, two normative problems come to the fore: (1) the fact that rightholders, when opting for monetization, de facto monetize not only their own rights but also the creative input of users; (2) the fact that user creativity remains unremunerated as long as the monetization option is only available to rightholders. As a result of this configuration, the monetization mechanism disregards users’ right to (intellectual) property and discriminates against user creativity. Against this background, we discuss whether the DSA provisions that seek to ensure transparency of content moderation actions and terms and conditions offer useful sources of information that could empower users. Moreover, we raise the question whether the detailed regulation of platform actions in the DSA may resolve the described human rights dilemmas to some extent.

Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content

Bibtex

@online{nokey,
  title = {Outsourcing Human Rights Obligations and Concealing Human Rights Deficits: The Example of Monetizing User-Generated Content Under the CDSM Directive and the Digital Services Act},
  author = {Senftleben, M. and Quintais, J. and Meiring, A.},
  url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4421150},
  keywords = {Artificial intelligence, Content moderation, Copyright, derivative works, discrimination, Freedom of expression, Human rights, liability, user-generated content},
}

ChatGPT and the AI Act

Helberger, N. & Diakopoulos, N.
Internet Policy Review, vol. 12, iss. 1, 2023

Abstract

It is not easy being a tech regulator these days. The European institutions are working hard towards finalising the AI Act in autumn, and then generative AI systems like ChatGPT come along! In this essay, we comment on the European AI Act, arguing that its current risk-based approach is too limited for facing ChatGPT & co.

Artificial intelligence, ChatGPT

Bibtex

@article{nokey,
  title = {ChatGPT and the AI Act},
  author = {Helberger, N. and Diakopoulos, N.},
  url = {https://policyreview.info/essay/chatgpt-and-ai-act},
  doi = {10.14763/2023.1.1682},
  year = {2023},
  date = {2023-02-16},
  journal = {Internet Policy Review},
  volume = {12},
  issue = {1},
  keywords = {Artificial intelligence, ChatGPT},
}

Towards a Normative Perspective on Journalistic AI: Embracing the Messy Reality of Normative Ideals

Helberger, N., Drunen, M. van, Möller, J., Vrijenhoek, S. & Eskens, S.
Digital Journalism, vol. 10, iss. 10, pp. 1605-1626, 2022

Abstract

Few would disagree that AI systems and applications need to be “responsible,” but what is “responsible” and how to answer that question? Answering that question requires a normative perspective on the role of journalistic AI and the values it shall serve. Such a perspective needs to be grounded in a broader normative framework and a thorough understanding of the dynamics and complexities of journalistic AI at the level of people, newsrooms and media markets. This special issue aims to develop such a normative perspective on the use of AI-driven tools in journalism and the role of digital journalism studies in advancing that perspective. The contributions in this special issue combine conceptual, organisational and empirical angles to study the challenges involved in actively using AI to promote editorial values, the powers at play, the role of economic and regulatory conditions, and ways of bridging academic ideals and the messy reality of the real world. This editorial brings the different contributions into conversation, situates them in the broader digital journalism studies scholarship and identifies seven key takeaways.

Artificial intelligence, governance, Journalism, Media law, normative perspective, professional values, Regulation

Bibtex

@article{nokey,
  title = {Towards a Normative Perspective on Journalistic AI: Embracing the Messy Reality of Normative Ideals},
  author = {Helberger, N. and Drunen, M. van and Möller, J. and Vrijenhoek, S. and Eskens, S.},
  url = {https://www.ivir.nl/publications/towards-a-normative-perspective-on-journalisticai-embracing-the-messy-reality-of-normativeideals/digital_journalism_2022_10/},
  doi = {10.1080/21670811.2022.2152195},
  year = {2022},
  date = {2022-12-22},
  journal = {Digital Journalism},
  volume = {10},
  issue = {10},
  pages = {1605-1626},
  keywords = {Artificial intelligence, governance, Journalism, Media law, normative perspective, professional values, Regulation},
}