How to design data access for researchers: A legal and software development perspective

Drunen, M. van & Noroozian, A.
Computer Law & Security Review, vol. 52, 2024

Abstract

Public scrutiny of platforms has been limited by a lack of transparency. In response, EU law increasingly requires platforms to provide data to researchers. The Digital Services Act and the proposed Regulation on the Transparency and Targeting of Political Advertising in particular require platforms to provide access to data through ad libraries and in response to data access requests. However, these obligations leave platforms considerable discretion to determine how access to data is provided. As the history of platforms’ self-regulated data access projects shows, the technical choices involved in designing data access significantly affect how researchers can use the provided data to scrutinise platforms. Ignoring the way data access is designed therefore creates a danger that platforms’ ability to limit research into their services simply shifts from controlling what data is available to researchers, to how data access is provided. This article explores how the Digital Services Act and proposed Political Advertising Regulation should be used to control the operationalisation of data access obligations that enable researchers to scrutinise platforms. It argues the operationalisation of data access regimes should not only be seen as a legal problem, but also as a software design problem. To that end it explores how software development principles may inform the operationalisation of data access obligations. The article closes by exploring the legal mechanisms available in the Digital Services Act and proposed Political Advertising Regulation to exercise control over the design of data access regimes, and makes five recommendations for ways in which these mechanisms should be used to enable research into platforms.

data access, DSA, Platforms, Transparency

Bibtex

@article{nokey, title = {How to design data access for researchers: A legal and software development perspective}, author = {Drunen, M. van and Noroozian, A.}, doi = {https://doi.org/10.1016/j.clsr.2024.105946}, year = {2024}, date = {2024-04-01}, journal = {Computer Law & Security Review}, volume = {52}, abstract = {Public scrutiny of platforms has been limited by a lack of transparency. In response, EU law increasingly requires platforms to provide data to researchers. The Digital Services Act and the proposed Regulation on the Transparency and Targeting of Political Advertising in particular require platforms to provide access to data through ad libraries and in response to data access requests. However, these obligations leave platforms considerable discretion to determine how access to data is provided. As the history of platforms’ self-regulated data access projects shows, the technical choices involved in designing data access significantly affect how researchers can use the provided data to scrutinise platforms. Ignoring the way data access is designed therefore creates a danger that platforms’ ability to limit research into their services simply shifts from controlling what data is available to researchers, to how data access is provided. This article explores how the Digital Services Act and proposed Political Advertising Regulation should be used to control the operationalisation of data access obligations that enable researchers to scrutinise platforms. It argues the operationalisation of data access regimes should not only be seen as a legal problem, but also as a software design problem. To that end it explores how software development principles may inform the operationalisation of data access obligations. The article closes by exploring the legal mechanisms available in the Digital Services Act and proposed Political Advertising Regulation to exercise control over the design of data access regimes, and makes five recommendations for ways in which these mechanisms should be used to enable research into platforms.}, keywords = {data access, DSA, Platforms, Transparency}, }

Personal Data Stores and the GDPR’s lawful grounds for processing personal data

Janssen, H., Cobbe, J., Norval, C. & Singh, J.
2019

Abstract

Personal Data Stores (‘PDSs’) entail users having a (physical or virtual) device within which they themselves can, in theory, capture, aggregate, and control the access to and the transfer of personal data. Their aim is to empower users in relation to their personal data, strengthening their opportunities for data protection, privacy, and/or to facilitate trade and monetisation. As PDS technologies develop, it is important to consider their role in relation to issues of data protection. The General Data Protection Regulation requires that the processing of user data be predicated on one of its defined lawful bases, whereby the Regulation does not favour any one basis over another. We explore how PDS architectures relate to these lawful bases, and observe that they tend to favour the bases that require direct user involvement. This paper considers issues that the envisaged architectural choices surrounding the lawful grounds may entail.

Data protection, decentralisation, lawful grounds for processing, personal data stores, Privacy, Transparency

Bibtex

@inproceedings{nokey, title = {Personal Data Stores and the GDPR’s lawful grounds for processing personal data}, author = {Janssen, H. and Cobbe, J. and Norval, C. and Singh, J.}, doi = {https://doi.org/10.5281/zenodo.3234902}, year = {2019}, date = {2019-05-29}, abstract = {Personal Data Stores (‘PDSs’) entail users having a (physical or virtual) device within which they themselves can, in theory, capture, aggregate, and control the access to and the transfer of personal data. Their aim is to empower users in relation to their personal data, strengthening their opportunities for data protection, privacy, and/or to facilitate trade and monetisation. As PDS technologies develop, it is important to consider their role in relation to issues of data protection. The General Data Protection Regulation requires that the processing of user data be predicated on one of its defined lawful bases, whereby the Regulation does not favour any one basis over another. We explore how PDS architectures relate to these lawful bases, and observe that they tend to favour the bases that require direct user involvement. This paper considers issues that the envisaged architectural choices surrounding the lawful grounds may entail.}, keywords = {Data protection, decentralisation, lawful grounds for processing, personal data stores, Privacy, Transparency}, }

Shielding citizens? Understanding the impact of political advertisement transparency information

Dobber, T., Kruikemeier, S., Helberger, N. & Goodman, E.
New Media & Society, 2023

Abstract

Online targeted advertising leverages an information asymmetry between the advertiser and the recipient. Policymakers in the European Union and the United States aim to decrease this asymmetry by requiring transparency information alongside political advertisements, in the hope of activating citizens’ persuasion knowledge. However, the proposed regulations all present different directions with regard to the required content of transparency information. Consequently, not all proposed interventions will be (equally) effective. Moreover, there is a chance that transparency information has additional consequences, such as increasing privacy concerns or decreasing advertising effectiveness. Using an online experiment (N = 1331), this study addresses these challenges and finds that two regulatory interventions (DSA and HAA) increase persuasion knowledge, while the chance of raising privacy concerns or lowering advertisement effectiveness is present but slim. Results suggest transparency information interventions have some promise, but at the same time underline the limitations of user-facing transparency interventions.

information disclosures, online advertising, persuasion knowledge, political attitudes, Privacy, Transparency

Bibtex

@article{nokey, title = {Shielding citizens? Understanding the impact of political advertisement transparency information}, author = {Dobber, T. and Kruikemeier, S. and Helberger, N. and Goodman, E.}, doi = {https://doi.org/10.1177/14614448231157640}, year = {2023}, date = {2023-04-21}, journal = {New Media & Society}, abstract = {Online targeted advertising leverages an information asymmetry between the advertiser and the recipient. Policymakers in the European Union and the United States aim to decrease this asymmetry by requiring transparency information alongside political advertisements, in the hope of activating citizens’ persuasion knowledge. However, the proposed regulations all present different directions with regard to the required content of transparency information. Consequently, not all proposed interventions will be (equally) effective. Moreover, there is a chance that transparency information has additional consequences, such as increasing privacy concerns or decreasing advertising effectiveness. Using an online experiment (N = 1331), this study addresses these challenges and finds that two regulatory interventions (DSA and HAA) increase persuasion knowledge, while the chance of raising privacy concerns or lowering advertisement effectiveness is present but slim. Results suggest transparency information interventions have some promise, but at the same time underline the limitations of user-facing transparency interventions.}, keywords = {information disclosures, online advertising, persuasion knowledge, political attitudes, Privacy, Transparency}, }

An end to shadow banning? Transparency rights in the Digital Services Act between content moderation and curation

Leerssen, P.
Computer Law & Security Review, vol. 48, 2023

Abstract

This paper offers a legal perspective on the phenomenon of shadow banning: content moderation sanctions which are undetectable to those affected. Drawing on recent social science research, it connects current concerns about shadow banning to novel visibility management techniques in content moderation, such as delisting and demotion. Conventional moderation techniques such as outright content removal or account suspension can be observed by those affected, but these new visibility remedies often cannot. This lends newfound significance to the legal question of moderation transparency rights. The EU Digital Services Act (DSA) is analysed in this light, as the first major legislation to regulate transparency of visibility remedies. In effect, its due process framework prohibits shadow banning with only limited exceptions. In doing so, the DSA surfaces tensions between two competing models for content moderation: as rule-bound administration or as adversarial security conflict. I discuss possible interpretations and trade-offs for this regime, and then turn to a more fundamental problem: how to define visibility reduction as a category of content moderation actions. The concept of visibility reduction or ‘demotions’ is central to both the shadow banning imaginary and to the DSA's safeguards, but its meaning is far from straightforward. Responding to claims that demotion is entirely relative, and therefore not actionable as a category of content moderation sanctions, I show how visibility reduction can still be regulated when defined as ex post adjustments to engagement-based relevance scores. Still, regulating demotion in this way will not cover all exercises of ranking power, since it manifests not only in individual cases of moderation but also through structural acts of content curation; not just by reducing visibility, but by producing visibility.

content curation, Content moderation, DSA, Online platforms, Transparency

Bibtex

@article{nokey, title = {An end to shadow banning? Transparency rights in the Digital Services Act between content moderation and curation}, author = {Leerssen, P.}, url = {https://www.ivir.nl/publications/comment-an-end-to-shadow-banning-transparency-rights-in-the-digital-services-act-between-content-moderation-and-curation/endtoshadowbanning/}, doi = {https://doi.org/10.1016/j.clsr.2023.105790}, year = {2023}, date = {2023-04-11}, journal = {Computer Law & Security Review}, volume = {48}, abstract = {This paper offers a legal perspective on the phenomenon of shadow banning: content moderation sanctions which are undetectable to those affected. Drawing on recent social science research, it connects current concerns about shadow banning to novel visibility management techniques in content moderation, such as delisting and demotion. Conventional moderation techniques such as outright content removal or account suspension can be observed by those affected, but these new visibility remedies often cannot. This lends newfound significance to the legal question of moderation transparency rights. The EU Digital Services Act (DSA) is analysed in this light, as the first major legislation to regulate transparency of visibility remedies. In effect, its due process framework prohibits shadow banning with only limited exceptions. In doing so, the DSA surfaces tensions between two competing models for content moderation: as rule-bound administration or as adversarial security conflict. I discuss possible interpretations and trade-offs for this regime, and then turn to a more fundamental problem: how to define visibility reduction as a category of content moderation actions. The concept of visibility reduction or ‘demotions’ is central to both the shadow banning imaginary and to the DSA's safeguards, but its meaning is far from straightforward. Responding to claims that demotion is entirely relative, and therefore not actionable as a category of content moderation sanctions, I show how visibility reduction can still be regulated when defined as ex post adjustments to engagement-based relevance scores. Still, regulating demotion in this way will not cover all exercises of ranking power, since it manifests not only in individual cases of moderation but also through structural acts of content curation; not just by reducing visibility, but by producing visibility.}, keywords = {content curation, Content moderation, DSA, Online platforms, Transparency}, }

Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer

Irion, K.
2022

Abstract

Free trade agreements are increasingly used to construct an additional layer of protection for source code of software. This comes in the shape of a new prohibition for governments to require access to, or transfer of, source code of software, subject to certain exceptions. A clause on software source code is also part and parcel of an ambitious set of new rules on trade-related aspects of electronic commerce currently negotiated by 86 members of the World Trade Organization. Our understanding to date of how such a commitment inside trade law impacts on governments’ right to regulate digital technologies and the policy space that is allowed under trade law is limited. Access to software source code is for example necessary to meet regulatory and judicial needs in order to ensure that digital technologies are in conformity with individuals’ human rights and societal values. This article will analyze the implications of such a source code clause for current and future digital policies by governments that aim to ensure transparency, fairness and accountability of computer and machine learning algorithms.

accountability, algorithms, application programming interfaces, auditability, Digital trade, fairness, frontpage, source code, Transparency

Bibtex

@article{Irion2022b, title = {Algorithms Off-limits? If digital trade law restricts access to source code of software then accountability will suffer}, author = {Irion, K.}, url = {https://www.ivir.nl/facct22-125-2/}, year = {2022}, date = {2022-06-17}, abstract = {Free trade agreements are increasingly used to construct an additional layer of protection for source code of software. This comes in the shape of a new prohibition for governments to require access to, or transfer of, source code of software, subject to certain exceptions. A clause on software source code is also part and parcel of an ambitious set of new rules on trade-related aspects of electronic commerce currently negotiated by 86 members of the World Trade Organization. Our understanding to date of how such a commitment inside trade law impacts on governments’ right to regulate digital technologies and the policy space that is allowed under trade law is limited. Access to software source code is for example necessary to meet regulatory and judicial needs in order to ensure that digital technologies are in conformity with individuals’ human rights and societal values. This article will analyze the implications of such a source code clause for current and future digital policies by governments that aim to ensure transparency, fairness and accountability of computer and machine learning algorithms.}, keywords = {accountability, algorithms, application programming interfaces, auditability, Digital trade, fairness, frontpage, source code, Transparency}, }

Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows

Irion, K.
Big Data and Global Trade Law, Cambridge University Press, 2021

Abstract

Human rights do remain valid currency in how we approach planetary-scale computation and accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which unravel faster than the reconstruction of fitting transnational governance institutions. The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how the respect for human rights ties in with national constitutionalism that becomes increasingly challenged by the transnational dynamic of digital era transactions. Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.

Artificial intelligence, EU law, frontpage, Human rights, Transparency, WTO law

Bibtex

@incollection{Irion2021bb, title = {Panta Rhei: A European Perspective on Ensuring a High Level of Protection of Human Rights in a World in Which Everything Flows}, author = {Irion, K.}, booktitle = {Big Data and Global Trade Law}, publisher = {Cambridge University Press}, url = {https://www.cambridge.org/core/books/big-data-and-global-trade-law/panta-rhei/B0E5D7851240E0D2F4562B3C6DFF3011}, doi = {https://doi.org/10.1017/9781108919234.015}, year = {2021}, date = {2021-07-05}, abstract = {Human rights do remain valid currency in how we approach planetary-scale computation and accompanying data flows. Today’s system of human rights protection, however, is highly dependent on domestic legal institutions, which unravel faster than the reconstruction of fitting transnational governance institutions. The chapter takes a critical look at the construction of the data flow metaphor as a policy concept inside international trade law. Subsequently, it explores how the respect for human rights ties in with national constitutionalism that becomes increasingly challenged by the transnational dynamic of digital era transactions. Lastly, the chapter turns to international trade law and why its ambitions to govern cross-border data flows will likely not advance efforts to generate respect for human rights. In conclusion, the chapter advocates for a rebalancing act that recognizes human rights inside international trade law.}, keywords = {Artificial intelligence, EU law, frontpage, Human rights, Transparency, WTO law}, }

Prospective Policy Study on Artificial Intelligence and EU Trade Policy

Irion, K. & Williams, J.
2020

Abstract

Artificial intelligence is poised to be the 21st century’s most transformative general purpose technology that mankind ever availed itself of. Artificial intelligence is a catch-all for technologies that can carry out complex processes fairly independently by learning from data. In the form of popular digital services and products, applied artificial intelligence is seeping into our daily lives, for example, as personal digital assistants or as autopiloting of self-driving cars. This is just the beginning of a development over the course of which artificial intelligence will generate transformative products and services that will alter world trade patterns. Artificial intelligence holds enormous promise for our information civilization if we get the governance of artificial intelligence right. What makes artificial intelligence even more fascinating is that the technology can be deployed fairly location-independently. Cross-border trade in digital services which incorporate applied artificial intelligence into their software architecture is ever increasing. That brings artificial intelligence within the purview of international trade law, such as the General Agreement on Trade in Services (GATS) and ongoing negotiations at the World Trade Organization (WTO) on trade-related aspects of electronic commerce. The Dutch Ministry of Foreign Affairs commissioned this study to generate knowledge about the interface between international trade law and European norms and values in the use of artificial intelligence.

Artificial intelligence, EU law, Human rights, Transparency, WTO law

Bibtex

@techreport{Irion2020b, title = {Prospective Policy Study on Artificial Intelligence and EU Trade Policy}, author = {Irion, K. and Williams, J.}, url = {https://www.ivir.nl/ivir_policy-paper_ai-study_online/ https://www.ivir.nl/ivir_artificial-intelligence-and-eu-trade-policy-2/}, year = {2020}, date = {2020-01-21}, abstract = {Artificial intelligence is poised to be the 21st century’s most transformative general purpose technology that mankind ever availed itself of. Artificial intelligence is a catch-all for technologies that can carry out complex processes fairly independently by learning from data. In the form of popular digital services and products, applied artificial intelligence is seeping into our daily lives, for example, as personal digital assistants or as autopiloting of self-driving cars. This is just the beginning of a development over the course of which artificial intelligence will generate transformative products and services that will alter world trade patterns. Artificial intelligence holds enormous promise for our information civilization if we get the governance of artificial intelligence right. What makes artificial intelligence even more fascinating is that the technology can be deployed fairly location-independently. Cross-border trade in digital services which incorporate applied artificial intelligence into their software architecture is ever increasing. That brings artificial intelligence within the purview of international trade law, such as the General Agreement on Trade in Services (GATS) and ongoing negotiations at the World Trade Organization (WTO) on trade-related aspects of electronic commerce. The Dutch Ministry of Foreign Affairs commissioned this study to generate knowledge about the interface between international trade law and European norms and values in the use of artificial intelligence.}, keywords = {Artificial intelligence, EU law, Human rights, Transparency, WTO law}, }

Platform ad archives: promises and pitfalls

Leerssen, P., Ausloos, J., Zarouali, B., Helberger, N. & Vreese, C.H. de
Internet Policy Review, vol. 8, num: 4, 2019

Abstract

This paper discusses the new phenomenon of platform ad archives. Over the past year, leading social media platforms have installed publicly accessible databases documenting their political advertisements, and several countries have moved to regulate them. If designed and implemented properly, ad archives can correct for structural informational asymmetries in the online advertising industry, and thereby improve accountability through litigation and through publicity. However, present implementations leave much to be desired. We discuss key criticisms, suggest several improvements and identify areas for future research and debate.

Advertising, frontpage, Micro-targeting, Platforms, Politics, Technologie en recht, Transparency

Bibtex

@article{Leerssen2019b, title = {Platform ad archives: promises and pitfalls}, author = {Leerssen, P. and Ausloos, J. and Zarouali, B. and Helberger, N. and Vreese, C.H. de}, url = {https://policyreview.info/articles/analysis/platform-ad-archives-promises-and-pitfalls}, doi = {https://doi.org/10.14763/2019.4.1421}, year = {2019}, date = {2019-10-10}, journal = {Internet Policy Review}, volume = {8}, number = {4}, abstract = {This paper discusses the new phenomenon of platform ad archives. Over the past year, leading social media platforms have installed publicly accessible databases documenting their political advertisements, and several countries have moved to regulate them. If designed and implemented properly, ad archives can correct for structural informational asymmetries in the online advertising industry, and thereby improve accountability through litigation and through publicity. However, present implementations leave much to be desired. We discuss key criticisms, suggest several improvements and identify areas for future research and debate.}, keywords = {Advertising, frontpage, Micro-targeting, Platforms, Politics, Technologie en recht, Transparency}, }

Designing for the Better by Taking Users into Account: A Qualitative Evaluation of User Control Mechanisms in (News) Recommender Systems

Harambam, J., Bountouridis, D., Makhortykh, M. & van Hoboken, J.
RecSys'19: Proceedings of the 13th ACM Conference on Recommender Systems, pp: 69-77, 2019

Abstract

Recommender systems (RS) are on the rise in many domains. While they offer great promises, they also raise concerns: lack of transparency, reduction of diversity, little to no user control. In this paper, we align with the normative turn in computer science which scrutinizes the ethical and societal implications of RS. We focus and elaborate on the concept of user control because that mitigates multiple problems at once. Taking the news industry as our domain, we conducted four focus groups, or moderated think-aloud sessions, with Dutch news readers (N=21) to systematically study how people evaluate different control mechanisms (at the input, process, and output phase) in a News Recommender Prototype (NRP). While these mechanisms are sometimes met with distrust about the actual control they offer, we found that an intelligible user profile (including reading history and flexible preferences settings), coupled with possibilities to influence the recommendation algorithms is highly valued, especially when these control mechanisms can be operated in relation to achieving personal goals. By bringing (future) users' perspectives to the fore, this paper contributes to a richer understanding of why and how to design for user control in recommender systems.

diversity, filter bubble, frontpage, Mediarecht, recommender systems, Technologie en recht, Transparency

Bibtex

@inproceedings{Harambam2019b, title = {Designing for the Better by Taking Users into Account: A Qualitative Evaluation of User Control Mechanisms in (News) Recommender Systems}, author = {Harambam, J. and Bountouridis, D. and Makhortykh, M. and van Hoboken, J.}, url = {https://www.ivir.nl/publicaties/download/paper_recsys_19.pdf https://dl.acm.org/citation.cfm?id=3347014}, year = {2019}, date = {2019-09-19}, booktitle = {RecSys'19: Proceedings of the 13th ACM Conference on Recommender Systems}, pages = {69-77}, abstract = {Recommender systems (RS) are on the rise in many domains. While they offer great promises, they also raise concerns: lack of transparency, reduction of diversity, little to no user control. In this paper, we align with the normative turn in computer science which scrutinizes the ethical and societal implications of RS. We focus and elaborate on the concept of user control because that mitigates multiple problems at once. Taking the news industry as our domain, we conducted four focus groups, or moderated think-aloud sessions, with Dutch news readers (N=21) to systematically study how people evaluate different control mechanisms (at the input, process, and output phase) in a News Recommender Prototype (NRP). While these mechanisms are sometimes met with distrust about the actual control they offer, we found that an intelligible user profile (including reading history and flexible preferences settings), coupled with possibilities to influence the recommendation algorithms is highly valued, especially when these control mechanisms can be operated in relation to achieving personal goals. By bringing (future) users' perspectives to the fore, this paper contributes to a richer understanding of why and how to design for user control in recommender systems.}, keywords = {diversity, filter bubble, frontpage, Mediarecht, recommender systems, Technologie en recht, Transparency}, }