Social Welfare, Risk Profiling and Fundamental Rights: The Case of SyRI in the Netherlands

JIPITEC, vol. 12, num: 4, pp: 257-271, 2021

Abstract

This article discusses the use of automated decision-making (ADM) systems by public administrative bodies, particularly systems designed to combat social-welfare fraud, from a European fundamental rights law perspective. The article begins by outlining the emerging fundamental rights issues in relation to ADM systems used by public administrative bodies. Building upon this, the article critically analyses a recent landmark judgment from the Netherlands and uses this as a case study for discussion of the application of fundamental rights law to ADM systems used by public authorities more generally. In the so-called SyRI judgment, the District Court of The Hague held that a controversial automated welfare-fraud detection system (SyRI), which allows the linking and analysing of data from an array of government agencies to generate fraud-risk reports on people, violated the right to private life, guaranteed under Article 8 of the European Convention on Human Rights (ECHR). The Court held that SyRI was insufficiently transparent, and contained insufficient safeguards, to protect the right to privacy, in violation of Article 8 ECHR. This was one of the first times an ADM system used by welfare authorities has been halted on the basis of Article 8 ECHR. The article critically analyses the SyRI judgment from a fundamental rights perspective, including by examining how the Court brought principles contained in the General Data Protection Regulation within the rubric of Article 8 ECHR, as well as the importance the Court attaches to the principle of transparency under Article 8 ECHR. Finally, the article discusses how the Dutch government responded to the judgment and examines proposed new legislation, which is arguably more invasive, concluding with some lessons that can be drawn for the broader policy and legal debate on ADM systems used by public authorities.

automated decision making, frontpage, fundamentele rechten, Grondrechten, Mensenrechten, nederland, SyRI-wetgeving

Bibtex

@Article{nokey, title = {Social Welfare, Risk Profiling and Fundamental Rights: The Case of SyRI in the Netherlands}, author = {Appelman, N. and Fahy, R. and van Hoboken, J.}, url = {https://www.ivir.nl/publicaties/download/jipitec_2021_4.pdf https://www.jipitec.eu/issues/jipitec-12-4-2021/5407}, year = {2021}, date = {2021-12-16}, journal = {JIPITEC}, volume = {12}, number = {4}, pages = {257-271}, abstract = {This article discusses the use of automated decision-making (ADM) systems by public administrative bodies, particularly systems designed to combat social-welfare fraud, from a European fundamental rights law perspective. The article begins by outlining the emerging fundamental rights issues in relation to ADM systems used by public administrative bodies. Building upon this, the article critically analyses a recent landmark judgment from the Netherlands and uses this as a case study for discussion of the application of fundamental rights law to ADM systems used by public authorities more generally. In the so-called SyRI judgment, the District Court of The Hague held that a controversial automated welfare-fraud detection system (SyRI), which allows the linking and analysing of data from an array of government agencies to generate fraud-risk reports on people, violated the right to private life, guaranteed under Article 8 of the European Convention on Human Rights (ECHR). The Court held that SyRI was insufficiently transparent, and contained insufficient safeguards, to protect the right to privacy, in violation of Article 8 ECHR. This was one of the first times an ADM system used by welfare authorities has been halted on the basis of Article 8 ECHR. The article critically analyses the SyRI judgment from a fundamental rights perspective, including by examining how the Court brought principles contained in the General Data Protection Regulation within the rubric of Article 8 ECHR, as well as the importance the Court attaches to the principle of transparency under Article 8 ECHR. Finally, the article discusses how the Dutch government responded to the judgment and examines proposed new legislation, which is arguably more invasive, concluding with some lessons that can be drawn for the broader policy and legal debate on ADM systems used by public authorities.}, keywords = {automated decision making, frontpage, fundamentele rechten, Grondrechten, Mensenrechten, nederland, SyRI-wetgeving}, }

Editorial independence in an automated media system

Internet Policy Review, vol. 10, num: 3, 2021

Abstract

The media has increasingly grown to rely on automated decision-making to produce and distribute news. This trend challenges our understanding of editorial independence by transforming the role of human editorial judgment and creating new dependencies on external software and data providers, engineers, and platforms. Recent policy initiatives such as the EU’s Media Action Plan and Digital Services Act are now beginning to revisit the way law can enable the media to act independently in the context of new technological tools and actors. Fully understanding and addressing the challenges automation poses to editorial independence, however, first requires better normative insight into the functions editorial independence performs in European media policy. This article provides a normative framework of editorial independence’s functions in European media policy and uses it to explore the new challenges posed by the automation of editorial decision-making.

automated decision making, frontpage, Mediarecht, onafhankelijkheid

Bibtex

@Article{nokey, title = {Editorial independence in an automated media system}, author = {Drunen, M. van}, url = {https://policyreview.info/articles/analysis/editorial-independence-automated-media-system}, doi = {https://doi.org/10.14763/2021.3.1569}, year = {2021}, date = {2021-09-13}, journal = {Internet Policy Review}, volume = {10}, number = {3}, abstract = {The media has increasingly grown to rely on automated decision-making to produce and distribute news. This trend challenges our understanding of editorial independence by transforming the role of human editorial judgment and creating new dependencies on external software and data providers, engineers, and platforms. Recent policy initiatives such as the EU’s Media Action Plan and Digital Services Act are now beginning to revisit the way law can enable the media to act independently in the context of new technological tools and actors. Fully understanding and addressing the challenges automation poses to editorial independence, however, first requires better normative insight into the functions editorial independence performs in European media policy. This article provides a normative framework of editorial independence’s functions in European media policy and uses it to explore the new challenges posed by the automation of editorial decision-making.}, keywords = {automated decision making, frontpage, Mediarecht, onafhankelijkheid}, }

Centering the Law in the Digital State

Cobbe, J., Seng Ah Lee, M., Singh, J. & Janssen, H.
Computer, vol. 53, num: 10, pp: 47-58, 2020

Abstract

Driven by the promise of increased efficiencies and cost-savings, the public sector has shown much interest in automated decision-making (ADM) technologies. However, the rule of law and fundamental principles of good government are being lost along the way.

automated decision making

Bibtex

@Article{Cobbe2020, title = {Centering the Law in the Digital State}, author = {Cobbe, J. and Seng Ah Lee, M. and Singh, J. and Janssen, H.}, doi = {https://doi.org/10.1109/MC.2020.3006623}, year = {2020}, date = {2020-09-25}, journal = {Computer}, volume = {53}, number = {10}, pages = {47-58}, abstract = {Driven by the promise of increased efficiencies and cost-savings, the public sector has shown much interest in automated decision-making (ADM) technologies. However, the rule of law and fundamental principles of good government are being lost along the way.}, keywords = {automated decision making}, }

An approach to a fundamental rights impact assessment to automated decision-making

International Data Privacy Law, vol. 10, num: 1, pp: 76-106, 2020

Abstract

Companies and other private institutions see great and promising profits in the use of automated decision-making (‘ADM’) for commercial, financial, or work-processing efficiency purposes. Meanwhile, ADM based on a data subject’s personal data may (severely) impact that person’s fundamental rights and freedoms. The General Data Protection Regulation (GDPR) provides a regulatory framework that applies whenever a controller considers and deploys ADM onto individuals on the basis of their personal data. In the design stage of the intended ADM, Article 35(3)(a) obliges a controller to carry out a Data Protection Impact Assessment (DPIA), part of which is an assessment of the ADM’s impact on individual rights and freedoms. Article 22 GDPR determines under what conditions ADM is allowed and endows data subjects with increased protection. Research among companies of various sizes has shown that there is (legal) uncertainty about the interpretation of the GDPR (including the provisions relevant to ADM). The author’s first objective is to identify ways forward by offering practical handles for executing a DPIA that includes a slidable assessment of impacts on data subjects’ fundamental rights. This assessment is based on four benchmarks that should help assess the gravity of potential impacts: i) the impact on the fundamental right(s) at stake, ii) the context in which the ADM is used, iii) who benefits from the use of personal data in the ADM, and iv) who is in control of the data flows in the ADM. From these benchmarks an overall fundamental rights impact assessment of the ADM should arise. A second objective is to indicate potential factors and measures that a controller should consider in its risk management after the assessment. The proposed approach should help foster fair, compliant and trustworthy ADM and contains directions for future research.
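To make the four-benchmark structure easier to picture, here is a minimal sketch in Python of how a slidable, per-benchmark scoring record might be represented. It is purely illustrative and not the author’s instrument: the class and field names, the four-point impact scale, and the worst-case aggregation rule are all assumptions introduced here.

```python
from dataclasses import dataclass
from enum import IntEnum


class Impact(IntEnum):
    """Slidable impact scale per benchmark (the four-point scale is assumed)."""
    NEGLIGIBLE = 1
    LIMITED = 2
    SIGNIFICANT = 3
    SEVERE = 4


@dataclass
class FundamentalRightsAssessment:
    """One field per benchmark from the article (labels paraphrased)."""
    rights_at_stake: Impact          # i) impact on the fundamental right(s) at stake
    context_of_use: Impact           # ii) context in which the ADM is used
    beneficiary_of_data_use: Impact  # iii) who benefits from the personal-data use
    control_over_data_flows: Impact  # iv) who controls the data flows in the ADM

    def overall(self) -> Impact:
        """Aggregate conservatively: the gravest benchmark drives the outcome."""
        return max(self.rights_at_stake, self.context_of_use,
                   self.beneficiary_of_data_use, self.control_over_data_flows)


# Hypothetical example: a controller deploying ADM to screen loan applications.
assessment = FundamentalRightsAssessment(
    rights_at_stake=Impact.SIGNIFICANT,
    context_of_use=Impact.LIMITED,
    beneficiary_of_data_use=Impact.SIGNIFICANT,
    control_over_data_flows=Impact.SEVERE,
)
print(assessment.overall().name)  # -> SEVERE
```

Taking the maximum reflects a conservative reading in which the gravest single benchmark determines the overall rating; an actual DPIA under Article 35 GDPR would weigh the benchmarks qualitatively rather than reduce them to a number.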

automated decision making, Fundamental rights, horizontal relations, impact assessment

Bibtex

@Article{Janssen2020, title = {An approach to a fundamental rights impact assessment to automated decision-making}, author = {Janssen, H.}, doi = {https://doi.org/10.1093/idpl/ipz028}, year = {2020}, date = {2020-03-06}, journal = {International Data Privacy Law}, volume = {10}, number = {1}, pages = {76-106}, abstract = {Companies and other private institutions see great and promising profits in the use of automated decision-making (‘ADM’) for commercial, financial, or work-processing efficiency purposes. Meanwhile, ADM based on a data subject’s personal data may (severely) impact that person’s fundamental rights and freedoms. The General Data Protection Regulation (GDPR) provides a regulatory framework that applies whenever a controller considers and deploys ADM onto individuals on the basis of their personal data. In the design stage of the intended ADM, Article 35(3)(a) obliges a controller to carry out a Data Protection Impact Assessment (DPIA), part of which is an assessment of the ADM’s impact on individual rights and freedoms. Article 22 GDPR determines under what conditions ADM is allowed and endows data subjects with increased protection. Research among companies of various sizes has shown that there is (legal) uncertainty about the interpretation of the GDPR (including the provisions relevant to ADM). The author’s first objective is to identify ways forward by offering practical handles for executing a DPIA that includes a slidable assessment of impacts on data subjects’ fundamental rights. This assessment is based on four benchmarks that should help assess the gravity of potential impacts: i) the impact on the fundamental right(s) at stake, ii) the context in which the ADM is used, iii) who benefits from the use of personal data in the ADM, and iv) who is in control of the data flows in the ADM. From these benchmarks an overall fundamental rights impact assessment of the ADM should arise. A second objective is to indicate potential factors and measures that a controller should consider in its risk management after the assessment. The proposed approach should help foster fair, compliant and trustworthy ADM and contains directions for future research.}, keywords = {automated decision making, Fundamental rights, horizontal relations, impact assessment}, }

Netherlands/Research

Automating Society Report 2020, AlgorithmWatch & Bertelsmann Stiftung, pp: 164-175, 2020

Abstract

How are AI-based systems being used by private companies and public authorities in Europe? The new report by AlgorithmWatch and Bertelsmann Stiftung sheds light on what role automated decision-making (ADM) systems play in our lives. As a result of the most comprehensive research on the issue conducted in Europe so far, the report covers the current use of and policy debates around ADM systems in 16 European countries and at EU level.

ai, automated decision making, frontpage, Technologie en recht

Bibtex

@Chapter{Fahy2020b, title = {Netherlands/Research}, author = {Fahy, R. and Appelman, N.}, url = {https://www.ivir.nl/publicaties/download/Automating-Society-Report-2020.pdf https://automatingsociety.algorithmwatch.org/}, year = {2020}, date = {2020-10-29}, pages = {164-175}, abstract = {How are AI-based systems being used by private companies and public authorities in Europe? The new report by AlgorithmWatch and Bertelsmann Stiftung sheds light on what role automated decision-making (ADM) systems play in our lives. As a result of the most comprehensive research on the issue conducted in Europe so far, the report covers the current use of and policy debates around ADM systems in 16 European countries and at EU level.}, keywords = {ai, automated decision making, frontpage, Technologie en recht}, }

Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making

Helberger, N., Araujo, T. & Vreese, C.H. de
Computer Law & Security Review, vol. 39, 2020

Abstract

The ongoing substitution of human decision makers by automated decision-making (ADM) systems in a whole range of areas raises the question of whether and, if so, under which conditions ADM is acceptable and fair. So far, this debate has been primarily led by academics, civil society, technology developers and members of the expert groups tasked to develop ethical guidelines for ADM. Ultimately, however, ADM affects citizens, who will live with, act upon and ultimately have to accept the authority of ADM systems. The paper aims to contribute to this larger debate by providing deeper insights into the question of whether, and if so, why and under which conditions, citizens are inclined to accept ADM as fair. The results of a survey (N = 958) with a representative sample of the Dutch adult population show that most respondents assume that AI-driven ADM systems are fairer than human decision-makers. A more nuanced view emerges from an analysis of the responses, with emotions, expectations about AI being data- and calculation-driven, as well as the role of the programmer – among other dimensions – being cited as reasons for (un)fairness by AI or humans. Individual characteristics such as age and education level influenced not only perceptions about AI fairness, but also the reasons provided for such perceptions. The paper concludes with a normative assessment of the findings and suggestions for the future debate and research.

Artificial intelligence, automated decision making, fairness, frontpage, Technologie en recht

Bibtex

@Article{Helberger2020f, title = {Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making}, author = {Helberger, N. and Araujo, T. and Vreese, C.H. de}, url = {https://www.sciencedirect.com/science/article/pii/S0267364920300613?dgcid=author}, doi = {https://doi.org/10.1016/j.clsr.2020.105456}, year = {2020}, date = {2020-09-15}, journal = {Computer Law & Security Review}, volume = {39}, abstract = {The ongoing substitution of human decision makers by automated decision-making (ADM) systems in a whole range of areas raises the question of whether and, if so, under which conditions ADM is acceptable and fair. So far, this debate has been primarily led by academics, civil society, technology developers and members of the expert groups tasked to develop ethical guidelines for ADM. Ultimately, however, ADM affects citizens, who will live with, act upon and ultimately have to accept the authority of ADM systems. The paper aims to contribute to this larger debate by providing deeper insights into the question of whether, and if so, why and under which conditions, citizens are inclined to accept ADM as fair. The results of a survey (N = 958) with a representative sample of the Dutch adult population show that most respondents assume that AI-driven ADM systems are fairer than human decision-makers. A more nuanced view emerges from an analysis of the responses, with emotions, expectations about AI being data- and calculation-driven, as well as the role of the programmer – among other dimensions – being cited as reasons for (un)fairness by AI or humans. Individual characteristics such as age and education level influenced not only perceptions about AI fairness, but also the reasons provided for such perceptions. The paper concludes with a normative assessment of the findings and suggestions for the future debate and research.}, keywords = {Artificial intelligence, automated decision making, fairness, frontpage, Technologie en recht}, }

The Golden Age of Personal Data: How to Regulate an Enabling Fundamental Right?

Oostveen, M. & Irion, K.
In: Bakhoum M., Conde Gallego B., Mackenrodt MO., Surblytė-Namavičienė G. (eds) Personal Data in Competition, Consumer Protection and Intellectual Property Law. MPI Studies on Intellectual Property and Competition Law, vol 28. Springer, Berlin, Heidelberg, 2018

Abstract

New technologies, purposes and applications to process individuals’ personal data are being developed on a massive scale. But we have not only entered the ‘golden age of personal data’ in terms of its exploitation: ours is also the ‘golden age of personal data’ in terms of regulation of its use. Understood as an enabling right, the architecture of EU data protection law is capable of protecting against many of the negative short- and long-term effects of contemporary data processing. Against the backdrop of big data applications, we evaluate how the implementation of privacy and data protection rules protects against the short- and long-term effects of contemporary data processing. We conclude that, from the perspective of protecting individual fundamental rights and freedoms, it would be worthwhile to explore alternative (legal) approaches instead of relying on EU data protection law alone to cope with contemporary data processing.

automated decision making, Big data, Data protection, frontpage, General Data Protection Regulation, Privacy, profiling

Bibtex

@Chapter{Oostveen2018, title = {The Golden Age of Personal Data: How to Regulate an Enabling Fundamental Right?}, author = {Oostveen, M. and Irion, K.}, url = {https://link.springer.com/chapter/10.1007/978-3-662-57646-5_2}, year = {2018}, date = {2018-11-20}, abstract = {New technologies, purposes and applications to process individuals’ personal data are being developed on a massive scale. But we have not only entered the ‘golden age of personal data’ in terms of its exploitation: ours is also the ‘golden age of personal data’ in terms of regulation of its use. Understood as an enabling right, the architecture of EU data protection law is capable of protecting against many of the negative short- and long-term effects of contemporary data processing. Against the backdrop of big data applications, we evaluate how the implementation of privacy and data protection rules protects against the short- and long-term effects of contemporary data processing. We conclude that, from the perspective of protecting individual fundamental rights and freedoms, it would be worthwhile to explore alternative (legal) approaches instead of relying on EU data protection law alone to cope with contemporary data processing.}, keywords = {automated decision making, Big data, Data protection, frontpage, General Data Protection Regulation, Privacy, profiling}, }

Automated Decision-Making Fairness in an AI-driven World: Public Perceptions, Hopes and Concerns

Araujo, T., Vreese, C.H. de, Helberger, N., Kruikemeier, S., Weert, J. van, Bol, N., Oberski, D., Pechenizkiy, M., Schaap, G. & Taylor, L.
2018

Abstract

Ongoing advances in artificial intelligence (AI) are increasingly part of scientific efforts as well as the public debate and the media agenda, raising hopes and concerns about the impact of automated decision making across different sectors of our society. This topic is receiving increasing attention at both national and cross-national levels. The present report contributes to informing this public debate, providing the results of a survey with 958 participants recruited from a high-quality sample of the Dutch population. It provides an overview of public knowledge, perceptions, hopes and concerns about the adoption of AI and ADM across different societal sectors in the Netherlands. This report is part of a research collaboration between the Universities of Amsterdam, Tilburg, Radboud, Utrecht and Eindhoven (TU/e) on automated decision making, and forms input to the groups’ research on fairness in automated decision making.

ai, algoritmes, Artificial intelligence, automated decision making, frontpage

Bibtex

@Report{Araujo2018, title = {Automated Decision-Making Fairness in an AI-driven World: Public Perceptions, Hopes and Concerns}, author = {Araujo, T. and Vreese, C.H. de and Helberger, N. and Kruikemeier, S. and Weert, J. van and Bol, N. and Oberski, D. and Pechenizkiy, M. and Schaap, G. and Taylor, L.}, url = {https://www.ivir.nl/publicaties/download/Automated_Decision_Making_Fairness.pdf}, year = {2018}, date = {2018-10-05}, abstract = {Ongoing advances in artificial intelligence (AI) are increasingly part of scientific efforts as well as the public debate and the media agenda, raising hopes and concerns about the impact of automated decision making across different sectors of our society. This topic is receiving increasing attention at both national and cross-national levels. The present report contributes to informing this public debate, providing the results of a survey with 958 participants recruited from a high-quality sample of the Dutch population. It provides an overview of public knowledge, perceptions, hopes and concerns about the adoption of AI and ADM across different societal sectors in the Netherlands. This report is part of a research collaboration between the Universities of Amsterdam, Tilburg, Radboud, Utrecht and Eindhoven (TU/e) on automated decision making, and forms input to the groups’ research on fairness in automated decision making.}, keywords = {ai, algoritmes, Artificial intelligence, automated decision making, frontpage}, }