Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making

Abstract

The ongoing substitution of human decision-makers by automated decision-making (ADM) systems in a whole range of areas raises the question of whether and, if so, under which conditions, ADM is acceptable and fair. So far, this debate has been led primarily by academics, civil society, technology developers and members of the expert groups tasked with developing ethical guidelines for ADM. Ultimately, however, ADM affects citizens, who will live with, act upon and have to accept the authority of ADM systems. The paper aims to contribute to this larger debate by providing deeper insights into the question of whether, and if so, why and under which conditions, citizens are inclined to accept ADM as fair. The results of a survey (N = 958) among a representative sample of the Dutch adult population show that most respondents assume that AI-driven ADM systems are fairer than human decision-makers. A more nuanced view emerges from an analysis of the responses: emotions, expectations about AI being data- and calculation-driven, and the role of the programmer – among other dimensions – are cited as reasons for (un)fairness of decisions by AI or humans. Individual characteristics such as age and education level influenced not only perceptions of AI fairness, but also the reasons provided for such perceptions. The paper concludes with a normative assessment of the findings and suggestions for future debate and research.

Artificial intelligence, automated decision making, fairness, frontpage, Technologie en recht

BibTeX

@Article{Helberger2020f,
  title    = {Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making},
  author   = {Helberger, N. and Araujo, T. and Vreese, C.H. de},
  url      = {https://www.sciencedirect.com/science/article/pii/S0267364920300613?dgcid=author},
  doi      = {10.1016/j.clsr.2020.105456},
  year     = {2020},
  date     = {2020-09-15},
  journal  = {Computer Law & Security Review},
  volume   = {39},
  pages    = {},
  abstract = {The ongoing substitution of human decision-makers by automated decision-making (ADM) systems in a whole range of areas raises the question of whether and, if so, under which conditions, ADM is acceptable and fair. So far, this debate has been led primarily by academics, civil society, technology developers and members of the expert groups tasked with developing ethical guidelines for ADM. Ultimately, however, ADM affects citizens, who will live with, act upon and have to accept the authority of ADM systems. The paper aims to contribute to this larger debate by providing deeper insights into the question of whether, and if so, why and under which conditions, citizens are inclined to accept ADM as fair. The results of a survey (N = 958) among a representative sample of the Dutch adult population show that most respondents assume that AI-driven ADM systems are fairer than human decision-makers. A more nuanced view emerges from an analysis of the responses: emotions, expectations about AI being data- and calculation-driven, and the role of the programmer – among other dimensions – are cited as reasons for (un)fairness of decisions by AI or humans. Individual characteristics such as age and education level influenced not only perceptions of AI fairness, but also the reasons provided for such perceptions. The paper concludes with a normative assessment of the findings and suggestions for future debate and research.},
  keywords = {Artificial intelligence, automated decision making, fairness, frontpage, Technologie en recht},
}