The commodification of trust

Blockchain & Society Policy Research Lab Research Nodes, num: 1, 2021

Abstract

Fundamental, wide-ranging, and highly consequential transformations take place in interpersonal and systemic trust relations due to the rapid adoption of complex, planetary-scale digital technological innovations. Trust is remediated by planetary-scale techno-social systems, which leads to the privatization of trust production in society, and the ultimate commodification of trust itself. Modern societies rely on communal, public, and private logics of trust production. Communal logics produce trust by the group for the group, and are based on familial, ethnic, religious, or tribal relations, professional associations, epistemic or value communities, and groups with a shared location or a shared past. Public trust logics developed in the context of the modern state, and produce trust as a free public service. Abstract, institutionalized frameworks and institutions, such as the press, public education, science, and various arms of the bureaucratic state, create familiarity, control, and insurance in social, political, and economic relations. Finally, private trust producers sell confidence as a product: lawyers, accountants, credit rating agencies, and insurers, but also commercial brands, offer trust for a fee. With the emergence of the internet and digitization, a new class of private trust producers has appeared. Online reputation management services, distributed ledgers, and AI-based predictive systems are widely adopted technological infrastructures designed to facilitate trust-necessitating social and economic interactions by controlling the past, the present, and the future, respectively. These systems enjoy immense economic success, and they are adopted en masse by individuals and institutional actors alike. The emergence of these private, technical means of trust production paves the way towards the wide-scale commodification of trust, where trust is produced as a commercial activity, conducted by private parties, for economic gain, often far removed from the loci where trust-necessitating social interactions take place. The remediation and consequent privatization and commodification of trust production have a number of potentially adverse social effects: they may decontextualize trust relationships; they remove trust from its local social, cultural, and relational contexts; and they change the calculus of interpersonal trust relations. Maybe more importantly, as more and more social and economic relations become conditional upon having access to, and good standing in, private trust infrastructures, commodification turns trust into a question of continuous labor or devastating exclusion. By invoking Karl Polanyi’s work on fictitious commodities, I argue that the privatization and commodification of trust may have a catastrophic impact on the most fundamental layers of the social fabric.

ai, blockchains, commodification, frontpage, Informatierecht, Karl Polanyi, reputation, trust, trust production

Bibtex

@Article{Bodó2021, title = {The commodification of trust}, author = {Bodó, B.}, url = {https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3843707}, year = {2021}, date = {2021-05-17}, journal = {Blockchain \& Society Policy Research Lab Research Nodes}, number = {1}, abstract = {Fundamental, wide-ranging, and highly consequential transformations take place in interpersonal and systemic trust relations due to the rapid adoption of complex, planetary-scale digital technological innovations. Trust is remediated by planetary-scale techno-social systems, which leads to the privatization of trust production in society, and the ultimate commodification of trust itself. Modern societies rely on communal, public, and private logics of trust production. Communal logics produce trust by the group for the group, and are based on familial, ethnic, religious, or tribal relations, professional associations, epistemic or value communities, and groups with a shared location or a shared past. Public trust logics developed in the context of the modern state, and produce trust as a free public service. Abstract, institutionalized frameworks and institutions, such as the press, public education, science, and various arms of the bureaucratic state, create familiarity, control, and insurance in social, political, and economic relations. Finally, private trust producers sell confidence as a product: lawyers, accountants, credit rating agencies, and insurers, but also commercial brands, offer trust for a fee. With the emergence of the internet and digitization, a new class of private trust producers has appeared. Online reputation management services, distributed ledgers, and AI-based predictive systems are widely adopted technological infrastructures designed to facilitate trust-necessitating social and economic interactions by controlling the past, the present, and the future, respectively. These systems enjoy immense economic success, and they are adopted en masse by individuals and institutional actors alike. The emergence of these private, technical means of trust production paves the way towards the wide-scale commodification of trust, where trust is produced as a commercial activity, conducted by private parties, for economic gain, often far removed from the loci where trust-necessitating social interactions take place. The remediation and consequent privatization and commodification of trust production have a number of potentially adverse social effects: they may decontextualize trust relationships; they remove trust from its local social, cultural, and relational contexts; and they change the calculus of interpersonal trust relations. Maybe more importantly, as more and more social and economic relations become conditional upon having access to, and good standing in, private trust infrastructures, commodification turns trust into a question of continuous labor or devastating exclusion. By invoking Karl Polanyi’s work on fictitious commodities, I argue that the privatization and commodification of trust may have a catastrophic impact on the most fundamental layers of the social fabric.}, keywords = {ai, blockchains, commodification, frontpage, Informatierecht, Karl Polanyi, reputation, trust, trust production}, }

News Recommenders and Cooperative Explainability: Confronting the contextual complexity in AI explanations

ai, frontpage, news recommenders, Technologie en recht

Bibtex

@Report{Drunen2020b, title = {News Recommenders and Cooperative Explainability: Confronting the contextual complexity in AI explanations}, author = {Drunen, M. van and Ausloos, J. and Appelman, N. and Helberger, N.}, url = {https://www.ivir.nl/publicaties/download/Visiepaper-explainable-AI-final.pdf}, year = {2020}, date = {2020-11-03}, keywords = {ai, frontpage, news recommenders, Technologie en recht}, }

Netherlands/Research

Automating Society Report 2020, pp. 164-175, 2020

Abstract

How are AI-based systems being used by private companies and public authorities in Europe? The new report by AlgorithmWatch and Bertelsmann Stiftung sheds light on what role automated decision-making (ADM) systems play in our lives. As a result of the most comprehensive research on the issue conducted in Europe so far, the report covers the current use of and policy debates around ADM systems in 16 European countries and at EU level.

ai, automated decision making, frontpage, Technologie en recht

Bibtex

@Chapter{Fahy2020b, title = {Netherlands/Research}, author = {Fahy, R. and Appelman, N.}, url = {https://www.ivir.nl/publicaties/download/Automating-Society-Report-2020.pdf https://automatingsociety.algorithmwatch.org/}, year = {2020}, date = {2020-10-29}, pages = {164-175}, abstract = {How are AI-based systems being used by private companies and public authorities in Europe? The new report by AlgorithmWatch and Bertelsmann Stiftung sheds light on what role automated decision-making (ADM) systems play in our lives. As a result of the most comprehensive research on the issue conducted in Europe so far, the report covers the current use of and policy debates around ADM systems in 16 European countries and at EU level.}, keywords = {ai, automated decision making, frontpage, Technologie en recht}, }

Discrimination, artificial intelligence, and algorithmic decision-making

vol. 2019, 2019

Abstract

This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health, and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing, for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.

ai, discriminatie, frontpage, kunstmatige intelligentie, Mensenrechten

Bibtex

@Report{Borgesius2019, title = {Discrimination, artificial intelligence, and algorithmic decision-making}, author = {Zuiderveen Borgesius, F.}, url = {https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73}, year = {2019}, date = {2019-02-08}, volume = {2019}, abstract = {This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health, and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing, for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality. We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.}, keywords = {ai, discriminatie, frontpage, kunstmatige intelligentie, Mensenrechten}, }

Automated Decision-Making Fairness in an AI-driven World: Public Perceptions, Hopes and Concerns

Araujo, T., Vreese, C.H. de, Helberger, N., Kruikemeier, S., Weert, J. van, Bol, N., Oberski, D., Pechenizkiy, M., Schaap, G. & Taylor, L.
2018

Abstract

Ongoing advances in artificial intelligence (AI) are increasingly part of scientific efforts as well as the public debate and the media agenda, raising hopes and concerns about the impact of automated decision making across different sectors of our society. This topic is receiving increasing attention at both national and cross-national levels. The present report contributes to informing this public debate, providing the results of a survey of 958 participants recruited from a high-quality sample of the Dutch population. It provides an overview of public knowledge, perceptions, hopes and concerns about the adoption of AI and ADM across different societal sectors in the Netherlands. This report is part of a research collaboration between the Universities of Amsterdam, Tilburg, Radboud, Utrecht and Eindhoven (TU/e) on automated decision making, and forms input to the groups’ research on fairness in automated decision making.

ai, algoritmes, Artificial intelligence, automated decision making, frontpage

Bibtex

@Report{Araujo2018, title = {Automated Decision-Making Fairness in an AI-driven World: Public Perceptions, Hopes and Concerns}, author = {Araujo, T. and Vreese, C.H. de and Helberger, N. and Kruikemeier, S. and Weert, J. van and Bol, N. and Oberski, D. and Pechenizkiy, M. and Schaap, G. and Taylor, L.}, url = {https://www.ivir.nl/publicaties/download/Automated_Decision_Making_Fairness.pdf}, year = {2018}, date = {2018-10-05}, abstract = {Ongoing advances in artificial intelligence (AI) are increasingly part of scientific efforts as well as the public debate and the media agenda, raising hopes and concerns about the impact of automated decision making across different sectors of our society. This topic is receiving increasing attention at both national and cross-national levels. The present report contributes to informing this public debate, providing the results of a survey of 958 participants recruited from a high-quality sample of the Dutch population. It provides an overview of public knowledge, perceptions, hopes and concerns about the adoption of AI and ADM across different societal sectors in the Netherlands. This report is part of a research collaboration between the Universities of Amsterdam, Tilburg, Radboud, Utrecht and Eindhoven (TU/e) on automated decision making, and forms input to the groups’ research on fairness in automated decision making.}, keywords = {ai, algoritmes, Artificial intelligence, automated decision making, frontpage}, }

Before the Singularity: Copyright and the Challenges of Artificial Intelligence

González Otero, B. & Quintais, J.
Kluwer Copyright Blog, vol. 2018, 2018

ai, Copyright, frontpage

Bibtex

@Article{Otero2018, title = {Before the Singularity: Copyright and the Challenges of Artificial Intelligence}, author = {González Otero, B. and Quintais, J.}, url = {http://copyrightblog.kluweriplaw.com/2018/09/25/singularity-copyright-challenges-artificial-intelligence/}, year = {2018}, date = {2018-09-27}, journal = {Kluwer Copyright Blog}, volume = {2018}, keywords = {ai, Copyright, frontpage}, }