Trust in the Digital Society – New UvA Research Priority Area
IViR and the Law School are leading a new, university-wide, interdisciplinary Research Priority Area collaboration on the topic of Trust in the Digital Society.
The digital society relies on an ever-expanding network of digital infrastructures. This forces us to rethink some of our basic assumptions about one of the most fundamental resources in all interpersonal, social, and economic relations: trust.
- Despite their widespread use, we know little about the trustworthiness of our digital infrastructures. At best, their trustworthiness is difficult to establish (as with AI); at worst, they have proven to be untrustworthy (as is the case with social media).
- Some of these digital infrastructures play key roles in interpersonal and societal trust dynamics. They disrupt existing trust relations and offer new ways to trust each other.
- Our scientific methods and theories face serious limits when it comes to studying how social structures, institutions, and processes change under conditions of rapid technological transformation. Research on trust in technology and trust by technology is siloed: it focuses on narrowly defined technologies (AI) or problems (system security), and lacks a comprehensive, multidisciplinary, long-term view.
Trust is a fundamental building block of our society. But trust, both interpersonal and institutional, is constantly being disrupted – willfully or inadvertently – by technological innovation. While we rely more and more on digital technologies – among other things, to trust each other – multiple surveys point to growing distrust in these technologies and in the companies that provide them. Our trust relationships in the digital society seem to be in crisis, and we lack reassuring ways to deal with the destabilization of such a crucial social, political, and economic resource. In the Trust RPA, we aim to assess the emerging distrust within our societies and develop interventions that reestablish trust and trustworthiness at both the interpersonal and the institutional level.
Take, for example, the current pandemic, during which we enlisted various novel technologies: from mRNA vaccines, via contact-tracing apps and digital vaccination passports, to remote teaching and working technologies. Each of these technologies raised fundamental trust questions. Can the novel, untested vaccine technology be trusted? Can the scientists, experts, elected government officials, and politicians who developed, tested, and mandated vaccines be trusted? How can citizens trust information sources, and how can one recognize misinformation? What happens to interpersonal trust relations when an increasing share of our social interactions at the workplace, in schools, and among family members is facilitated and mediated by digital technologies? What is the role of digital platforms in the shifting dynamics of trust in society? How does the government's increasing use of digital technologies affect the trust relationship between the state and the citizen? How can we ensure the trustworthiness of our technological infrastructures? Can we prevent the breakdown of society in the absence of a shared, common core of knowledge, trust in institutions, trust in each other, and trust in the technological infrastructures supporting our society?
The pandemic only highlighted a more general problem around trust in the digital society. Various digital technologies have transformed societal trust relations at an unprecedented scale. Social media, for example, have contributed to global communities and offer easy access to various ways of communicating, but they are also used by hostile actors to sow mistrust and discord in those communities. Incidents such as data leaks raise the question of whether we can trust the state to be a good custodian of our test, trace, and vaccination data. The increasing number of cyberattacks makes people wonder whether their personal data, or valuable and/or sensitive business data, can safely be exchanged, and whether that data will be processed with respect for their interests.
New technologies, from nuclear energy to online marketplaces, have presented new, often unknown forms of risk to individuals and to society at large. Trust is a key resource for maintaining interpersonal, social, economic, and political relations in the face of such risks. But trust, if we want to ensure it is not misplaced, requires independently verifiable trustworthiness of techno-social systems.
Trust, in the context of digitization, faces a double challenge. First, digital infrastructures need better, more concrete, and more robust trustworthiness safeguards and guarantees than they have now. The current piecemeal approach, in which computer scientists, competition lawyers, economists, UX designers, and business evangelists each try to address the serious trustworthiness problems of digital technologies, has so far failed to prevent mass data leaks, biased algorithmic decisions, social, political, and economic destabilization, or even genocide.
Second, we nevertheless use these fundamentally untrustworthy infrastructures to produce trust in social relations. Blockchains and smart contracts, automated filtering and recommender systems, and online reputation management platforms are often explicitly designed to disrupt and replace time-tested, locally embedded, trust-producing social and cultural practices and public institutions. The interpersonal and institutional frameworks that so far helped us to trust each other are being replaced by opaque, unaccountable, profit-driven, heavily automated, and lightly regulated foreign private entities with their own economic, political, social, and cultural agendas. And as the underlying trust-producing infrastructures change, so does the nature of the interpersonal and societal trust they enable.
The goal of the RPA is to address the crisis of trust in the digital society by (1) fostering new, cross-cutting, fundamental, multidisciplinary research on digitization and trust, and (2) coordinating existing UvA research taking place in FdR, FNWI, FEB, FGW, and FMG. By using the RPA funding as seed money to generate substantial second- and third-stream funding, and by working closely with public and private stakeholders on current, real-world problems, it aims to position the UvA at the forefront of global digital trust research. In short, this proposal aims to establish an innovative, self-sustaining, interdisciplinary research and policy cluster with a global reach on the topic of trust in the digital society, covering the individual, institutional, legal, and technical aspects of trust, distrust, and trustworthiness in the context of digital technologies. Its mission, ultimately, is to contribute to a more trusting society by actively transferring its findings on trust and technology to a wide group of societal stakeholders: citizens, firms, and government bodies.