Algorithmic agents permeate every instant of our online existence. Based on digital profiles built through the massive surveillance of our online lives, algorithmic agents rank search results, filter our emails, hide and show news items in social network feeds, and try to guess which products we might buy next for ourselves and for others, which movies we want to watch, and when we might be pregnant. Algorithmic agents select, filter, and recommend products, information, and people; they increasingly customize our physical environments as well, from temperature to mood. Increasingly, algorithmic agents do not just select from a range of human-created alternatives, but also create: they can provide us with content made just for us and engage with us through one-of-a-kind, personalized interactions. Studying these algorithmic agents presents a host of methodological, ethical, and logistical challenges. The objectives of our paper are twofold. The first is to describe one possible approach to researching the individual and societal effects of algorithmic recommenders, and to share our experiences with the academic community. The second is to contribute to a more fundamental discussion of the ethical and legal issues of “tracking the trackers”, as well as of the costs and trade-offs involved. Our paper contributes to the discussion on the relative merits, costs, and benefits of different approaches to ethically and legally sound research on algorithmic governance. We argue that, besides shedding light on how users interact with algorithmic agents, we also need to understand how different methods of monitoring our algorithmically controlled digital environments compare to one another in terms of costs and benefits.
We conclude our article with a number of concrete suggestions for how to address the practical, ethical and legal challenges of researching algorithms and their effects on users and society.
Policy makers, scholars, and others fear that personalised news can lead to filter bubbles: unique information spaces for each individual. Filter bubbles are said to pose a danger to our democracy. Based on a user's political preferences, a personalised news site could, for instance, give certain topics or opinions a more or less prominent place. Personalisation is thought to potentially lead to a new form of pillarisation, in which users of personalised online news encounter few differing political ideas. In this contribution we discuss empirical research on the extent and effects of personalisation. We distinguish between self-selected personalisation, where people explicitly indicate which topics they want to receive information about, and pre-selected personalisation, where algorithms determine which topics users receive information about. We conclude that, to date, there is little empirical evidence that justifies the concerns about filter bubbles.
Some fear that personalised communication can lead to information cocoons or filter bubbles. For instance, a personalised news website could give more prominence to conservative or liberal media items, based on the (assumed) political interests of the user. As a result, users may encounter only a limited range of political ideas. We synthesise empirical research on the extent and effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice. We conclude that at present there is little empirical evidence that warrants any worries about filter bubbles.
14 July 2015.
This paper explores the social, demographic, and attitudinal basis of consumer support for a change from the status quo in digital cultural distribution. First, we identify how different online and offline, legal and illegal, free and paid content-acquisition channels are used in the Dutch media market, using a cluster-based classification of respondents according to their cultural consumption. Second, we assess the effect of cultural consumption on support for the introduction of a Copyright Compensation System (CCS), which, for a small monthly fee, would legalize currently infringing online social practices such as private copying from illegal sources and online sharing of copyrighted works. Finally, we link these two analyses to identify the factors that drive the dynamics of change in digital cultural consumption habits.
This short essay explores how the notion of hacktivism is changing due to easily accessible, military-grade Privacy Enhancing Technologies (PETs). PETs, technological tools that provide anonymous communications and protect users from online surveillance, enable new forms of online political activism. Through a short summary of the ad-hoc vigilante group Anonymous, this article describes hacktivism 1.0 as electronic civil disobedience conducted by outsiders. Through an analysis of Wikileaks, the anonymous whistleblowing website, it describes how strong PETs enable the development of hacktivism 2.0, in which the source of the threat shifts from outsiders to insiders: insiders who have access to documents with which power can be exposed, and who, by using PETs, can anonymously engage in political action. We also describe the emergence of a third generation of hacktivists, who use PETs to disengage and create their own autonomous spaces rather than to engage with power through anonymous whistleblowing.