Online political microtargeting involves monitoring people’s online behaviour and using the collected data, sometimes enriched with other data, to show people targeted political advertisements. Online political microtargeting is widely used in the US; Europe may not be far behind. This paper maps microtargeting’s promises and threats to democracy. For example, microtargeting promises to optimise the match between the electorate’s concerns and political campaigns, and to boost campaign engagement and political participation. But online microtargeting could also threaten democracy. For instance, a political party could, misleadingly, present itself as a different one-issue party to different individuals. And data collection for microtargeting raises privacy concerns. We sketch possibilities for policymakers who seek to regulate online political microtargeting, and discuss which measures would be possible while complying with the right to freedom of expression under the European Convention on Human Rights.
Political micro-targeting (PMT) has become a popular topic both in academia and in public discussion after the surprise results of the 2016 US presidential election, the UK vote on leaving the European Union, and a number of general elections in Europe in 2017. Yet we still know little about whether PMT is a tool with such destructive potential that it requires close societal control, or whether it is “just” a new phenomenon with currently unknown capacities that can ultimately be incorporated into our political processes. In this article we identify the points where we think we need to further develop our analytical capacities around PMT. We argue that we need to decouple research from the US context and, through more non-US and comparative research, develop a better understanding of the macro-, meso-, and micro-level factors that affect the adoption and success of PMT across different countries. One of the most under-researched macro-level factors is law. We argue that PMT research must develop a better understanding of law, especially in Europe, where the regulatory frameworks around platforms, personal data, and political and commercial speech do shape the use and effectiveness of PMT. We point out that incorporating such new factors calls for more sophisticated research designs, which currently rely too heavily on qualitative methods and make too little use of the data that exists on PMT. Finally, we call for distancing PMT research from the hype surrounding new PMT capabilities, and from the moral panics that quickly develop around its uses.
Algorithmic agents permeate every instant of our online existence. Based on digital profiles built from the massive surveillance of our digital lives, algorithmic agents rank search results, filter our emails, hide and show news items on social network feeds, and try to guess what products we might buy next for ourselves and for others, what movies we want to watch, and when we might be pregnant. Algorithmic agents select, filter, and recommend products, information, and people; they increasingly customise our physical environments as well, including their temperature and mood. Increasingly, algorithmic agents do not just select from a range of human-created alternatives; they also create. Burgeoning algorithmic agents are capable of providing us with content made just for us, and of engaging with us through one-of-a-kind, personalised interactions. Studying these algorithmic agents presents a host of methodological, ethical, and logistical challenges. The objectives of our paper are twofold. The first is to describe one possible approach to researching the individual and societal effects of algorithmic recommenders, and to share our experiences with the academic community. The second is to contribute to a more fundamental discussion about the ethical and legal issues of “tracking the trackers”, as well as the costs and trade-offs involved. Our paper contributes to the discussion of the relative merits, costs, and benefits of different approaches to ethically and legally sound research on algorithmic governance. We argue that, besides shedding light on how users interact with algorithmic agents, we also need to understand how different methods of monitoring our algorithmically controlled digital environments compare to each other in terms of costs and benefits.
We conclude our article with a number of concrete suggestions for how to address the practical, ethical and legal challenges of researching algorithms and their effects on users and society.
Policymakers, academics, and others fear that personalised news can lead to filter bubbles: unique information spaces for every individual. Filter bubbles are said to pose a danger to our democracy. Based on a user’s political preferences, for instance, a personalised news site can give certain topics or opinions a more or less prominent place. Some think that personalisation could lead to a new form of pillarisation, in which users of online personalised news encounter few differing political ideas. In this contribution we discuss empirical research on the extent and effects of personalisation. We distinguish between self-selected personalisation, where people explicitly indicate which topics they want to receive information about, and pre-selected personalisation, where algorithms determine which topics users receive information about. We conclude that, so far, there is little empirical evidence that justifies the concerns about filter bubbles.
Some fear that personalised communication can lead to information cocoons or filter bubbles. For instance, a personalised news website could give more prominence to conservative or liberal media items, based on the (assumed) political interests of the user. As a result, users may encounter only a limited range of political ideas. We synthesise empirical research on the extent and effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice. We conclude that, at present, there is little empirical evidence to warrant worries about filter bubbles.
14 July 2015.
This paper explores the social, demographic, and attitudinal basis of consumer support for a change from the status quo in digital cultural distribution. First, we identify how different online and offline, legal and illegal, free and paid content acquisition channels are used in the Dutch media market, using a cluster-based classification of respondents according to their cultural consumption. Second, we assess the effect of cultural consumption on support for the introduction of a Copyright Compensation System (CCS), which, for a small monthly fee, would legalise currently infringing online social practices such as private copying from illegal sources and online sharing of copyrighted works. Finally, we link these two analyses to identify the factors that drive the dynamics of change in digital cultural consumption habits.
This short essay explores how the notion of hacktivism changes due to easily accessible, military-grade Privacy Enhancing Technologies (PETs). PETs, technological tools that provide anonymous communication and protect users from online surveillance, enable new forms of online political activism. Through a short summary of the ad-hoc vigilante group Anonymous, this article describes hacktivism 1.0 as electronic civil disobedience conducted by outsiders. Through an analysis of Wikileaks, the anonymous whistleblowing website, it describes how strong PETs enable the development of hacktivism 2.0, in which the source of threat shifts from outsiders to insiders: insiders have access to documents with which power can be exposed and, by using PETs, can anonymously engage in political action. We also describe the emergence of a third generation of hacktivists, who use PETs to disengage and create their own autonomous spaces rather than to engage with power through anonymous whistleblowing.