- Tarlach McGonagle gives keynotes on hate speech and speaks at EU’s Annual Colloquium on Fundamental Rights
- Karlijn van den Heuvel wins the “Internet & Recht” thesis prize 2018
- Gerard Mom 1950-2018
The Truth dominates many public discussions today. Conventional truths from established epistemic authorities about all sorts of issues, from climate change to terrorist attacks, are increasingly challenged by ordinary citizens and presidents alike. Many have therefore proclaimed that we have entered a post-truth era: a world in which objective facts are no longer relevant. Media and politics speak in alarmist terms about how fake news, conspiracy theories and alternative facts threaten democratic societies by destabilizing the Truth ‐ a clear sign of a moral panic. In this essay, I first explore which sociological changes have led to (so much commotion about) the alleged demise of the Truth. In contrast to the idea that we have moved beyond it, I argue that we are in the midst of public battles over the Truth: at stake is who gets to decide what is true, and why. I then discuss and criticize the dominant counter-reaction (re-establishing the idea of one objective and irrefutable truth), which I see as an unsuccessful de-politicisation strategy. Drawing on research and experiments with epistemic democracy in the field of science studies, I end with a more effective and democratic alternative for dealing with knowledge in today's complex information landscape.
The deployment of various forms of AI, most notably machine learning algorithms, is radically transforming many domains of social life. In this paper we focus on the news industry, where different algorithms are used to customize news offerings to increasingly specific audience preferences. While this personalization of news enables media organizations to be more receptive to their audience, it can be questioned whether current deployments of algorithmic news recommenders (ANR) live up to their emancipatory promise. As in various other domains, people have little knowledge of what personal data is used and how such algorithmic curation comes about, let alone any concrete means to influence these data-driven processes. Instead of going down the intricate avenue of trying to make ANR more transparent, we explore in this article ways to give people more influence over the information that news recommendation algorithms provide, by thinking about and enabling possibilities to express voice. After differentiating four ideal-typical modalities of expressing voice (alternation, awareness, adjustment and obfuscation), illustrated with currently existing empirical examples, we present and argue for algorithmic recommender personae as a way for people to take more control over the algorithms that curate their news provision.
New technologies, purposes and applications to process individuals’ personal data are being developed on a massive scale. But we have not only entered the ‘golden age of personal data’ in terms of its exploitation: ours is also the ‘golden age of personal data’ in terms of the regulation of its use. Understood as an enabling right, the architecture of EU data protection law is capable of protecting against many of the negative short- and long-term effects of contemporary data processing. Against the backdrop of big data applications, we evaluate how the implementation of privacy and data protection rules protects against the short- and long-term effects of contemporary data processing. We conclude that, from the perspective of protecting individual fundamental rights and freedoms, it would be worthwhile to explore alternative (legal) approaches instead of relying on EU data protection law alone to cope with contemporary data processing.
The purpose of this paper is to explore the risks of privatised enforcement in the field of terrorism propaganda, stemming from the EU Code of Conduct on countering illegal hate speech online. By shedding light on this Code, the author argues that its implementation may undermine the rule of law and give rise to private censorship. In order to mitigate these risks, IT companies should improve their transparency, especially towards users whose content has been affected. Where automated means are used, the companies should always have in place some form of human intervention in order to contextualise posts. At the EU level, the Commission should provide IT companies with clearer guidelines regarding their liability exemption under the e-Commerce Directive. This would help prevent a race to the bottom in which intermediaries choose to interpret and apply the most stringent national laws in order to fully secure their liability exemption. The paper further elaborates on the fine line between ‘terrorist content’ and ‘illegal hate speech’ and the need for more detailed definitions.
This book compares the results of twenty years of international media assistance in the five countries of the western Balkans. It asks what happens to imported models when they are applied to newly evolving media systems in societies in transition. Albania, Bosnia-Herzegovina, Kosovo, Macedonia, and Serbia undertook a range of media reforms to conform with accession requirements of the European Union and the standards of the Council of Europe, among others. The essays explore the nexus between the democratic transformation of the media and international media assistance in these countries. The cross-national analysis concludes that the effects of international assistance are highly constrained by local contexts. In hindsight it becomes clear that escalating media assistance does not necessarily improve outcomes.
In this chapter we argue that the right to data protection is the poster child of EU citizenship in the digital era. We start by providing a brief overview of the gradual construction of the right to personal data protection in the EU. We then identify a range of actors who have played a particular role in the building process, including EU citizens themselves. Next, we review the current legal ‘architecture’ of the right to the protection of personal data and discuss whether it could serve as a model for the future development of EU citizenship, notwithstanding remaining challenges at the level of national implementation and public and private compliance with EU rules. Finally, we reflect on the future of the right to data protection, and its contribution to the development of EU citizenship as a legal regime.
Opinion piece in Het Parool, 23 October 2018.
Keynote at KEI Seminar, Appraising the WIPO Broadcast Treaty and its Implications on Access to Culture, Geneva 3-4 October 2018
IRIS Special, European Audiovisual Observatory: Strasbourg, 2018, 150 pp.
Separating fact from fiction in today’s media is becoming mission impossible. In the era of the #fakenews hashtag, the internet, and the media in general, are confronted with the emergence of fiction that is sometimes much stranger than the truth. So what rules and initiatives exist in Europe to help ensure the accuracy and objectivity of news and current affairs reporting? How far can the European and the various national legislators go to protect us from dubious reporting, or at least ensure that codes of good conduct exist?
Prompted by the ongoing development of content personalization by social networks and mainstream news brands, and recent debates about balancing algorithmic and editorial selection, this study explores what audiences think about news selection mechanisms and why. Analysing data from a 26-country survey (N = 53,314), we report the extent to which audiences believe story selection by editors and story selection by algorithms are good ways to get news online and, using multi-level models, explore the relationships that exist between individuals’ characteristics and those beliefs. The results show that, collectively, audiences believe algorithmic selection guided by a user’s past consumption behaviour is a better way to get news than editorial curation. There are, however, significant variations in these beliefs at the individual level. Age, trust in news, concerns about privacy, mobile news access, paying for news, and six other variables had effects. Our results are partly in line with current general theory on algorithmic appreciation, but diverge in our findings on the relative appreciation of algorithms and experts, and in how the appreciation of algorithms can differ according to the data that drive them. We believe this divergence is partly due to our study’s focus on news, showing algorithmic appreciation has context-specific characteristics.
The California Consumer Privacy Act (CCPA), slated to enter into force on 1 January 2020, borrows some cutting-edge ideas from the EU’s and others’ privacy regimes while also experimenting with new approaches to data privacy. Importantly, the CCPA envisages an online advertisement market in which businesses are prevented from “getting high on information,” breaches are promptly notified, and consumers are autonomous participants with the ability to sell their data at will. Where the CCPA breaks new ground is in protecting consumers from retaliation for opting out of the sale of their data. Thus, if it lives up to its potential, the CCPA could catalyse a permanent restructuring of the online data mining business. Our contribution will shed light on the new CCPA and offer some observations in comparing it with the EU’s General Data Protection Regulation (GDPR).
Ongoing advances in artificial intelligence (AI) are increasingly part of scientific efforts as well as the public debate and the media agenda, raising hopes and concerns about the impact of automated decision making (ADM) across different sectors of our society. This topic is receiving increasing attention at both national and cross-national levels. The present report contributes to informing this public debate by providing the results of a survey with 958 participants recruited from a high-quality sample of the Dutch population. It provides an overview of public knowledge, perceptions, hopes and concerns about the adoption of AI and ADM across different societal sectors in the Netherlands. This report is part of a research collaboration between the Universities of Amsterdam, Tilburg, Radboud, Utrecht and Eindhoven (TU/e) on automated decision making, and forms input to the groups’ research on fairness in automated decision making.
Chapter in: Critical Indigenous Rights Studies, G. Corradi, K. de Feyter, E. Desmet, K. Vanhees (eds.), Routledge, 2018.
The protection of traditional cultural expressions (TCEs) is not a straightforward issue. At first sight, characteristics of TCEs and their protection suggest similarity to copyright works. However, TCE protection should not be viewed as simply an (isolated) intellectual property issue. Rather, the protection of TCEs is part of a broader (political) context and struggle for rights. The chapter focuses on showing the complexity of the interrelation between copyright and indigenous peoples’ rights. It argues that a cultural and indigenous rights perspective could help address tensions deriving from differing worldviews, the application of dominant, existing legal frameworks and diverging understandings of protecting creativity and works of culture.