Frederik Zuiderveen Borgesius
Online stores can present a different price to each customer. Such algorithmic personalised pricing can lead to advanced forms of price discrimination based on the characteristics and behaviour of individual consumers. We conducted two consumer surveys among a representative sample of the Dutch population (N=1233 and N=1202), to analyse consumer attitudes towards a list of examples of price discrimination and dynamic pricing. A vast majority finds online price discrimination unfair and unacceptable, and thinks it should be banned. However, some pricing strategies that have been used by companies for decades are almost equally unpopular. We analyse the results to better understand why people dislike many types of price discrimination.
This article introduces U.S. lawyers and academics to the normative foundations, attributes, and strategic approach to regulating personal data advanced by the European Union’s General Data Protection Regulation (“GDPR”). We explain the genesis of the GDPR, which is best understood as an extension and refinement of existing requirements imposed by the 1995 Data Protection Directive; describe the GDPR’s approach and provisions; and make predictions about the GDPR’s short- and medium-term implications. The GDPR is the most consequential regulatory development in information policy in a generation. The GDPR brings personal data into a detailed and protective regulatory regime, which will influence personal data usage worldwide. Understood properly, the GDPR encourages firms to develop information governance frameworks, to bring data use in-house, and to keep humans in the loop in decision making. Companies with direct relationships with consumers have strategic advantages under the GDPR, compared to third-party advertising firms on the internet. To reach these objectives, the GDPR uses big sticks, structural elements that make proving violations easier, but only a few carrots. The GDPR will complicate and restrain some information-intensive business models. But the GDPR will also enable approaches previously impossible under less-protective approaches.
This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing, for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more. The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality.
We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.
In the European Union, the General Data Protection Regulation (GDPR) provides comprehensive rules for the processing of personal data. In addition, the EU lawmaker intends to adopt specific rules to protect confidentiality of communications, in a separate ePrivacy Regulation. Some have argued that there is no need for such additional rules for communications confidentiality. This paper discusses the protection of the right to confidentiality of communications in Europe. We look at the right’s origins as a fundamental right to assess the rationale for protecting the right. We also analyse how the right is currently protected under the European Convention on Human Rights and under EU law. We show that the right to communications confidentiality protects three values: trust in communication services, privacy, and freedom of expression. The right aims to ensure that individuals and businesses can safely entrust communication to service providers. Initially, the right protected only postal letters, but it has gradually developed into a strong safeguard for the protection of confidentiality of communications, regardless of the technology used. Hence, the right does not merely serve individual privacy interests, but also other interests that are crucial for the functioning of our information society. We conclude that separate EU rules to protect communications confidentiality, next to the GDPR, are justified and necessary to protect trust, privacy and freedom of expression.
Online political microtargeting involves monitoring people’s online behaviour, and using the collected data, sometimes enriched with other data, to show people targeted political advertisements. Online political microtargeting is widely used in the US; Europe may not be far behind. This paper maps microtargeting’s promises and threats to democracy. For example, microtargeting promises to optimise the match between the electorate’s concerns and political campaigns, and to boost campaign engagement and political participation. But online microtargeting could also threaten democracy. For instance, a political party could, misleadingly, present itself as a different one-issue party to different individuals. And data collection for microtargeting raises privacy concerns. We sketch possibilities for policymakers if they seek to regulate online political microtargeting. We discuss which measures would be possible, while complying with the right to freedom of expression under the European Convention on Human Rights.
On the internet, we encounter take-it-or-leave-it choices regarding our privacy on a daily basis. In Europe, online tracking for targeted advertising generally requires the internet users’ consent to be lawful. Some websites use a tracking wall, a barrier that visitors can only pass if they consent to tracking by third parties. When confronted with such a tracking wall, many people click ‘I agree’ to tracking. A survey that we conducted shows that most people find tracking walls unfair and unacceptable. We analyse under which conditions the ePrivacy Directive and the General Data Protection Regulation allow tracking walls. We provide a list of circumstances to assess when a tracking wall makes consent invalid. We also explore how the EU lawmaker could regulate tracking walls, for instance in the ePrivacy Regulation. It should be seriously considered to ban tracking walls, at least in certain circumstances.
Draft version. Final version published in Common Market Law Review, 2017, nr. 5, p. 1427-1466.
In modern markets, many companies offer so-called “free” services and monetize consumer data they collect through those services. This paper argues that consumer law and data protection law can usefully complement each other. Data protection law can also inform the interpretation of consumer law. Using consumer rights, consumers should be able to challenge excessive collection of their personal data. Consumer organizations have used consumer law to tackle data protection infringements. The interplay of data protection law and consumer protection law provides exciting opportunities for a more integrated vision on “data consumer law”.
Online shops could offer each website customer a different price. Such personalized pricing can lead to advanced forms of price discrimination based on individual characteristics of consumers, which may be provided, obtained, or assumed. An online shop can recognize customers, for instance through cookies, and categorize them as price-sensitive or price-insensitive. Subsequently, it can charge (presumed) price-insensitive people higher prices. This paper explores personalized pricing from a legal and an economic perspective. From an economic perspective, there are valid arguments in favour of price discrimination, but its effect on total consumer welfare is ambiguous. Regardless, many people regard personalized pricing as unfair or manipulative. The paper analyses how this dislike of personalized pricing may be linked to economic analysis and to other norms or values. Next, the paper examines whether European data protection law applies to personalized pricing. Data protection law applies if personal data are processed, and this paper argues that that is generally the case when prices are personalized. Data protection law requires companies to be transparent about the purpose of personal data processing, which implies that they must inform customers if they personalize prices. Subsequently, consumers have to give consent. If enforced, data protection law could thereby play a significant role in mitigating any adverse effects of personalized pricing. It could help to unearth how prevalent personalized pricing is and how people respond to transparency about it.
In its judgment, the Dutch Supreme Court (Hoge Raad) considers how the rights to privacy and data protection relate to the right to freedom of expression.
On 20 June 2017, Axel Arnbak and Frederik Zuiderveen Borgesius spoke at the Dutch Senate (Eerste Kamer) at an Expert Meeting on Privacy. The meeting focused on two bills, 'Computercriminaliteit III' (Computer Crime III, concerning, among other things, hacking by the police) and 'Vastleggen en bewaren kentekengegevens door politie' (on the use of automatic number plate recognition cameras by the police).
This study, commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the LIBE Committee, appraises the European Commission’s proposal for an ePrivacy Regulation. The study assesses whether the proposal would ensure that the right to the protection of personal data, the right to respect for private life and communications, and related rights enjoy a high standard of protection. The study also highlights the proposal’s potential benefits and drawbacks more generally.
Algorithmic agents permeate every instant of our online existence. Based on our digital profiles built from the massive surveillance of our digital existence, algorithmic agents rank search results, filter our emails, hide and show news items on social network feeds, try to guess what products we might buy next for ourselves and for others, what movies we want to watch, and when we might be pregnant. Algorithmic agents select, filter, and recommend products, information, and people; they increasingly customize our physical environments, including the temperature and the mood. Increasingly, algorithmic agents don’t just select from the range of human-created alternatives, but they also create. Burgeoning algorithmic agents are capable of providing us with content made just for us, and engage with us through one-of-a-kind, personalized interactions. Studying these algorithmic agents presents a host of methodological, ethical, and logistical challenges. The objectives of our paper are two-fold. The first aim is to describe one possible approach to researching the individual and societal effects of algorithmic recommenders, and to share our experiences with the academic community. The second is to contribute to a more fundamental discussion about the ethical and legal issues of “tracking the trackers”, as well as the costs and trade-offs involved. Our paper will contribute to the discussion on the relative merits, costs and benefits of different approaches to ethically and legally sound research on algorithmic governance. We will argue that besides shedding light on how users interact with algorithmic agents, we also need to be able to understand how different methods of monitoring our algorithmically controlled digital environments compare to each other in terms of costs and benefits.
We conclude our article with a number of concrete suggestions for how to address the practical, ethical and legal challenges of researching algorithms and their effects on users and society.
Forthcoming in J. Polonetsky, O. Tene, E. Selinger (eds.), Cambridge Handbook of Consumer Privacy, Cambridge University Press 2017.
In this chapter we discuss the relation between privacy and freedom of expression in Europe. In principle, the two rights have equal weight in Europe – which right prevails depends on the circumstances of a case. We use the Google Spain judgment of the Court of Justice of the European Union, sometimes called the ‘right to be forgotten’ judgment, to illustrate the difficulties when balancing the two rights. The court decided in Google Spain that people have, under certain conditions, the right to have search results for their name delisted. We discuss how Google and Data Protection Authorities deal with such delisting requests in practice. Delisting requests illustrate that balancing privacy and freedom of expression interests will always remain difficult.
A lawyer submitted a ‘right to be forgotten’ request to Google, concerning a blog post about a criminal conviction of the lawyer abroad. The Rechtbank Rotterdam (Rotterdam District Court) decided that Google may no longer link to the blog post when people search for the lawyer’s name. The court granted the delisting request because the blog post concerns a criminal conviction: a special category of personal data. The court’s reasoning on special categories of personal data creates problems for freedom of expression. This contribution explores how those problems could be reduced.
Policymakers, scholars and others fear that personalised news can lead to filter bubbles: unique information spaces for each individual. Filter bubbles are said to pose a danger to our democracy. Based on a user’s political preferences, a personalised news site could, for example, give certain topics or opinions a more or less prominent place. Personalisation is thought to enable a new form of pillarisation, in which users of personalised online news encounter few diverging political ideas. In this contribution we discuss empirical research on the extent and effects of personalisation. We distinguish self-selected personalisation, where people explicitly indicate which topics they want to receive information about, and pre-selected personalisation, where algorithms determine which topics users receive information about. We conclude that, so far, there is little empirical evidence that justifies the worries about filter bubbles.
In: Handboek consumentenrecht: een overzicht van de rechtspositie van de consument, E.H. Hondius & G.J. Rijken (eds.), Zutphen: Paris 2015, p. 483-497.
In the YS. and M. and S. judgment, the Court of Justice of the European Union ruled on three procedures in which Dutch judges asked for clarification on the right of asylum seekers to have access to the documents regarding the decision on asylum applications. The judgment is relevant for interpreting the concept of personal data and the scope of the right of access under the Data Protection Directive, and the right to good administration in the EU Charter of Fundamental Rights. At first glance, the judgment seems disappointing from the viewpoint of individual rights. Nevertheless, in our view the judgment provides sufficient grounds for effective access rights to the minutes in future asylum cases.
Informed consent as a means to protect privacy is flawed, especially when considering the privacy problems of behavioral targeting. Policymakers should pay more attention to a combined approach that both protects and empowers individuals.
In Europe, data protection law is the main legal instrument to protect privacy and to promote the fair processing of personal data. Data protection law only applies if ‘personal data’ are processed. There is much debate on whether data protection law applies when companies process data about people without tying a name to those data. Such data are used, for example, for behavioural targeting. With this marketing technique, a form of personalised communication, companies track people’s behaviour on the internet and use the collected information to show them targeted advertisements. This contribution analyses the debate on the scope of the concept of ‘personal data’, and draws two conclusions. First, an analysis of current law shows that, at least according to the interpretation of the European privacy regulators, the rules for personal data generally apply to behavioural targeting. Second, from a normative perspective, those rules should indeed apply.
Trouw, De Verdieping, 27 May 2016, p. 2.
To detect deviant behaviour among citizens, the government links all kinds of data from its databases and runs analyses on them. What does this mean for the relationship between citizens and that government?
Some fear that personalised communication can lead to information cocoons or filter bubbles. For instance, a personalised news website could give more prominence to conservative or liberal media items, based on the (assumed) political interests of the user. As a result, users may encounter only a limited range of political ideas. We synthesise empirical research on the extent and effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice. We conclude that at present there is little empirical evidence that warrants any worries about filter bubbles.
Information about millions of people is collected for behavioural targeting, a type of marketing that involves tracking people’s online behaviour for targeted advertising. It is hotly debated whether data protection law applies to behavioural targeting. Many behavioural targeting companies say that, as long as they do not tie names to data they hold about individuals, they do not process any personal data, and that, therefore, data protection law does not apply to them. European Data Protection Authorities, however, take the view that a company processes personal data if it uses data to single out a person, even if it cannot tie a name to these data. This paper argues that data protection law should indeed apply to behavioural targeting. Companies can often tie a name to nameless data about individuals. Furthermore, behavioural targeting relies on collecting information about individuals, singling out individuals, and targeting ads to individuals. Many privacy risks remain, regardless of whether companies tie a name to the information they hold about a person. A name is merely one of the identifiers that can be tied to data about a person, and it is not even the most practical identifier for behavioural targeting. Seeing data used to single out a person as personal data fits the rationale for data protection law: protecting fairness and privacy.
We use electronic communication networks for more than simply traditional telecommunications: we access the news, buy goods online, file our taxes, contribute to public debate, and more. As a result, a wider array of privacy interests is implicated for users of electronic communications networks and services. This development calls into question the scope of electronic communications privacy rules. This paper analyses the scope of these rules, taking into account the rationale and the historic background of the European electronic communications privacy framework. We develop a framework for analysing the scope of electronic communications privacy rules using three approaches: (i) a service-centric approach, (ii) a data-centric approach, and (iii) a value-centric approach. We discuss the strengths and weaknesses of each approach. The current e-Privacy Directive contains a complex blend of the three approaches, which does not seem to be based on a thorough analysis of their strengths and weaknesses. The upcoming review of the directive announced by the European Commission provides an opportunity to improve the scoping of the rules.
Open data are held to contribute to a wide variety of social and political goals, including strengthening transparency, public participation and democratic accountability, promoting economic growth and innovation, and enabling greater public sector efficiency and cost savings. However, releasing government data that contain personal information may threaten privacy and related rights and interests. In this paper we ask how these privacy interests can be respected, without unduly hampering benefits from disclosing public sector information. We propose a balancing framework to help public authorities address this question in different contexts. The framework takes into account different levels of privacy risks for different types of data. It also separates decisions about access and re-use, and highlights a range of different disclosure routes. A circumstance catalogue lists factors that might be considered when assessing whether, under which conditions, and how a dataset can be released. While open data remains an important route for the publication of government information, we conclude that it is not the only route, and there must be clear and robust public interest arguments in order to justify the disclosure of personal information as open data.
This paper discusses the regulation of mass metadata surveillance in Europe through the lens of the landmark judgment in which the Court of Justice of the European Union struck down the Data Retention Directive. The controversial directive obliged telecom and Internet access providers in Europe to retain metadata of all their customers for intelligence and law enforcement purposes, for a period of up to two years. In the ruling, the Court declared the directive in violation of the human rights to privacy and data protection. The Court also confirmed that the mere collection of metadata interferes with the human right to privacy. In addition, the Court developed three new criteria for assessing the level of data security required from a human rights perspective: security measures should take into account the risk of unlawful access to data, and the data’s quantity and sensitivity. While organizations that campaigned against the directive have welcomed the ruling, we warn of the risk of proceduralization of mass surveillance law. The Court did not fully condemn mass surveillance that relies on metadata, but left open the possibility of mass surveillance if policymakers lay down sufficient procedural safeguards. Such proceduralization brings systematic risks for human rights. Government agencies, with ample resources, can design complicated systems of procedural oversight for mass surveillance – and claim that mass surveillance is lawful, even if it affects millions of innocent people.
Since the Google Spain judgment of the Court of Justice of the European Union, Europeans have, under certain conditions, the right to have search results for their name delisted. This paper examines how the Google Spain judgment has been applied in the Netherlands. Since the Google Spain judgment, Dutch courts have decided on two cases regarding delisting requests. In both cases, the Dutch courts considered freedom of expression aspects of delisting more thoroughly than the Court of Justice. However, the effect of the Google Spain judgment on freedom of expression is difficult to assess, as search engine operators decide about most delisting requests without disclosing much about their decisions.
Forthcoming as a conference paper for the Amsterdam Privacy Conference 23-26 October 2015.
Online shops can offer each website customer a different price – a practice called first degree price discrimination, or personalised pricing. An online shop can recognise a customer, for instance through a cookie, and categorise the customer as a rich or a poor person. The shop could, for instance, charge rich people higher prices. From an economic perspective, there are good arguments in favour of price discrimination. But many regard price discrimination as unfair or manipulative. This paper examines whether European data protection law applies to personalised pricing. Data protection law applies if personal data are processed. This paper argues that personalised pricing generally entails the processing of personal data. Therefore, data protection law generally applies to personalised pricing. That conclusion has several implications. For instance, data protection law requires a company to inform people about the purpose of processing their personal data. A company must inform customers if it personalises prices.
Chapter in: Handboek Consumentenrecht: Een overzicht van de rechtspositie van de consument, prof. mr. E.H. Hondius, mr. G.J. Rijken (eds.), 2015.
In the YS. and M. and S. judgment, the Court of Justice of the European Union ruled on three procedures in which Dutch judges asked for clarification on the right of asylum seekers to have access to the documents regarding the decision on asylum applications. The judgment is relevant for interpreting the concept of personal data and the scope of the right of access under the Data Protection Directive, and the right to good administration in the EU Charter of Fundamental Rights. At first glance, the judgment seems disappointing from the viewpoint of individual rights. Nevertheless, in our view the judgment provides sufficient grounds for effective access rights to the minutes in future asylum cases.
The European Union Charter of Fundamental Rights only allows personal data processing if a data controller has a legal basis for the processing.
This paper argues that, in most circumstances, the only available legal basis for the processing of personal data for behavioural targeting is the data subject's unambiguous consent.
Furthermore, the paper argues that the cookie consent requirement from the e-Privacy Directive does not provide a legal basis for the processing of personal data.
Therefore, even if companies could use an opt-out system to comply with the e-Privacy Directive's consent requirement for using a tracking cookie, they would generally have to obtain the data subject's unambiguous consent if they process personal data for behavioural targeting.
Alphen aan den Rijn, Kluwer Law International, 2015, 432 pp.
This book provides you with a highly readable overview of the policy issues underlying behavioural targeting, and explains how the law could improve on privacy protection.
This contribution discusses the Google Spain judgment of the Court of Justice of the European Union, as well as the developments after the judgment. The central question concerns the judgment’s consequences for freedom of expression. The authors argue that the Court pays insufficient attention to freedom of expression.
In theory, the protection of personal data seems in order: internet companies must inform people about what happens with their data, and generally ask their consent before using those data. But in practice, such ‘informed consent’ falls short as a privacy protection measure. According to researcher Frederik Borgesius, protecting privacy better requires privacy legislation to be better complied with and enforced, and to be overhauled. He argues for a broader privacy debate: “We have to enter that minefield.”
Current privacy rules place much emphasis on the informed consent of internet users. With such consent rules, the law tries to enable people to make choices in their own interest. But insights from behavioural studies cast doubt on the effectiveness of this legislative tactic. In practice, for instance, internet users click ‘OK’ to almost every consent request that appears on their screen. The law should pay more attention to actually protecting the privacy of people who go online.
Frederik Borgesius appeared as a guest on the programme Kassa, discussing websites and apps for children. See also the article in De Correspondent by Dimitri Tokmetzis, ‘Dit zijn de virtuele stalkers van uw kind’ (‘These are your child’s virtual stalkers’), featuring, among others, Ot van Daalen.
In: New Forms of Commercial Communications in a Converged Audiovisual Sector, IRIS Special, p. 67-76.
Also available in German and French.
Short summary of PhD thesis in English and Dutch.
In this note we discuss the controversial judgment in Google Spain v. González of the Court of Justice of the European Union (CJEU). Our focus is on the judgment’s implications for freedom of expression. First, the facts of the case and the CJEU’s judgment are summarised. We then argue that the CJEU did not give enough attention to the right to freedom of expression. By seeing a search engine operator as a controller regarding the processing of personal data on third party web pages, the CJEU assigns the operator the delicate task of balancing the fundamental rights at stake. However, such an operator may not be the most appropriate party to balance the rights of all involved parties, in particular in cases where such a balance is hard to strike. Furthermore, it is a departure from human rights doctrine that according to the CJEU privacy and data protection rights override, “as a rule”, the public’s right to receive information. In addition, after the judgment it has become unclear whether search engine operators have a legal basis for indexing websites that contain special categories of data. We also discuss steps taken by Google to comply with the judgment.
Draft chapter for the book 'Nudging and the Law - What can EU Law learn from Behavioural Sciences?', editors A-L. Sibony & A. Alemanno, Hart Publishing.
This chapter examines the policy implications of behavioural sciences insights for the regulation of privacy on the Internet, by focusing in particular on behavioural targeting. This marketing technique involves tracking people’s online behaviour to use the collected information to show people individually targeted advertisements. Enforcing data protection law may not be enough to protect privacy in this area. I argue that, if society is better off when certain behavioural targeting practices do not happen, policymakers should consider banning them.
Judgment of the month: CJEU, Y.S. and M. and S. v Minister voor Immigratie, Integratie en Asiel, C-141/12 and C-372/12.
Conference (draft) paper for Privacy Law Scholars Conference (PLSC), 6-7 June 2013, Berkeley, United States
Article also published in European Intellectual Property Review, 2012-11, p. 54-58.
Sabam, a Belgian collective rights management organisation, wanted an internet access provider and a social network site to install a filter system to enforce copyrights. In two recent judgments, the Court of Justice of the European Union decided that the social network site and the internet access provider cannot be required to install the filter system that Sabam asked for. Are these judgments good news for fundamental rights? This article argues that little is won for privacy and freedom of information.
Wat zijn onze gegevens waard? (‘What is our data worth?’), NOS Radio Journaal, 24 May 2013. http://nos.nl/op3/audio/510479-wat-zijn-onze-gegevens-waard.html Pingegevens: wat zijn ze waard? (‘Debit card data: what is it worth?’), NOS Televisie Journaal, 24 May 2013. http://nos.nl/op3/video/510638-pingegevens-wat-zijn-ze-waard.html
Position paper for the W3C Do Not Track Workshop, November 2012. http://www.w3.org/2012/dnt-ws/agenda.html
Providers of online email services such as Gmail, Hotmail and Yahoo! offer ever larger storage capacity to their subscribers, which increases these providers’ de facto control over the communications stored with them. The hunger for information of the providers of such online communication services, which often depend on advertising revenue, is amply fed by consumers, who eagerly use the usually free and almost unlimited storage capacity that enables them to access all their communications from any location. Now that the generation of subscribers who communicate via email is gradually growing older, the question arises what will happen to the email communications stored with the provider after the subscriber’s death. The central question of this article is therefore: to what extent is the privacy interest of the subscriber to an online email service, and of their communication partners, protected, and how can they protect this interest, when the subscriber dies? Our survey of the relevant legislation, case law and literature shows that the protection of the privacy interests of the deceased subscriber and their communication partners is unclear and even deficient.