Publications
Top Keywords
- Annotaties (57)
- Art. 10 EVRM (25)
- Auteursrecht (518)
- Bescherming van communicatie (20)
- Consumentenrecht (22)
- Content moderation (22)
- Databankenrecht (25)
- Digital Services Act (DSA) (31)
- EU (19)
- EU law (25)
- Freedom of expression (49)
- Fundamental rights (18)
- GDPR (22)
- Grondrechten (421)
- Industriële eigendom (38)
- Informatierecht (37)
- Intellectual property (30)
- Intellectuele eigendom (425)
- Internet (24)
- Journalistiek (33)
- Kluwer Information Law Series (43)
- Kronieken (18)
- Media law (29)
- Mediarecht (378)
- Mensenrechten (18)
- Merkenrecht (21)
- Naburige rechten (20)
- Omroeprecht (28)
- Online platforms (20)
- Overheidsinformatie (53)
- Personal data (35)
- Persrecht (41)
- Platforms (24)
- Privacy (327)
- Regulering (20)
- Technologie en recht (75)
- Telecommunicatierecht (127)
- Text and Data Mining (TDM) (21)
- Transparency (19)
- Vrijheid van meningsuiting (198)
Annotation of Rb. Den Haag 19 November 2018 (TVS / Revo)
Annotation of Hoge Raad 8 June 2018 (Pearson)
Disclosure of tariffs and licensing conditions of collective management organisations (CMOs)
Abstract
Under the Dutch Supervision Act (Wet toezicht), collective management organisations are obliged to actively disclose their 'standard licensing agreements and normally applicable tariffs'. In the Netherlands there is uncertainty, and even disagreement, about the meaning and scope of this obligation, but as yet no case law. Drawing in part on an analysis of the legal bases of this obligation, this article attempts to arrive at an interpretation of the disclosure obligation that is both correct and workable in practice.
Copyright, collective management organisations, licences, disclosure, tariffs
Taste, copyright and the delimitation of the protected object: on perception and identifiability
Abstract
The final word has been spoken: taste is not protected by copyright. This follows directly from the Levola/Smilde judgment, in which the Court of Justice of the European Union (CJEU) holds that the Copyright Directive (Directive 2001/29) 'precludes the taste of a food product from being protected by copyright under that directive, and precludes national legislation from being interpreted in such a way that it grants copyright protection to such a taste'. The reason for excluding taste from copyright protection is that taste cannot be identified with sufficient precision and objectivity to be an object of protection. By way of explanation, the CJEU points to the subjectivity and variability of taste perception and to the lack of technical means for identifying taste. As this contribution sets out, that reasoning is unconvincing and even unsound. The identifiability requirement itself is nevertheless plausible and worth following, because it aims to create greater legal certainty about the object of copyright protection.
Copyright, protection, object, taste
Bastei Lübbe: “Fundamental Rights as a defence to circumvent enforcement of Copyright protection? No!”, says CJEU.
Annotation of ECtHR 28 August 2018 (Savva Terentyyev / Russia)
Does everyone have a price? Understanding people’s attitude towards online and offline price discrimination
Abstract
Online stores can present a different price to each customer. Such algorithmic personalised pricing can lead to advanced forms of price discrimination based on the characteristics and behaviour of individual consumers. We conducted two consumer surveys among a representative sample of the Dutch population (N=1233 and N=1202), to analyse consumer attitudes towards a list of examples of price discrimination and dynamic pricing. A vast majority finds online price discrimination unfair and unacceptable, and thinks it should be banned. However, some pricing strategies that have been used by companies for decades are almost equally unpopular. We analyse the results to better understand why people dislike many types of price discrimination.
algorithms, Consumer law, Price discrimination
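The pricing mechanism described in the abstract above can be illustrated with a minimal sketch. The signals and the adjustment rule below are invented for illustration; they are not the pricing logic of any real store or of the study itself.

```python
# Minimal sketch of algorithmic personalised pricing: the same product
# gets a different price depending on observed customer signals.
# All signals and multipliers are hypothetical, for illustration only.

BASE_PRICE = 50.0

def personalised_price(customer: dict) -> float:
    """Return a price adjusted by simple behavioural signals."""
    price = BASE_PRICE
    if customer.get("device") == "high_end_phone":
        price *= 1.10   # assumed higher willingness to pay
    if customer.get("visits_to_product_page", 0) > 3:
        price *= 1.05   # repeated interest -> small markup
    if customer.get("used_price_comparison_site"):
        price *= 0.95   # price-sensitive -> small discount
    return round(price, 2)

alice = {"device": "high_end_phone", "visits_to_product_page": 5}
bob = {"used_price_comparison_site": True}

print(personalised_price(alice))  # 57.75
print(personalised_price(bob))    # 47.5
```

Even this toy rule shows why the practice is hard for consumers to detect: neither Alice nor Bob ever sees the other's price, only their own.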
The Netherlands in ‘Automating Society – Taking Stock of Automated Decision-Making in the EU’
Abstract
Systems for automated decision-making or decision support (ADM) are on the rise in EU countries: Profiling job applicants based on their personal emails in Finland, allocating treatment for patients in the public health system in Italy, sorting the unemployed in Poland, automatically identifying children vulnerable to neglect in Denmark, detecting welfare fraud in the Netherlands, credit scoring systems in many EU countries – the range of applications has broadened to almost all aspects of daily life.
This raises many questions: Do we need new laws? Do we need new oversight institutions? Who do we fund to develop answers to the challenges ahead? Where should we invest? How do we enable citizens – patients, employees, consumers – to deal with this?
For the report “Automating Society – Taking Stock of Automated Decision-Making in the EU”, our experts looked at the situation at the EU level and in 12 Member States: Belgium, Denmark, Finland, France, Germany, Italy, the Netherlands, Poland, Slovenia, Spain, Sweden and the UK. We assessed the political discussions and initiatives in these countries, and we also present a section, “ADM in Action”, for each state, listing examples of automated decision-making already in use.
This is the first time a comprehensive study has been done on the state of automated decision-making in Europe.
algorithms, Artificial intelligence, EU, NGO
Discrimination, artificial intelligence, and algorithmic decision-making
Abstract
This report, written for the Anti-discrimination department of the Council of Europe, concerns discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI). AI advances important goals, such as efficiency, health and economic growth, but it can also have discriminatory effects, for instance when AI systems learn from biased human decisions. In the public and the private sector, organisations can take AI-driven decisions with far-reaching effects for people. Public sector bodies can use AI for predictive policing, for example, or for making decisions on eligibility for pension payments, housing assistance or unemployment benefits. In the private sector, AI can be used to select job applicants, and banks can use AI to decide whether to grant individual consumers credit and set interest rates for them. Moreover, many small decisions, taken together, can have large effects. By way of illustration, AI-driven price discrimination could lead to certain groups in society consistently paying more.
The most relevant legal tools to mitigate the risks of AI-driven discrimination are non-discrimination law and data protection law. If effectively enforced, both these legal tools could help to fight illegal discrimination. Council of Europe member States, human rights monitoring bodies, such as the European Commission against Racism and Intolerance, and Equality Bodies should aim for better enforcement of current non-discrimination norms. But AI also opens the way for new types of unfair differentiation (some might say discrimination) that escape current laws. Most non-discrimination statutes apply only to discrimination on the basis of protected characteristics, such as skin colour. Such statutes do not apply if an AI system invents new classes, which do not correlate with protected characteristics, to differentiate between people. Such differentiation could still be unfair, however, for instance when it reinforces social inequality.
We probably need additional regulation to protect fairness and human rights in the area of AI. But regulating AI in general is not the right approach, as the use of AI systems is too varied for one set of rules. In different sectors, different values are at stake, and different problems arise. Therefore, sector-specific rules should be considered. More research and debate are needed.
Artificial intelligence, discrimination, Human rights
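The mechanism the abstract describes, where a system that never sees a protected characteristic still disadvantages a group through a correlated feature, can be sketched minimally. The data, the postcode proxy and the scoring rule below are all invented for illustration; they are not taken from the report.

```python
# Minimal sketch of proxy discrimination: a scoring rule that never
# receives the protected attribute can still disadvantage a group
# when it relies on a correlated feature (here a hypothetical postcode).

applicants = [
    # (protected_group, postcode, income) -- invented data
    ("A", "1011", 40), ("A", "1011", 45), ("A", "9700", 38),
    ("B", "9700", 42), ("B", "9700", 41), ("B", "1011", 44),
]

def credit_score(postcode: str, income: int) -> float:
    """The protected group is never an input, only postcode and income."""
    area_bonus = 10 if postcode == "1011" else 0  # proxy correlated with group A
    return income + area_bonus

def mean_score(group: str) -> float:
    scores = [credit_score(p, i) for g, p, i in applicants if g == group]
    return sum(scores) / len(scores)

# Group A ends up with a higher average score despite similar incomes,
# because the postcode bonus tracks group membership.
print(mean_score("A"))
print(mean_score("B"))
```

Because the rule formally differentiates on postcode rather than on a protected characteristic, it can fall outside statutes that only cover listed grounds, which is the regulatory gap the report identifies.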