Can we design news recommenders to nudge users towards diverse consumption of topics and perspectives? The growing role of news recommenders raises the question of how news diversity can be safeguarded in a digital news landscape. Many existing studies look at either the supply diversity of recommendations or the effects of (decreased) exposure diversity on, for example, polarization and filter bubbles. Research on how users choose from the available supply is lacking, making it difficult to understand the relation between algorithm design and possible adverse effects on citizens.
We directly address the question of how news recommender algorithms can be designed to optimize exposure diversity. This innovative and interdisciplinary project builds on our extensive expertise on news diversity, news consumption behavior, text analysis and recommender design by providing: (WP1) a normative framework for assessing diversity; (WP2) a language model to automatically measure diversity; (WP3) a model of news consumption choices based on supply, presentation, and individual characteristics; and (WP4) a concrete prototype implementation of a recommender algorithm that optimizes exposure diversity, which will be externally validated in a unique field experiment with our media partners.
The project will bridge the gap between differing understandings of news diversity in computer science, communication science, and media law. This will increase our understanding of contemporary news behavior, yield new language models for identifying topics and perspectives, and offer concrete considerations for designing recommenders that optimize exposure diversity. Together with media companies and regulators we turn these scientific insights into concrete recommendations.
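For readers unfamiliar with the notion of exposure diversity, a common and simple operationalisation (an illustration only, not the project's actual measure) is the normalised Shannon entropy of the topics a user actually reads:

```python
import math
from collections import Counter

def exposure_diversity(read_topics):
    """Shannon entropy of the topic distribution in a reading history,
    normalised to [0, 1] (0 = one topic only, 1 = perfectly even mix).
    `read_topics` is a list of topic labels, one per article read."""
    counts = Counter(read_topics)
    if len(counts) <= 1:
        return 0.0
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts))

# A reader of only politics scores 0; an even four-topic mix scores 1.
print(exposure_diversity(["politics"] * 10))                          # 0.0
print(exposure_diversity(["politics", "sport", "culture", "tech"]))   # 1.0
```

Such a proxy assumes articles are already tagged with topics; measuring topics and perspectives automatically from text is precisely what WP2 sets out to do.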
Political microtargeting, and the conditions under which the use of AI and data analytics can contribute to, or threaten, digital democracy, are questions of central academic, societal and political importance. Digital democracy faces three central challenges: the lack of (1) a theoretical assessment framework, (2) new methods to generate empirical insights into existing practices and their effects on users and society, and (3) concrete suggestions for moving the discussion from its current, very abstract level towards concrete regulatory and policy solutions.
Our research will tackle all three challenges through a unique synthesis of four sets of expertise (political philosophy, law, political communication and computer science) and a mix of sophisticated research methods (including experiments, a survey, digital tools and data analytics) to study political microtargeting on Facebook as a case study. Using democratic theory, we will produce a concrete set of legitimacy conditions that need to be fulfilled for political microtargeting to contribute to, rather than harm, digital democracy.
The insights from the normative analysis will be complemented and tested by empirical research into actual practices and effects on voters. Using innovative digital tools, we will unlock the ‘black box’ and make the extent and dynamics of political microtargeting visible. Together, these insights will feed into the legal analysis to develop much-needed legal and policy solutions for governing political microtargeting.
The Political Microtargeting Project is a collaboration between the Amsterdam School of Communication Research, the Institute for Information Law and the Amsterdam School for Cultural Analysis. The research will be carried out in close cooperation with societal stakeholders (including the Ministries for Justice and BZK) and non-academic partners, such as AlgorithmWatch and the UK NGO ‘Who Targets Me‘.
The cultural and creative sectors and industries (CCSI) are essential for the well-being and competitiveness of Europeans, but because of the way they are structured, they are often invisible in statistics. Our data observatory aims to be a permanent and reliable data source for socio-economic and socio-legal research and evidence-based policy making.
Because most European cultural businesses fall into the microenterprise category, their simplified tax returns, financial statements, and statistical reporting provide too little hard data about the way they work, create value, innovate, or apply the law.
While official data about the CCSI is hard to find, legally open digital data is abundant but scattered. Our data observatory aims to be a permanent point of registration and collection for existing big data sources. We access governmental and scientific data sources that are legally open for re-use but whose data needs to be reprocessed for new scientific, policy or business uses. We also aim to achieve synergies by pooling research capacities and data. We particularly focus on economic data, intellectual property data, and metadata.
Data observatories are permanent registration points for information. The automated data observatory concept of IViR and Reprex, an innovative Dutch start-up, is inspired by open-source scientific software and open data design. It automates data collection, documentation, metadata creation, and data integration procedures, and aims to achieve better data quality and quicker ecosystem growth than existing EU, OECD, or UNESCO observatories.
We follow the concept of open collaboration, which allows cooperation with IViR at the level of research institutes and universities as well as cultural organizations, microenterprises and even citizen scientists. We encourage individual researchers and organizations to help curate and validate our data, use our data, and bring in more data for a fuller picture.
IViR and Reprex strive to develop a model for transparent, reproducible socio-economic and socio-legal research whose collaborative governance encourages data sharing and reuse. The CCSI Data Observatory is based on the Digital Music Observatory prototype, which was validated in the Yes!Delft AI+Blockchain Validation Lab and the JUMP European Music Market Accelerator; our aim is to provide a 21st-century path for observatories that fully comply with the best practices of open science and open policy analysis.
IViR and Reprex aim first to use the observatory within the RECREO consortium. Please contact Daniel Antal, research associate at d.antal [at] uva.nl for further information.
In the European Strategy for Data, the European Commission highlighted the EU’s ambition to acquire a leading role in the data economy. At the same time, the Commission conceded that the EU would have to increase its pools of quality data available for use and re-use. As this need for enhanced data quality and interoperability is particularly strong in the creative industries, this project examines avenues for improving EU copyright data. Without better data, unprecedented opportunities for monetising the wide variety of creative content in EU Member States, and for making this content available to new technologies such as artificial intelligence systems, will most probably be lost. The problem has a worldwide dimension: while the US has already taken steps to provide an integrated data space for music, the EU faces major obstacles not only in the field of music but also in other creative industry sectors.
The research project is a collaboration of, and jointly funded by, IViR, the Centre for Information and Innovation Law (University of Copenhagen), CREATe (University of Glasgow), Erasmus University Rotterdam and KU Leuven.
On Thursday, 6 May 2021, an online workshop took place, based on this discussion paper:
Copyright Data Improvement in the EU – Towards Better Visibility of European Content and Broader Licensing Opportunities in the Light of New Technologies
The workshop sessions were:
- Get together and welcome
- Copyright Data – Status Quo and Improvement Initiatives in the EU and the US
- Need for Improvement – Content Recommender Systems
- Need for Improvement – AI System Training
- Incentives for New Initiatives – Costs and Potential New Trade-offs in the Data Economy
Due to the unprecedented spread of illegal and harmful content online, EU law is changing. New rules enhance hosting platforms’ obligations to police content and censor speech, for which they increasingly rely on algorithms. João’s project examines the responsibility of platforms in this context from a fundamental rights perspective.
In light of the unprecedented spread of illegal and harmful content online, the EU and Member States have in recent years enacted legislation enhancing the responsibility of platforms and pushing them towards content moderation. These rules are problematic because they enlist private platforms to police content and censor speech without providing adequate fundamental rights safeguards. The problem is amplified because to cope with the massive amounts of content hosted, moderation increasingly relies on Artificial Intelligence (AI) systems.
In parallel, the EU is ramping up efforts to regulate the development and use of AI systems. However, at EU level, there is little policy or academic discussion on how the regulation of AI affects content moderation and vice-versa. This project focuses on this underexplored intersection, asking the question: How should we understand, theorize, and evaluate the responsibility of hosting platforms in EU law for algorithmic content moderation, while safeguarding freedom of expression and due process?
João’s project answers this question by combining doctrinal legal research, empirical methods, and normative evaluation. First, the research maps and assesses EU law and policies on platforms’ responsibility for algorithmic moderation of illegal content, including three sectoral case studies: terrorist content, hate speech, and copyright infringement. Second, the empirical research consists of qualitative content analysis of platforms’ Terms of Service and Community Guidelines. Finally, the project evaluates the responsibility implications of algorithmic moderation from a fundamental rights perspective and offers recommendations for adequate safeguards.
While modern technological developments have enabled the exponential growth of the ‘data society’, academia struggles to obtain research data from the companies managing our data infrastructures. Jef’s project will critically assess the scientific, legal and normative merits and challenges of using transparency rights as an innovative method for obtaining valuable research data.
With a number of upcoming EU legislative initiatives on transparency, this project could not come at a better time. Indeed, academia is suffering from a great paradox in the digital society. The so-called ‘datafication of everything’ creates unprecedented potential and urgency for independent academic inquiry. Yet academics are increasingly confronted with unwarranted obstacles in accessing the data required to pursue their role as a critical knowledge institution and public watchdog. An important reason behind this is the increasing centralisation and privatisation of the data infrastructures underlying society. The corporations managing these infrastructures have strong (legal, technical and economic) disincentives to share the vast amounts of data they control with academia, effectively reinforcing the ‘black box society’ (Pasquale, 2014).

As a result, there is an important power asymmetry over access to data and over who effectively defines research agendas. This trend has only worsened after recent ‘data scandals’, especially in the wake of Cambridge Analytica. Calls for more transparency from academia and civil society have been largely unsuccessful, and efforts to scrutinise data practices in other ways often run into a range of hurdles. At the same time, new and proposed policy documents on algorithmic transparency and open science/data remain abstract.
Against this backdrop, the project will explore how the law – i.e. disclosure/transparency obligations and access rights – can be used by the academic research community as a tool for breaking through these asymmetries. This may appear counter-intuitive, as academics are often confronted with legal arguments to prevent access to data, often for valid reasons such as privacy protection or intellectual property. Yet these arguments are often abused as blanket restrictions, preventing more balanced solutions. The project will unpack the many issues underlying this tension and evaluate how the information asymmetry between academia and big tech can be resolved in light of the European fundamental rights framework.