Designing algorithms against corruption: a conjoint study on communicative features to encourage intentions for collective action

Starke, C., Kieslich, K., Reichert, M. & Köbis, N.
Journal of Information Technology & Politics, 2025

Abstract

Algorithmic tools are increasingly used to automate corruption reporting on social media platforms. Based on the use case of an existing bot, this study investigates how to design a bot's communication to effectively and responsibly mobilize people for collective action against corruption. In a pre-registered choice-based conjoint survey (n = 1,331), we test six message design features: type of injustice, degree of injustice, anger, political partisanship, gender, and efficacy cues. Our results show that calling out cases of severe corruption increases people's intention to engage in collective action against corruption. We find no empirical support for in-group favoritism based on political affiliation and gender. Yet, some commonly used design features can have contrasting effects on different audiences. We call for more social science research accompanying the technical development of algorithmic tools to fight corruption.

Cybersecurity in the financial sector and the quantum-safe cryptography transition: in search of a precautionary approach in the EU Digital Operational Resilience Act framework

Jančiūtė, L.
International Cybersecurity Law Review, vol. 6, pp. 145-154, 2025

Abstract

An ever more digitalised financial sector is exposed to a growing number of cyberattacks. Given the criticality and interconnectedness of this sector, cyber threats here represent not only operational risks, but also systemic risks. In the long run, emerging cyber risks include developments in quantum computing that threaten the widely used encryption safeguarding digital networks. In the financial sector globally, initiatives are already underway to explore possible mitigating measures. This paper argues that the precautionary principle is relevant to an industry-wide transition to quantum-safe cryptography. In the EU, financial entities now have to comply with the Digital Operational Resilience Act, which strengthens ICT security requirements. This research traces the obligation to adopt quantum-resistant precautionary measures under its framework.

Cybersecurity, quantum technologies

The concept of “research organisation” and its implications for text and data mining and AI research

Abstract

The concept of a “research organisation” has significant implications across various domains of EU information law, including copyright, artificial intelligence (AI), and even platform regulation. Defined in the Copyright in the Digital Single Market Directive (CDSMD), this concept plays a crucial role in determining the legal obligations and rights of entities engaging in activities such as text and data mining (TDM) and AI research, or data access for research purposes. By analysing how this definition interacts with legislative frameworks like the CDSMD and the AI Act, this short contribution examines its critical role in the EU's digital regulation of research and highlights areas of legal uncertainty.

DeepSeek-paniek

Nederlands Juristenblad (NJB), iss. 6, num. 300, p. 401, 2025

Co-creating research at The AI, media, and democracy lab: Reflections on the role of academia in collaborations with media partners

Cools, H., Helberger, N. & Vreese, C.H. de
Journalism, 2025

Abstract

This commentary explores academia’s role in co-creating research with media partners, focusing on the distinct roles and challenges that each stakeholder brings to such partnerships. Starting from the perspective of the AI, Media, and Democracy Lab, and building on the Ethical, Legal, and Societal Aspects (ELSA) approach, we share key learnings from three years of collaborations with (media) partners. We conclude that navigating dual roles, expectations, output alignment, and a process of knowledge sharing are important requirements for academics and (media) partners to adequately co-create research and insights. We also argue that these key lessons do not always square with how academic research is organized and funded. We underscore that changes in funding structures, and in the way academic research is assessed, can further facilitate the co-creation of research between academia and projects in the media sector.

European Copyright Society Opinion on Copyright and Generative AI

Dusollier, S., Kretschmer, M., Margoni, T., Mezei, P., Quintais, J. & Rognstad, O.A.
Kluwer Copyright Blog, 2025

Copyright, Generative AI

Judicial Automation: Balancing Rights Protection and Capacity-Building

Qiao, C. & Metikoš, L.
2025

Abstract

This entry explores the global rise of judicial automation and its implications through two dominant frameworks: rights protection and capacity-building. The rights protection framework aims to safeguard individual rights against opaque judicial automation by advocating for the use of explainable and contestable AI tools in courts. In contrast, the capacity-building framework prioritises judicial efficiency and consistency by automating court proceedings. Although these frameworks offer contrasting approaches, they are not mutually exclusive. A balance needs to be struck, where judicial automation enhances judicial capacities while maintaining transparency and accountability.

individual rights, judicial automation, judicial capacity, right to explanation

Copyright and Generative AI: Opinion of the European Copyright Society

Dusollier, S., Kretschmer, M., Margoni, T., Mezei, P., Quintais, J. & Rognstad, O.A.
2025

Copyright

Research Workshop Report: “The (Evolving) Human Right to a Healthy Environment: What Impact on Intellectual Property Laws?”

Meyermans-Spelmans, E. & Izyumenko, E.
Human Rights Here, 2025

Human rights, Intellectual property

Trust and Safety in the Age of AI – the economics and practice of the platform-based discourse apparatus

Abstract

In recent years, social media services have emerged as key infrastructures for a plethora of societal conversations around politics, values, culture, science, and more. Through their Trust and Safety practices, they play a central role in shaping what their users may know, what they may believe in, and what kinds of values, truths and untruths, or opinions they are exposed to. The rapid emergence of various tools, such as AI and the like, has brought further complexities to how these societal conversations are conducted online. On the one hand, platforms have started to rely heavily on automated tools and algorithmic agents to identify various forms of speech, some of it flagged for further human review, the rest filtered automatically. On the other hand, cheap and ubiquitous access to generative AI systems also produces a flood of new speech on social media platforms. Content moderation and filtering, one of the largest ‘Trust and Safety’ activities, is, on the surface, the most visible and understandable activity that could protect users from all the harms stemming from ignorant or malicious actors in the online space. But, as we argue in this paper, content moderation is much more than that. Platforms, through their AI-human content moderation stack, are ordering key societal discourses. The Foucauldian understanding of society emphasizes that discourse is knowledge is power: we know what the discourse reveals to us, and we use this knowledge as power to produce the world around us and render it legible through discourse. This logic, alongside the radically shifting rules of information economics, which have reduced the cost of information to zero, challenges the old institutions, rules, procedures, discourses, and subsequent knowledge and power structures.
In this paper, we first explore the practical realities of content moderation based on an expert interview study with Trust and Safety professionals and a supporting document analysis of data published through the DSA Transparency Database. We reconstruct these empirical insights as an analytical model – a discourse apparatus stack – in the Foucauldian framework. This helps identify the real systemic challenges that content moderation faces but fails to address.

Artificial intelligence, automated filtering, Content moderation, Foucault, information economics, Platforms, trust
