Comparing the Right to an Explanation of Judicial AI by Function: Studies on the EU, Brazil, and China

Metikoš, L., Iglesias Keller, C., Qiao, C. & Helberger, N.
2025, 31 pp.

Abstract

Courts across the world are increasingly adopting AI to automate various tasks. But the opacity of judicial AI systems can hinder the ability of litigants to contest vital pieces of evidence and legal observations. One proposed remedy for the inscrutability of judicial AI is the right to an explanation. This paper analyses the scope and contents of a right to an explanation of judicial AI in the EU, Brazil, and China. We argue that such a right needs to take into account that judicial AI can perform widely different functions, and we provide a classification of these functions, ranging from ancillary to impactful tasks. We then compare, by function, how judicial AI would need to be explained under due process standards, data protection law, and AI regulation in the EU, Brazil, and China. We find that due process standards provide a broad normative basis for a derived right to an explanation, but they do not sufficiently clarify the scope and content of such a right. Data protection law and AI regulation contain more explicitly formulated rights to an explanation that also apply to certain judicial AI systems. Nevertheless, they often exclude impactful functions of judicial AI from their scope, and they offer little guidance on what explainability substantively entails. Ultimately, this patchwork of legal frameworks suggests that the protection of litigant contestation is still incomplete.

Artificial intelligence, digital justice, right to an explanation

Procedural Justice and Judicial AI: Substantiating Explainability Rights with the Values of Contestation

Metikoš, L. & Domselaar, I. van
2025

Abstract

The advent of opaque assistive AI in courtrooms has raised concerns about the contestability of these systems and their impact on procedural justice. The right to an explanation under the GDPR and the AI Act could address the inscrutability of judicial AI for litigants. To substantiate this right in the domain of justice, we examine utilitarian, rights-based (including dignitarian and Dworkinian approaches), and relational theories of procedural justice. These theories reveal diverse perspectives on contestation that can help shape explainability rights in the context of judicial AI, each highlighting a different value of litigant contestation: it has instrumental value in error correction, and intrinsic value in respecting litigants' dignity, whether as rational autonomous agents or as socio-relational beings. These insights help us answer three central and practical questions about how the right to an explanation should be operationalized to enable litigant contestation: should explanations be general or specific; to what extent must explanations be faithful to the system's actual behavior, rather than merely offering a plausible approximation; and should more interpretable systems be used, even at the cost of accuracy? These questions are not strictly legal or technical in nature, but also rely on normative considerations. The practical operationalization of explainability will therefore differ across different valuations of litigant contestation of judicial AI.

Artificial intelligence, digital justice, transparency
