Procedural Justice and Judicial AI: Substantiating Explainability Rights with the Values of Contestation

Abstract

The advent of opaque assistive AI in courtrooms has raised concerns about the contestability of these systems and their impact on procedural justice. The right to an explanation under the GDPR and the AI Act could address the inscrutability of judicial AI for litigants. To substantiate this right in the domain of justice, we examine utilitarian, rights-based (including dignitarian and Dworkinian approaches), and relational theories of procedural justice. These theories reveal diverse perspectives on contestation, which can help shape explainability rights in the context of judicial AI. They respectively highlight different values of litigant contestation: it has instrumental value in error correction, and intrinsic value in respecting litigants’ dignity, either as rational autonomous agents or as socio-relational beings. These insights help us answer three central and practical questions about how the right to an explanation should be operationalized to enable litigant contestation: should explanations be general or specific; to what extent must explanations be faithful to the system’s internal behavior, or may they merely provide a plausible approximation; and should more interpretable systems be used, even at the cost of accuracy? These questions are not strictly legal or technical in nature, but also rely on normative considerations. Finally, this paper evaluates which theory of procedural justice could best safeguard contestation in the age of judicial AI. Thereto, it provides the first building blocks of an AI-responsive theory of procedural justice.
