Comparing the Right to an Explanation of Judicial AI by Function: Studies on the EU, Brazil, and China
Abstract
Courts across the world are increasingly adopting AI to automate various tasks. But the opacity of judicial AI systems can hinder the ability of litigants to contest vital pieces of evidence and legal observations. One proposed remedy for the inscrutability of judicial AI is the right to an explanation. This paper analyses the scope and contents of a right to an explanation of judicial AI in the EU, Brazil, and China. We argue that such a right needs to take into account that judicial AI can perform widely different functions. We provide a classification of these functions, ranging from ancillary to impactful tasks. We then compare, by function, how judicial AI would need to be explained under due process standards, data protection law, and AI regulation in the EU, Brazil, and China. We find that due process standards provide a broad normative basis for a derived right to an explanation, but do not sufficiently clarify the scope and content of such a right. Data protection law and AI regulations contain more explicitly formulated rights to an explanation that also apply to certain judicial AI systems. Nevertheless, they often exclude impactful functions of judicial AI from their scope, and they offer little guidance as to what explainability substantively entails. Ultimately, this patchwork of legal frameworks suggests that the protection of litigant contestation remains incomplete.
Keywords
Artificial intelligence, digital justice, right to an explanation