Judicial Decision-Making and Explainable AI (XAI) – Insights from the Japanese Judicial System

Yachiko Yamada

Abstract


Recent developments in artificial intelligence (AI) within information technology (IT) have been remarkable and have prompted claims that AI could be used in courts to replace judges. In this article, the author addresses this cluster of issues through the concept of explainable AI (XAI). The article examines how regulation can ensure that AI is ethical and how this ethicality is closely related to XAI. It concludes that, in the current context, the contribution of AI to the judicial decision-making process is limited by AI's lack of sufficient explainability and interpretability, even though these aspects are being widely addressed and discussed. In addition, it is crucial to consider the impact of AI's contribution on the legal authority that forms the foundation of the justice system; as a possible way forward, the article suggests conducting an experimental study in the form of AI arbitration.


Keywords


explainable AI; XAI; artificial intelligence; courts; judges; decision-making process; judicial decision-making


References


LITERATURE

Darbyshire P., English Legal System, London 2020.

Deeks A., The Judicial Demand for Explainable Artificial Intelligence, “Columbia Law Review” 2019, vol. 119(7).

Kureha M., Kukita M., AI and Science Research (AI to Kagakukenkyuu), [in:] Artificial Intelligence and Human Society (Jinko Chino to Ningen Syakai), eds. S. Inaba et al., Tokyo 2020.

Nishimura T., The Possibility of a Vending Machine for Judgment (Hanketsu Jidouhanbaiki no Kanousei), [in:] Law and Society Changed by AI (AI de Kawaru Hou to Syakai), ed. M. Usami, Tokyo 2020.

Oda H., Japanese Law, Oxford 2009, DOI: https://doi.org/10.1093/acprof:oso/9780199232185.001.1.

Ohtsubo N. et al., XAI: What Was the Artificial Intelligence Thinking at That Moment? (XAI – Sonotoki Jinkouchinou ha Dou Kangaetanoka), Tokyo 2021.

Rothman D., Hands-On Explainable AI (XAI) with Python: Interpret, Visualize, Explain, and Integrate Reliable AI for Fair, Secure, and Trustworthy AI Apps, Birmingham 2023.

Sato I., The Dialogue between Technology and Law (Tekunologii to Hou no Taiwa), [in:] AI, Society and Law – Will a Paradigm Shift Occur? (AI to Syakai to Hou – Paradaimushifuto ha Okiruka), eds. J. Shishido et al., Tokyo 2020.

Simon C., Deep Learning and XAI Techniques for Anomaly Detection: Integrate the Theory and Practice of Deep Anomaly Explainability, Birmingham 2023.

Ward J., Black Box Artificial Intelligence and the Rule of Law, “Law & Contemporary Problems” 2021, vol. 84(3).

Watanabe T., Technological Innovation and Humans – Acceptance of AI (Gijyutsu Kakushin to Ningen – AI no Juyou), [in:] Artificial Intelligence and Human Society (Jinko Chino to Ningen Syakai), eds. S. Inaba et al., Tokyo 2020.

ONLINE SOURCES

Courts in Japan, https://www.courts.go.jp/app/hanrei_jp/search1 (access: 10.11.2023). [in Japanese]

Independent High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, 8.4.2019, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (access: 10.11.2023).

Research Group on the Introduction of Information Technology in Judicial Procedures, Promoting the Use of IT in Judicial Proceedings (Future Investment Strategy 2018, Cabinet Decision of 15 June 2018), https://www.kantei.go.jp/jp/singi/keizaisaisei/saiban/index.html (access: 10.11.2023). [in Japanese]




DOI: http://dx.doi.org/10.17951/sil.2023.32.4.157-173
Date of publication: 2023-12-22 22:04:35
Date of submission: 2023-09-07 09:15:08




Copyright (c) 2023 Yachiko Yamada

This work is licensed under a Creative Commons Attribution 4.0 International License.