Artificial Intelligence for Judicial Decision-making: Some Potential Risks

Authors

  • Yulia Razmetaeva, Yaroslav Mudryi National Law University, Ukraine

DOI:

https://doi.org/10.21564/2414-990X.166.311749

Keywords:

artificial intelligence, principles of law, rule of law, justice, court decisions, fairness

Abstract

The article explores the implementation of artificial intelligence in judicial decision-making, emphasizing potential risks and challenges. It highlights the need to consider justice, fairness, and the rule of law when applying AI, and argues for reasonable and limited algorithmization. The article focuses on the difficulties of algorithmizing complex judicial processes, particularly the selection of legal principles and AI's potential negative impact on the individualized nature of justice. Among the risks emphasized are AI's tendency toward rationalization and standardization of decisions, its limited ability to interpret human characteristics and case circumstances, and the replacement of legal certainty with algorithmic predictability. The article also discusses the difficulties algorithms face in understanding and interpreting legal texts, noting that AI is incapable of thinking or of making moral judgments. Special attention is given to legal reasoning: the article argues that court decisions must be not only justified but also convincing to society, which AI cannot achieve because it is unable to comprehend discourse and case context. The article concludes that, despite technological advances, the complete replacement of human judgment with AI carries risks and may distort and devalue the very concept of justice.

References

Branting, L. K., et al. (2021). Scalable and explainable legal prediction. Artificial Intelligence and Law, 29, 213-238. https://doi.org/10.1007/s10506-020-09273-1.

Varona, D., Suárez, J. L. (2022). Discrimination, Bias, Fairness, and Trustworthy AI. Applied Sciences, 12(12), 5826. https://doi.org/10.3390/app12125826.

De Oliveira, L. F., et al. (2022). Path and future of artificial intelligence in the field of justice: a systematic literature review and a research agenda. SN Social Sciences, 2, 180. https://doi.org/10.1007/s43545-022-00482-w.

Wachter, S., Mittelstadt, B., Russell, C. (2021). Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law & Security Review, 41, 105567. https://doi.org/10.1016/j.clsr.2021.105567.

Cossette-Lefebvre, H., Maclure, J. (2023). AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making. AI and Ethics, 3, 1255-1269. https://doi.org/10.1007/s43681-022-00233-w.

Strümke, I., Slavkovik, M., Madai, V. I. (2022). The social dilemma in artificial intelligence development and why we have to solve it. AI and Ethics, 2, 655-665.

Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2), 889-938.

Schaeffer, R., Khona, M., Fiete, I. R. (2022). No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit. Advances in Neural Information Processing Systems, 35, 16052-16067. https://doi.org/10.1101/2022.08.07.503109.

Bowers, J. S., et al. (2023). On the importance of severely testing deep learning models of cognition. Cognitive Systems Research, 82, 101158. https://doi.org/10.1016/j.cogsys.2023.101158.

Adams, J. (2023). Defending explicability as a principle for the ethics of artificial intelligence in medicine. Medicine, Health Care and Philosophy, 26, 615-623. https://doi.org/10.1007/s11019-023-10175-7.

Vale, D., El-Sharif, A., Ali, M. (2022). Explainable artificial intelligence (XAI) post-hoc explainability methods: risks and limitations in non-discrimination law. AI and Ethics, 2, 815-826. https://doi.org/10.1007/s43681-022-00142-y.

Longo, L., Goebel, R., Lecue, F., Kieseberg, P., Holzinger, A. (2020). Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions. In Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (Eds.), Machine Learning and Knowledge Extraction. CD-MAKE 2020. Lecture Notes in Computer Science, 12279. Cham: Springer. https://doi.org/10.1007/978-3-030-57321-8_1.

Shvydka v. Ukraine (Application No. 17888/12), European Court of Human Rights, 30 October 2014.

Lenis v. Greece (Application No. 47833/20), European Court of Human Rights, 30 August 2023.

Merlhiot, G., Mermillod, M., Le Pennec, J.-L., Dutheil, F., Mondillon, L. (2018). Influence of uncertainty on framed decision-making with moral dilemma. PLoS ONE, 13(5), e0197923. https://doi.org/10.1371/journal.pone.0197923.

SCHUFA Holding and Others (Scoring), Case C-634/21, ECLI:EU:C:2023:220.

McKay, C. (2019). Predicting risk in criminal procedure: actuarial tools, algorithms, AI and judicial decision-making. Current Issues in Criminal Justice, 32(1), 22-39. https://doi.org/10.1080/10345329.2019.1658694.

Yassine, S., Esghir, S., Ibrihich, O. (2023). Using Artificial Intelligence Tools in the Judicial Domain and the Evaluation of their Impact on the Prediction of Judgments. Procedia Computer Science, 220, 1021-1026. https://doi.org/10.1016/j.procs.2023.03.142.

Zahir, J. (2023). Prediction of court decision from Arabic documents using deep learning. Expert Systems, 40(6), e13236. https://doi.org/10.1111/exsy.13236.

Reiling, A. D. (Dory) (2020). Courts and Artificial Intelligence. International Journal for Court Administration, 11(2), 8. https://doi.org/10.36745/ijca.343.

Published

2024-12-02

How to Cite

Razmetaeva, Y. (2024). Artificial Intelligence for Judicial Decision-making: Some Potential Risks. Problems of Legality, (166), 177–191. https://doi.org/10.21564/2414-990X.166.311749

Section

Articles