Respect for personal autonomy in AI regulatory framework

Authors

  • Petro Sukhorolskyi Lviv Polytechnic National University, Ukraine

DOI:

https://doi.org/10.21564/2414-990X.168.322056

Keywords:

personal autonomy, artificial intelligence, human rights, personal data protection, negative and positive freedom

Abstract

The legal regulation of artificial intelligence is one of the most pressing and debated topics at the national and international levels. The rapid development of artificial intelligence can significantly change existing reality and creates fundamentally new challenges for lawmaking and law enforcement, in particular in the field of human rights. The main purpose of the article is to determine whether the new European legal instruments on artificial intelligence (in particular, the European Union’s AI Act and the Council of Europe’s Framework Convention on AI) reflect these technological threats and protect the personal autonomy of individuals. To achieve this goal, the article reveals the essence of personal autonomy and its significance for human rights and the legal system, and identifies the directions of the real and potential impact of artificial intelligence on personal autonomy. The theoretical and methodological foundation of the study is Joseph Raz’s theory of personal autonomy, which makes it possible to identify the main problems and contradictions in the use of artificial intelligence and to formulate proposals for responding to current threats. Based on the idea of the fundamental role of personal autonomy, the article shows how the introduction of artificial intelligence, driven by the interests of specific actors, negatively affects the position, rights and capacities of individuals. In particular, the author identifies three directions of such influence: high-tech manipulation of people, distortion of their perception through myths and misconceptions, and the shaping of online architecture and social norms. Based on the analysis of legal documents, two approaches to the regulation of artificial intelligence are identified. The first approach relegates personal autonomy to the periphery and suggests that problems should be solved through cooperation between government and business using risk assessment tools; this should result in ready-made solutions that are offered to people. The second, human-centred approach emphasises the protection of personal autonomy. However, detailed norms within this approach have not yet been created, and their development requires further theoretical elaboration. In this regard, the primary focus should be on preserving and improving the conditions of autonomy that are threatened by the misuse of artificial intelligence.

References

Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence. (2024, June). Retrieved from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. (2024). Retrieved from https://rm.coe.int/1680afae3c

Raz, J. (1986). The morality of freedom. Oxford: Clarendon.

Dworkin, G. (1988). The theory and practice of autonomy. Cambridge University Press.

Bernal, P. (2014). Internet privacy rights: Rights to protect autonomy. Cambridge University Press.

Mill, J. S. (2003). On Liberty. Yale University Press.

Berlin, I. (1969). Four essays on liberty. Oxford University Press.

Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1-16.

Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005.

Shaelou, S. L., & Razmetaeva, Y. (2023). Challenges to fundamental human rights in the age of artificial intelligence systems: Shaping the digital legal order while upholding rule of law principles and European values. ERA Forum, 24(4), 567-587.

Prunkl, C. (2024). Human autonomy at risk? An analysis of the challenges from AI. Minds and Machines, 34(3), 26.

Mik, E. (2016). The erosion of autonomy in online consumer transactions. Law, Innovation and Technology, 8(1), 1-38.

Solsman, J. E. (2018). YouTube’s AI is the puppet master over most of what you watch. CNET. Retrieved from https://www.cnet.com/news/youtube-ces-2018-neal-mohan/

Sukhorolskyi, P. (2021). Right to be forgotten and freedom of expression: Peculiarities of balancing. In Right to be Forgotten (pp. 146-162). Kharkiv: ECUS.

Google v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González, 13 May 2014, European Court of Justice, Case C-131/12.

Melkevik, Å., & Melkevik, B. (2021). Two concepts of dignity: On the decay of agency in law. In The Inherence of Human Dignity. Vol. 1. Foundations of Human Dignity (pp. 133-147). London: Anthem Press.

Orwell, G. (1949). Nineteen Eighty-Four. London: Secker & Warburg.

Huxley, A. (1932). Brave New World. London: Chatto & Windus.

Carroll, D. R. (2021). Cambridge Analytica. In Research Handbook on Political Propaganda (pp. 41-50). Edward Elgar Publishing.

Goury-Laffont, V. (2024, December 21). Report ties Romanian liberals to TikTok campaign that fueled pro-Russia candidate. Politico. Retrieved from https://www.politico.eu/article/investigation-ties-romanian-liberals-tiktok-campaign-pro-russia-candidate-calin-georgescu/

Waldo, J., & Boussard, S. (2025). GPTs and Hallucination. Communications of the ACM, 68(1), 40-45.

Grimm, C. (2021). The danger of anthropomorphic language in robotic AI systems. The Brookings Institution. Retrieved from https://www.brookings.edu/articles/the-danger-of-anthropomorphic-language-in-robotic-ai-systems/

Glickman, M., & Sharot, T. (2024). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 1-15.

Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.

Mantelero, A. (2018). AI and Big Data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4), 754-772.

Chakravorti, B. (2024, January). What if regulation makes the AI monopoly worse? Foreign Policy. Retrieved from https://foreignpolicy.com/2024/01/25/ai-regulation-monopoly-chatgpt/

Shapira, P. (2024). Delving into “delve”. Retrieved from https://pshapira.net/2024/03/31/delving-into-delve/

Sukhorolskyi, P. (2024). Fundamental values of data protection law: Autonomy vs the Megamachine. In European Fundamental Values in the Digital Era. Kharkiv: Pravo.

High-level expert group on artificial intelligence. (2019). Ethics guidelines for trustworthy AI. European Commission.

Dilhac, M. A., Abrassart, C., & Voarino, N. (2018). Report of the Montréal Declaration for a responsible development of artificial intelligence.

OECD. (2019, May). Recommendation of the Council on Artificial Intelligence. Retrieved from https://one.oecd.org/document/C/MIN(2019)3/FINAL/en/pdf

Uuk, R. (2022, January). Manipulation and the AI Act. The Future of Life Institute.

Koffeman, N. R. (2010). (The right to) personal autonomy in the case law of the European Court of Human Rights. Leiden: Leiden University. Retrieved from https://hdl.handle.net/1887/15890

Published

2025-04-11

How to Cite

Sukhorolskyi, P. (2025). Respect for personal autonomy in AI regulatory framework. Problems of Legality, (168), 6–25. https://doi.org/10.21564/2414-990X.168.322056
