Reference:
Atabekov, A.R. (2023). Ensuring Autonomy of Decision-Making by Artificial Intelligence for the Purposes of Legal Public Relations. Administrative and municipal law, 1, 29–36. https://doi.org/10.7256/2454-0595.2023.1.39893
Ensuring Autonomy of Decision-Making by Artificial Intelligence for the Purposes of Legal Public Relations.
DOI: 10.7256/2454-0595.2023.1.39893
EDN: GYFCBG
Received: 27-02-2023
Published: 06-03-2023

Abstract: This article presents a comparative analysis of existing approaches to determining the basic conditions for ensuring the autonomy of AI in the context of the public legal relations of Russia and foreign countries. The analysis covers the basic problems of transparency in AI decision-making in world practice, practical situations in which non-transparent AI has been integrated into public legal relations in foreign countries, and possible compensatory legal measures to ensure the safe integration of AI into public administration in Russia. The subject of the study is the formalization of the actions of artificial intelligence acting as a representative of a government body. The object of the research comprises the normative documents, recommendations and other instruments regulating the implementation of AI autonomy for the purposes of public legal relations in Russia and foreign countries, together with judicial practice and academic publications on the issues under study. The research methodology integrates a set of modern philosophical, general scientific and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutical, comparative legal and formal legal (dogmatic) methods. Special emphasis is placed on a comparative legal study of the phenomenon of the autonomy of AI performing public functions, drawing on the experience of various states. The measures proposed as a result of the study can be applied in the legislative and law enforcement practice of the authorities responsible for integrating artificial intelligence into the sphere of public relations in Russia.

Keywords: artificial intelligence, electronic person, comparative legal research of AI, machine learning, counterfactual analysis, AI safety, public law, administrative law, information law, law enforcement practice

The regulation of legal relations arising around the phenomenon of artificial intelligence, and the transparency of AI decision-making for the purposes of public legal relations, are strategically important both for Russia and for foreign countries. The positioning of AI within public legal relations plays a special role here, and the question remains systemic and complex both at the level of theoretical research and at the level of practical solutions concerning the clarity and transparency of the decisions AI makes. At the level of the German Ministry of Justice [1], the EU Commission [2] and individual US states [3], the need to resolve the "black box" problem is already being systematically considered. Among representatives of foreign and domestic scholarship, questions likewise arise systematically, both about the technical feasibility of opening this "box" [4] and about the fact that the illusion of AI accountability to humans raises the question of the terminological certainty of concepts such as "explainability", "comprehensibility" and "meaningfulness" [5-8]. In the technical and legal dimension of AI functioning for the purposes of public authority, the first task is to ensure the transparency of AI, because the data being processed have a dynamic structure, which manifests itself as the absence of a direct relationship between the input data and the "output" decisions (outcomes) [9].
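To make the "black box" problem concrete, the following minimal Python sketch (purely illustrative: synthetic data and hypothetical model choices, nothing taken from the article or from the systems it cites) trains two classifiers of the same kind on the same "case" data. Their aggregate accuracy is similar, yet a number of individual decisions differ, and neither model exposes a human-readable rule linking inputs to outcomes.

```python
# Illustrative only: two models trained on the same data can agree in the
# aggregate yet disagree on individual "cases", and neither exposes a
# human-readable rule linking inputs to "output" decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a body of administrative cases (hypothetical data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same algorithm, same training data -- only internal randomness differs.
model_a = RandomForestClassifier(random_state=1).fit(X_train, y_train)
model_b = RandomForestClassifier(random_state=2).fit(X_train, y_train)

print("accuracy A:", model_a.score(X_test, y_test))
print("accuracy B:", model_b.score(X_test, y_test))
# Cases that would have been decided differently had the agency deployed B:
diverging = model_a.predict(X_test) != model_b.predict(X_test)
print("divergent individual decisions:", int(diverging.sum()))
```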
The reason is that AI systems (especially those based on machine learning) use a large number of methods of analysis and goal attainment, with differing algorithmic principles and differing approaches to their interpretation [10]. Consider, for example, what AI data processing must cope with: the interpretation of a typewritten text, a logarithmic formula for assessing damage (supplied by one or another subordinate institution), or the evidence submitted by a participant in administrative proceedings, which may include oral explanations using non-standard turns of speech. In addition, the speed and constancy of data processing form a conditionally "living organism" with a permanently dynamic data structure that is potentially updated at every interaction with a user [11-12]. At the level of the doctrinal perception of AI for the purposes of public legal relations, it should also be noted that AI, and its basic component machine learning, works from a sample of cases previously handled by the relevant officials. The general codification of administrative law and procedure makes it possible, for the most part, to put the activities of the employees of the relevant ministries and departments on "algorithmic rails"; at the same time, every case displays both common features (inherent in the class of cases provided for by the codes) and distinctive features that form subsets of cases with generalizable properties. The generalization of administrative cases, as well as their personalization for AI purposes, generates a number of problems, such as the selection effect and embedded bias. As regards the selection effect, an algorithm running on one database may cease to implement its public functionality effectively for the purposes of another department, or may generate data in the same segment but from a random sample of the source data [13]. The question of data bias turns on how widely the applicable norm is used in administrative practice and whether a representative sample of facts has been formed, a point systematically highlighted by both domestic and foreign scholars [14-16]. A minimal illustration of the selection effect is sketched below.
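One possible reading of the selection effect in code, under stated assumptions: the sketch below invents a synthetic "department A" sample with its own decision practice, fits a simple classifier to it, and then scores the same model on "department B", whose cases and decision threshold are shifted. The names, distributions and thresholds are all hypothetical.

```python
# Hypothetical illustration of the selection effect: a model fitted on one
# department's case sample degrades on another department whose cases (and
# local decision practice) follow a shifted distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cases(n, shift):
    """Synthetic 'cases': 5 features; the outcome threshold tracks the shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 5 * shift).astype(int)
    return X, y

X_a, y_a = make_cases(5000, shift=0.0)   # department whose records trained the model
X_b, y_b = make_cases(5000, shift=1.5)   # department where the model is later deployed

model = LogisticRegression().fit(X_a, y_a)
print("accuracy, department A:", round(model.score(X_a, y_a), 3))
print("accuracy, department B:", round(model.score(X_b, y_b), 3))  # drops toward chance
```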
The basic solution to these problems may be found in the following three tools (individually or in combination):

It should be understood that this bundle may still preserve the risks of unethical decision-making and of an unreliable sample underlying an AI decision (the human factor), as well as the potential impact of AI on the basic rights provided for by the Constitution of the Russian Federation (Article 19, Part 2). At the same time, these measures will later make human biases and mistakes more noticeable. It should also be borne in mind that AI technology can be used not only for the benefit of the state but also against the measures currently provided for by legislation: the whole complex of rules embedded in an autonomous AI decision can be empirically identified, and strategic countermeasures can then be formulated, a possibility created by the very implementation of the principles of legal certainty [19]. Among such countermeasures are the creation of information noise in the AI databases used (especially databases formed outside the perimeter of the competent authorities) and behavioral adaptation, whereby a controlled subject who does not risk being included in the sample becomes less "law-abiding". A sketch of such counterfactual probing of an autonomous decision follows.
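A short, purely hypothetical Python sketch of such probing, in the spirit of the counterfactual perspective of [17-18] (the decision rule, attribute names and thresholds are invented; nothing here reproduces any real agency system): an outside party queries an opaque decision function, flipping one attribute at a time, and learns exactly which behaviour keeps a case out of the flagged sample.

```python
# Hypothetical counterfactual probe of an opaque decision function: flip one
# attribute at a time and record whether the decision changes. This is the
# mechanism by which a controlled subject could adapt its behaviour.
from typing import Callable, Dict

def blackbox_decision(case: Dict[str, float]) -> bool:
    """Stand-in for an agency's opaque AI; assume the analyst cannot read this."""
    return case["declared_income"] < 20_000 and case["benefit_claims"] > 3

def counterfactual_probe(decide: Callable[[Dict[str, float]], bool],
                         case: Dict[str, float],
                         tweaks: Dict[str, float]) -> Dict[str, bool]:
    """For each attribute, report whether the tweak flips the baseline decision."""
    baseline = decide(case)
    return {attr: decide(dict(case, **{attr: val})) != baseline
            for attr, val in tweaks.items()}

flagged_case = {"declared_income": 15_000, "benefit_claims": 5}
print(counterfactual_probe(blackbox_decision, flagged_case,
                           {"declared_income": 25_000, "benefit_claims": 2}))
# -> {'declared_income': True, 'benefit_claims': True}: both attributes drive
# the flag, so a strategic subject knows exactly what to change.
```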
These doctrinal studies of the integration of AI into public legal relations are reflected in the practical activities of authorities in various countries. The Dutch authorities, as part of their control of social security fraud, used the SyRI system [20], which drew on a large array of data but was insufficiently transparent about its data processing and the empirical results it produced. As a result, the authority was prohibited from using the algorithm because its principles were inconsistent with Article 8 of the ECHR (European Convention on Human Rights) [21]. The Ministry of Labor and Social Policy of Poland used a highly specialized AI that was supposed to categorize potential recipients of unemployment benefits [22]. The binary output of the AI was formally subject to verification by the ministry's employees; in practice, however, through negligence the AI's position went undisputed by an employee in almost 100% of cases. Following an analysis of the AI's actions for transparency, the Polish Constitutional Tribunal found the product unconstitutional [23]. The Unemployment Insurance Agency of the State of Michigan (USA) used a tool similar to SyRI to detect fraud in the receipt of unemployment benefits. That AI made fully autonomous decisions on the recovery of benefits, with no possibility of appealing a decision within the agency [24], which subsequently entailed litigation and corresponding penalties for the AI developer [25]. Based on the above, in order to form a balanced approach to the use of AI for the purposes of public legal relations, the following is proposed:

References
1. Zwischenbericht der Arbeitsgruppe "Digitaler Neustart" zur Frühjahrskonferenz der Justizministerinnen und Justizminister am 6. und 7. Juni 2018 in Eisenach. — URL: www.justiz.nrw.de/JM/schwerpunkte/digitaler_neustart/zt_fortsetzung_arbeitsgruppe_teil_2/2018-04-23-Zwischenbericht-F-Jumiko-2018%2D%2D-final.pdf (accessed: 21.02.2023).
2. Proposal for a Regulation on promoting fairness and transparency for business users of online intermediation services (COM(2018) 238 final / 2018/0112 (COD)). — URL: https://eur-lex.europa.eu/procedure/EN/2018_112 (accessed: 21.02.2023).
3. The initial proposal (Int. 1696–2017) would have added the cited text to Section 23-502 of the Administrative Code of the City of New York; the law as finally passed only established a task force designated to study how city agencies currently use algorithms. — URL: legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-62E1-47E2-9C42-461253F9C6D0 (accessed: 21.02.2023).
4. Burrell J. How the machine 'thinks': Understanding opacity in machine learning algorithms // Big Data & Society. – 2016. – Vol. 3. – No. 1. – Art. 2053951715622512.
5. Ananny M., Crawford K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability // New Media & Society. – 2018. – Vol. 20. – No. 3. – P. 973-989.
6. Fenster M. The Transparency Fix: Secrets, Leaks, and Uncontrollable Government Information. – Stanford University Press, 2017.
7. Grey C., Costas J. Secrecy at Work: The Hidden Architecture of Organizational Life. – Stanford University Press, 2016.
8. Martynov A.V., Bundin M.V. On the Legal Principles of Exploiting Artificial Intelligence in Executing Control and Supervisory Activities by Executive Authorities // Journal of Russian Law. – 2020. – No. 10. – P. 59-75.
9. Leese M. The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union // Security Dialogue. – 2014. – Vol. 45. – No. 5. – P. 494-511.
10. Bundesanstalt für Finanzdienstleistungsaufsicht. Big Data trifft auf künstliche Intelligenz. Herausforderungen und Implikationen für Aufsicht und Regulierung von Finanzdienstleistungen. – 2018. — URL: www.bafin.de/SharedDocs/Downloads/DE/dl_bdai_studie.html (accessed: 21.02.2023).
11. Tutt A. An FDA for Algorithms // Administrative Law Review. – 2017. – Vol. 69. – P. 83.
12. IBM. Continuous relevancy training. — URL: console.bluemix.net/docs/services/discovery/continuous-training.html#crt (accessed: 21.02.2023).
13. Hermstrüwer Y. Artificial intelligence and administrative decisions under uncertainty // Regulating Artificial Intelligence. – 2020. – P. 199-223.
14. Lehr D., Ohm P. Playing with the data: what legal scholars should learn about machine learning // UC Davis Law Review. – 2017. – Vol. 51. – P. 653.
15. Vorobyova I.B. Ethical aspects of the use of artificial intelligence systems in crime investigation // Bulletin of the Saratov State Law Academy. – 2022. – No. 4 (147). – P. 162-172.
16. Kharitonova Yu.S., Savina V.S., Pagnini F. Artificial Intelligence's Algorithmic Bias: Ethical and Legal Issues // Perm University Herald. Juridical Sciences. – 2021. – Issue 53. – P. 488-515. (In Russ.). DOI: 10.17072/1995-4190-2021-53-488-515.
17. Cowgill B., Tucker C. Algorithmic bias: A counterfactual perspective // NSF Trustworthy Algorithms. – 2017.
18. Lewis D. Counterfactuals. – Cambridge, MA: Harvard University Press, 1973.
19. Constitutional and legal protection of entrepreneurship: current aspects (based on the decisions of the Constitutional Court of the Russian Federation in 2018-2020) (approved by the decision of the Constitutional Court of the Russian Federation on 17.12.2020). — URL: http://www.consultant.ru/document/cons_doc_LAW_374913/ (accessed: 21.02.2023).
20. SyRI legislation in breach of European Convention on Human Rights. — URL: https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-legislation-in-breach-of-European-Convention-on-Human-Rights.aspx (accessed: 21.02.2023).
21. District Court of The Hague, 6 March 2020, ECLI:NL:RBDHA:2020:865. — URL: uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878 (accessed: 21.02.2023).
22. Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making. — URL: https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf (accessed: 21.02.2023).
23. Koniec profilowania bezrobotnych [The end of profiling the unemployed]. — URL: https://www.prawo.pl/kadry/bezrobotni-nie-beda-profilowani-utrudnialo-to-ich-aktywizacje,394701.html (accessed: 21.02.2023).
24. Michigan's MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold. — URL: https://spectrum.ieee.org/michigans-midas-unemployment-system-algorithm-alchemy-that-created-lead-not-gold#toggle-gdpr (accessed: 21.02.2023).
25. Cahoo v. SAS Analytics Inc., Nos. 18-1295/1296. — URL: https://casetext.com/case/cahoo-v-sas-analytics-inc (accessed: 21.02.2023).