Law and Politics

Reference:
Atabekov A.R. Creation and application of artificial intelligence for public purposes: a comparative legal analysis // Law and Politics. 2023. No. 6. P. 59-68. DOI: 10.7256/2454-0706.2023.6.40848. EDN: IIGGCY. URL: https://en.nbpublish.com/library_read_article.php?id=40848
Creation and application of artificial intelligence for public purposes: a comparative legal analysis.
DOI: 10.7256/2454-0706.2023.6.40848
EDN: IIGGCY
Received: 27-05-2023
Published: 06-06-2023

Abstract: The article presents a comparative analysis of existing approaches to the compliance of artificial intelligence (AI) designed for public purposes, in foreign countries and in Russia, with current national strategies and regulatory approaches. The study identifies the basic problems of transparency in AI decision-making; specifies the implicitly regulatory challenges for AI in the public sphere that arise from the technical design choices made by developers; proposes theoretical and practical scenarios in which AI fails to comply with the principle of designing AI on the basis of fundamental legal norms; and outlines possible compensatory legal measures to ensure the safe integration of artificial intelligence into the Russian public sphere. The subject of the study is the influence of the design of artificial intelligence on its subsequent application in the public sphere. The object of the study comprises the normative documents, recommendations, and other materials regulating the application of artificial intelligence in public legal relations in Russia and foreign countries, as well as judicial practice, academic publications, and analytical reports on the issues under study. The research methodology integrates modern philosophical, general scientific, and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutical, comparative legal, and formal legal (dogmatic) methods. Special emphasis is placed on a comparative legal study of the problems of designing artificial intelligence and of its subsequent application in the public law field, in the context of the problems inherent in the basic stage of its creation; potential scenarios for regulating responsibility for AI actions are then identified. The research results in a set of measures that can be applied in the legislative and law enforcement practice of the authorities integrating artificial intelligence into the sphere of public relations in Russia, as well as in the academic field, in order to determine subsequent vectors for minimizing AI bias resulting from incorrect technology design in violation of basic legal structures.

Keywords: artificial intelligence, electronic person, comparative legal research of AI, technical design, AI bias, secure AI, public law, administrative law, information law, law enforcement practice

This article is automatically translated.
Governments of different countries increasingly rely on technical and information systems in law enforcement practice. For example, government officials use computer systems to sentence defendants, to approve or deny state benefits, to predict the locations of future crimes, and to impose bans on crossing the state border [1-2]. In each case, technology is used to make important decisions concerning individual legal rights or the allocation of public resources.

One important insight is that technologies can carry "values" (that is, policies and technical rules) embedded in their design [3]. Here we mean the influence of technical design (the convenience of the interface, its accessibility, its intelligibility to the user, its assumptions about legal awareness, etc.), which subsequently generates value effects of its own, including by forming preferences for specific social subgroups. In this logic, an interesting starting point is the position of Lessig, who noted that the engineering architecture of a technological system can be regarded as an analogue of legal regulation (in the context of prohibitions and permissions) [4]. He observed that while laws regulate society explicitly and openly, technical design can influence behavior or societal "values" in a similar way, but often far less visibly. Internet technology itself, one of the pillars of the current technological order, ran into exactly this problem with anonymity as its engineering solutions were implemented [5]: on the one hand, anonymity ensured freedom of speech and behavior; on the other, it made it difficult to identify criminals and extremists. Against this background, technical design becomes an implicit regulatory tool that lies beyond the control of the government and is left to developers. A minimal illustration of how a design choice can encode policy is sketched below.
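To make Lessig's point concrete, consider a purely hypothetical sketch: a pre-screening function for benefit applications in which a numeric cutoff chosen by the developer, and appearing in no statute, quietly determines who reaches a human official. All names and numbers here are invented for illustration and correspond to no real benefits system.

```python
# Hypothetical illustration: developer-chosen constants quietly encode policy.

INCOME_CUTOFF = 19_000  # chosen by the developer, not written in any statute


def screen_applicant(annual_income: float, dependants: int) -> bool:
    """Pre-screen a benefits application before a human official sees it."""
    # Each dependant raises the effective cutoff; the step size (2_500) is
    # another embedded design decision with distributive consequences.
    return annual_income <= INCOME_CUTOFF + 2_500 * dependants


# Moving INCOME_CUTOFF by a few hundred units silently redraws the circle of
# eligible citizens -- regulation by architecture, in Lessig's sense.
print(screen_applicant(20_000, 0))  # False
print(screen_applicant(20_000, 1))  # True
```

The "regulation" here lives in two constants that no legislature ever voted on, which is precisely why the design stage matters for public law.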
Turning to France's strategic national documents on regulatory policy toward artificial intelligence (hereinafter, AI), attention should be paid to their emphasis on developing not regulatory but ethical requirements for AI, including: the introduction of explainable tools for auditing AI systems, additional tools for monitoring the discriminatory impact of AI, safeguarding the exclusive role of human judgment and the impossibility of its substitution by AI, and other public measures (the creation of expert councils, etc.) [6]. A different position is presented in the strategic document of the US President, which lacks such provisions: its entire regulatory emphasis is on creating the necessary financial conditions for R&D and on the safe integration of AI into public administration from the standpoint of data security and manageability [7]. The German national AI strategy contains a separate section on regulatory approaches to this technology [8]. In defining Germany's regulatory policy on AI, significant emphasis is placed on observance of the fundamental rights that underpin the legal system of the Federal Republic of Germany and are enshrined in its Constitution, including, in particular, general freedom of action, protection of citizens' privacy, and their control over their personal data. It is noted that current legislation needs to be revised in light of the technological development of AI and of the aspects still unresolved for this technology. Separately, it is proposed to entrench control over technical design from the standpoint of user safety.

For the strategy of AI implementation in the sphere of public legal relations in Russia, reference should be made to the Decree of the President of the Russian Federation [9] and the Decree of the Government of the Russian Federation [10]. Presidential Decree No. 490 points to the need for a favorable data access regime, AI testing, removal of administrative barriers for AI products, development of unified standardization systems, stimulation of investment, and the development of ethical rules; the creation of a flexible AI regulation system is planned by 2030. On regulating the technical design of AI specifically, subsection 4 of Section 1 of the Government Decree sets out the basic legal problems requiring solutions: the balance between personal data protection and AI training, the definition of the subject and boundaries of AI regulation, the identification of AI systems when they interact with a person, the legal delegation of decisions to AI, responsibility for AI actions, AI transparency, etc. Among the regulatory principles fixed in this document is the developers' obligation to design AI in compliance with the law, i.e. the use of an AI system must not foreseeably lead the developer into violation of legal norms.

At the doctrinal level, the implementation of such implicit strategies in practical law enforcement is noted by Solon Barocas and Andrew Selbst, who point to racial or ethnic biases that may arise in algorithmic decision-making [11] (a simple numerical check for such disparate impact is sketched at the end of this overview). Joshua Kroll, Edward Felten, and Danielle Citron draw attention to the lack of accountability in computer-based decision-making, which is increasingly common in algorithmic government decisions [12-13]. Domestic scholars also identify significant problems arising from technical design and its regulatory features. D.V. Bakhteev, in an extensive study, notes the need to take into account, already at the design stage, respect for the basic constitutional rights of man and citizen, the competence of the developer, and a number of other requirements aimed at reducing the risks of misuse of AI [14]. V.E. Karpov et al. focus on the absence of ethical verification in the design and development of AI, which produces a practice of correcting AI actions after failures rather than a proactive response from the developer [15]. A.V. Minbaleev [16] and O.A. Yastrebov with M.A. Aksenova [17] raise the further question of recognizing separate legal personality for AI as an "electronic person", which may be of fundamental importance at the notional stage of its "birth".
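As a minimal numerical illustration of the disparate-impact concern raised by Barocas and Selbst [11], one can compare selection rates across groups. The data below are invented, and the "four-fifths" threshold is the informal benchmark from US employment practice, not a rule prescribed by the sources discussed here.

```python
# Sketch of a disparate-impact check: compare selection rates between groups.
# The decision data are invented for illustration.

def selection_rate(decisions: list[bool]) -> float:
    """Share of positive (approved) decisions in a group."""
    return sum(decisions) / len(decisions)


group_a = [True, True, True, False, True, True, False, True]      # 75% approved
group_b = [True, False, False, True, False, False, False, False]  # 25% approved

ratio = selection_rate(group_b) / selection_rate(group_a)
# Under the informal "four-fifths rule", a ratio below 0.8 signals
# potential disparate impact and warrants scrutiny of the algorithm.
print(f"impact ratio: {ratio:.2f}")  # 0.33 -> warrants scrutiny
```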
Turning to practical examples, consider the US experience with the MiDAS technology. As part of automated analysis of fraudulent claims for unemployment benefits, the Unemployment Insurance Agency of the State of Michigan implemented this algorithm in its enforcement practice. A distinctive feature was the absence, within that agency, of any procedure for challenging the AI's decision. The consequence of the algorithm's incorrect operation was a large circle of victims of the agency's actions, who went to court and appealed the decisions wrongfully made by the AI [18]. In the operative part of the court's decision, guilt for the AI's actions is imputed to its developers.

The second notable example is SyRI, likewise used to detect fraud in the field of social security [19]. In this case, however, it was not so much the actions of the AI itself that were challenged in court as the "black box" problem [20]: the algorithm behind the machine-learning decision was not intelligible. The reasoning part of the judgment rests on a violation of Article 8 of the European Convention on Human Rights in the use of this technology by civil servants.

At the same time, underlying the above cases, on the part of both civil servants and users, is a certain predisposition toward AI as a supposedly more impartial decision-maker. In this regard, it is worth recalling the provisions of the Arbitration Procedure Code of Russia, which explicitly allow the court to decide on the basis of its inner conviction (Part 1, Article 71 of the APC of the Russian Federation [21]). Discussing this issue, one cannot fail to mention a case from a related field of law in foreign practice involving the COMPAS program, which assesses a defendant's risk of recidivism [22]. It should be understood that the formally recommendatory report in fact masks a series of subjective judgments by the system's developers: which data sources to use to build the predictive model, which parts of the selected data to include or exclude, how to weight the data, which methods to use for analysis, and so on. Yet because the recommendation is generated by an automated system through a seemingly mechanistic process and is presented in strict computational form, the result can give a misleading impression of almost mathematical objectivity within the framework of the law. Because of this aura of mechanistic objectivity, judges and other officials may give more weight to computer recommendations than to comparable human assessments. This human tendency to unreasonably attribute value neutrality to technological decision-making (as compared with people in a similar position) and to rely on the apparent precision of mathematical, data-driven analysis should be thoroughly investigated in the context of technological systems that affect the legal evaluation of evidence and judgments. The sketch below illustrates how developer choices can drive a nominally objective risk score.
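A minimal sketch of the point about masked subjective judgments: the two scoring models below are equally "mechanistic", yet the developers' choice of weights alone moves the same defendant across the risk threshold. The features, weights, and threshold are all invented for illustration; they are not COMPAS's actual design, which is proprietary.

```python
# Two equally "mathematical" risk models, differing only in developer choices.

def risk_score(features: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of defendant features -- the usual linear-score form."""
    return sum(weights[k] * features[k] for k in weights)


defendant = {"prior_offences": 2, "age_at_first_offence": 19, "employment_gap_years": 1}

# The same inputs, two sets of developer-chosen weights:
weights_v1 = {"prior_offences": 0.50, "age_at_first_offence": -0.02, "employment_gap_years": 0.10}
weights_v2 = {"prior_offences": 0.10, "age_at_first_offence": -0.02, "employment_gap_years": 0.20}

THRESHOLD = 0.7  # the cut-off between "low" and "high" is itself a design choice

for w in (weights_v1, weights_v2):
    score = risk_score(defendant, w)
    print("high risk" if score > THRESHOLD else "low risk", round(score, 2))
# -> high risk 0.72
# -> low risk 0.02
```

The same defendant is "high risk" under one model and "low risk" under the other; nothing in the computational form of the output reveals that difference to the judge.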
However, it should be borne in mind that legal norms carry a degree of uncertainty, both in establishing guilt and in determining its measure. The Code of Administrative Offences of the Russian Federation, for instance, allows a judge, body, or official to take aggravating and mitigating factors into account when determining the circumstances of an offense, which in some cases directly affects the type of administrative punishment, its size, and its duration (Chapters 3 and 4 of the Administrative Code of the Russian Federation [23]). Accordingly, one of the main functions of officials is to resolve these uncertainties when applying the law to specific circumstances, taking into account the rule of law itself, the general practice of its application, judicial practice, and state policy. Society often does not know the final answer to such legal uncertainty until a specific legal official makes a binding, final determination, preferring one set of possible arguments and interpretations to others. It should be understood that digitization of this process will proceed by exactly the same algorithms, including preferences among "judgments" and the arguments of one side over other approaches. At the same time, the set of implicit knowledge and signs that an official, as a person, can take into account is not directly fixed by current legislation, by the procedural rules for evaluating evidence, or by other by-laws.

Based on the above, the following measures are proposed:

1. Introduction of additional administrative penalties for developers of AI systems that fail to comply with the principle of designed-in compliance with the law, in the form of disqualification and a ban on carrying out such activities.

2. Formation and legislative consolidation of a register of AI technologies, in which the user can see the AI's scope of application, its developer, the volume of users, its technical reliability (the mathematical and other tools used, the facts and extent of hacking), and its administrative activity from the standpoint of the law (the number of violations detected, judicial appeals, etc.); a hypothetical entry structure is sketched after this list.

3. When AI is used for public purposes of supervision and control: observance of the transparency of AI algorithms (for developers), preservation of a procedure for challenging AI actions before an official, introduction of mandatory additional qualification requirements for the education and knowledge of the officials considering such complaints, and a ban on the use of foreign AI systems for public purposes.
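As a purely hypothetical sketch of the register proposed in item 2, one possible entry structure is shown below. The field names simply mirror the attributes listed above; they do not reflect any existing or draft registry schema.

```python
# Hypothetical structure for one entry in the proposed register of AI
# technologies; fields mirror the attributes listed in item 2 above.

from dataclasses import dataclass, field


@dataclass
class AIRegistryEntry:
    system_name: str
    developer: str
    scope_of_application: str                               # the public function the AI serves
    user_count: int                                         # volume of users
    methods_used: list[str] = field(default_factory=list)   # mathematical and other tools
    known_breaches: int = 0                                 # facts and extent of hacking
    violations_detected: int = 0                            # administrative activity
    court_appeals: int = 0                                  # judicial appeals against its decisions


# Invented example entry:
entry = AIRegistryEntry(
    system_name="ExampleFraudScreener",
    developer="Example Developer LLC",
    scope_of_application="screening of benefit applications",
    user_count=120_000,
    methods_used=["gradient boosting"],
)
print(entry)
```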
References

1. Citron, D. K. (2007). Technological due process. Washington University Law Review, 85, 1249.
2. Roth, A. (2015). Trial by machine. Georgetown Law Journal, 104, 1245.
3. Stilgoe, J., & Guston, D. (2016). Responsible research and innovation. MIT Press.
4. Lessig, L. (2006). Code 2.0: Code and other laws of cyberspace.
5. Goldberg, I., & Wagner, D. (1998). TAZ servers and the rewebber network.
6. Villani, C., et al. For a meaningful artificial intelligence: Towards a French and European strategy. Conseil national du numérique. URL: https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf (Accessed: 17.05.2023).
7. Maintaining American Leadership in Artificial Intelligence. A Presidential Document by the Executive Office of the President, 02/14/2019. URL: https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence (Accessed: 17.05.2023).
8. Germany AI Strategy Report. URL: https://ai-watch.ec.europa.eu/countries/germany/germany-ai-strategy-report_en (Accessed: 17.05.2023).
9. Decree of the President of the Russian Federation of October 10, 2019 No. 490 "On the development of artificial intelligence in the Russian Federation". URL: https://www.garant.ru/products/ipo/prime/doc/72738946/ (Accessed: 17.05.2023).
10. Decree of the Government of the Russian Federation of August 19, 2020 No. 2129-r "On approval of the Concept for the development of regulation of relations in the field of artificial intelligence technologies and robotics for the period up to 2024". URL: https://www.garant.ru/products/ipo/prime/doc/74460628/ (Accessed: 17.05.2023).
11. Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104, 671-732.
12. Kroll, J. A. (2015). Accountable algorithms (Doctoral dissertation, Princeton University).
13. Citron, D. K. (2007). Technological due process. Washington University Law Review, 85, 1249.
14. Bakhteev, D. V. (2019). Risks and ethical and legal models of using artificial intelligence systems. Legal Research, 11, 1-11.
15. Karpov, V. E., Gotovtsev, P. M., & Roizenzon, G. V. (2018). On the issue of ethics and artificial intelligence systems. Filosofiia i obshchestvo, 2, 84-105.
16. Minbaleev, A. V. (2018). Problems of regulating artificial intelligence. Bulletin of the South Ural State University. Series: Law, 18(4), 82-87.
17. Yastrebov, O. A., & Aksenova, M. A. (2022). The law issues of impact of artificial intelligence on the administrative regime for combating money laundering and terrorism financing. Legal Policy and Legal Life, 3, 84-109.
18. Michigan's MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold. URL: https://spectrum.ieee.org/michigans-midas-unemployment-system-algorithm-alchemy-that-created-lead-not-gold#toggle-gdpr (Accessed: 17.05.2023).
19. SyRI legislation in breach of European Convention on Human Rights. URL: https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-legislation-in-breach-of-European-Convention-on-Human-Rights.aspx (Accessed: 17.05.2023).
20. District Court of The Hague, 6 March 2020, ECLI:NL:RBDHA:2020:865. URL: uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878 (Accessed: 17.05.2023).
21. Arbitration Procedure Code of the Russian Federation of July 24, 2002 No. 95-FZ // Collected Legislation of the Russian Federation, July 29, 2002, No. 30, Art. 3012.
22. State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing. Harvard Law Review. URL: https://harvardlawreview.org/print/vol-130/state-v-loomis/ (Accessed: 17.05.2023).
23. Code of the Russian Federation on Administrative Offenses of December 30, 2001 No. 195-FZ // Collected Legislation of the Russian Federation, 07.01.2002, No. 1 (Part 1), Art. 1.