Reference:
Volkov V.E. Legal recognition of artificial intelligence technologies in the context of the constitutional values of the Russian state // Legal Studies. 2023. No. 3. P. 51-61. DOI: 10.25136/2409-7136.2023.3.40425 EDN: MVQFHJ URL: https://en.nbpublish.com/library_read_article.php?id=40425
Legal recognition of artificial intelligence technologies in the context of the constitutional values of the Russian state
DOI: 10.25136/2409-7136.2023.3.40425
EDN: MVQFHJ
Received: 06-04-2023
Published: 13-04-2023
Abstract: The purpose of the article is to form approaches to the public-law regulation of artificial intelligence technologies. The subject of the work is the social relations that have developed around the legal formalization of modern digital technologies of "weak" artificial intelligence: computer vision, natural language processing, speech recognition and synthesis, and intelligent decision support. The relevance of the research is determined by the need to bring the content of legal acts into line with the current level of information technology development. The work is based on a combination of general philosophical, general scientific and special methods of cognition: concrete historical analysis, the formal legal (dogmatic) method, and the method of comparative jurisprudence. The author criticizes the anthropomorphic approach to understanding artificial intelligence, based on an analogy between the technology and the human mind and embodied in the terminology of existing legal acts. It is proposed to limit the legal interpretation of artificial intelligence to a set of specific information technologies that a person can use to solve applied problems. The article analyzes possible directions for the constitutional-legal recognition of artificial intelligence technologies and the legal consequences of their influence on the implementation of the constitutional values of equality and privacy, and argues for a combination of social and technical regulation of relations in the field of artificial intelligence technologies.
Keywords: artificial intelligence, digital technology, information technology, public law, constitutional values, human rights, privacy, equality, discrimination, technical regulation
This article is an automatic translation.
It is customary to begin works on the legal regulation of artificial intelligence systems with a statement of the decisive importance of digital technologies for scientific and technological progress. It is often predicted that the machine mind will replace the human mind, which could transform law, mechanize it and rid it of the vices of human nature. However, such expectations are not always confirmed by specialists in the exact and natural sciences [1]. The technical literature defines artificial intelligence as a set of information technologies that can help a person solve specific applied tasks but possess no superhuman capabilities in comparison with people. The legal characterization of artificial intelligence technologies must therefore proceed from the fact that the term "intelligence" itself has no unambiguous and universally recognized definition: approaches to its understanding differ significantly depending on the interpreter's philosophical school. Moreover, modern physiology does not provide an exhaustive understanding of the principles and mechanisms of human higher nervous activity as the prototype of artificial intelligence. A definition of artificial intelligence based on an analogy with the manifestations of reason in living nature can therefore hardly be justified at the current level of knowledge [2]. The use of the terms "artificial intelligence", "neural network" and "machine learning" may denote the development of new digital technologies, but it does not require analogies with the intelligence of living organisms. The anthropomorphic understanding of artificial intelligence is present not only in everyday consciousness but is also embodied in the terminology of legal acts. The normative definition of artificial intelligence in paragraph 1 of Article 2 of Federal Law No.
123-FZ of 24 April 2020 "On Conducting an Experiment to Establish Special Regulation in Order to Create the Necessary Conditions for the Development and Implementation of Artificial Intelligence Technologies in the Subject of the Russian Federation - the Federal City of Moscow and Amendments to Articles 6 and 10 of the Federal Law 'On Personal Data'" proceeds from its ability to imitate human cognitive functions and involves comparing the results of its work with the results of human intellectual activity. The National Strategy for the Development of Artificial Intelligence, describing machine learning technology, states that neural networks are organized by analogy with the human brain (see Decree of the President of the Russian Federation No. 490 of 10 October 2019 "On the Development of Artificial Intelligence in the Russian Federation"), although it is known that in the prototypes from the animal world the scale and principles of human brain activity, and the connections and functions of neurons in the nervous system, have not yet been clarified [2]. We believe that characterizing artificial intelligence by analogy with the human mind, or with wildlife in general, is overly optimistic. It is more productive to view artificial intelligence as a complex of information technologies that allow data to be processed and analyzed in order to perform specific tasks, outside any manifestations of human mental activity. The contested understanding of artificial intelligence in Russian legal doctrine may also stem from discrepancies in translating the English term "artificial intelligence". Unlike the traditional Russian translation, in English this term carries no anthropomorphic meaning: the word "intelligence" means rather "the ability to make the right decisions" or "reasonable reasoning" than human intellect, for which English has the separate word "intellect" [3, p. 10].
The inconsistency of the terminology used in the field of artificial intelligence is acknowledged in programmatic documents. The Concept for the Development of Regulation of Relations in the Field of Artificial Intelligence and Robotics Technologies, approved by Decree of the Government of the Russian Federation No. 2129-r of 19 August 2020, takes into account the lack of a clear understanding of such terms as "artificial intelligence", "robot", "smart robot", "robotics" and "intelligent agent". To eliminate contradictions, it proposes using different definitions depending on the field in which artificial intelligence technologies are applied, and avoiding the introduction of a single definition of these terms into the legislation of the Russian Federation. Where definitions are needed in regulatory legal acts, it proposes using the terminology provided by technical standards or giving definitions relevant to the particular area of regulation. It is likely, however, that these ideas will not be fully implemented. The national standard of the Russian Federation PNST 553-2021 "Information technologies. Artificial intelligence. Terms and definitions" [4], adopted after the approval of the Concept, uses the same definition of artificial intelligence, with its unjustified anthropomorphism: a set of technological solutions that makes it possible to simulate human cognitive functions (including self-learning and finding solutions without a predetermined algorithm) and to obtain, when performing specific tasks, results comparable at least to the results of human intellectual activity. This definition is recommended for use in regulatory documents; legal, technical, organizational and administrative documentation; and scientific, educational and reference literature. The contested nature of the definition of artificial intelligence does not allow a solid basis to be created for its recognition as an object of legal regulation.
It may be more productive to discuss the regulation of specific existing technical solutions in the field of artificial intelligence, such as computer vision, natural language processing, speech recognition and synthesis, and intelligent decision support. Limiting the subject of legal regulation to the listed technologies corresponds to the theoretical distinction between strong and weak artificial intelligence. The current level of development of artificial intelligence technologies corresponds to the characteristics of "weak" artificial intelligence, which is able to solve only highly specialized tasks. The possibility of creating a "strong" artificial intelligence capable of interacting with the environment at the level of human consciousness has not been proven. Attempts to regulate relations concerning artificial intelligence as such, in isolation from its concrete historical manifestations, would therefore mean an unjustifiably wasteful use of the normative resource of law, and their doctrinal justification is unlikely to go beyond scientific discussion. At the same time, regulating relations connected with the use of "weak" artificial intelligence technologies is a very urgent task. Systems created on their basis can already function autonomously; they cannot directly perceive ethical and legal norms and may not take them into account when making decisions. This feature of autonomous systems can have a serious social effect, including an effect on the constitutional foundations of the Russian state and society. Below we consider the implementation of "weak" artificial intelligence technologies in light of the values that are given constitutional importance in the Russian Federation. With the growing automation of state and public life, ideas about including provisions on artificial intelligence in the Constitution of Russia are increasingly being expressed.
Constitutional amendments are proposed to expand guarantees of privacy and to "consolidate at the constitutional level the principle of openness of algorithms in order to make the work of artificial intelligence transparent" [5, pp. 123-128]. We believe that, despite the importance of these issues, they should not be resolved by changing or supplementing constitutional norms. First, most of the relations proposed to be improved by changing the Constitution concern the legal status of the individual. The relevant rules are in the second chapter of the Constitution and cannot be changed by amendment; their modification is possible only by revising the entire Constitution, which is hardly commensurate with the task of reflecting the features of highly volatile information technologies. Second, proposals to amend the Constitution, as a rule, concern not specific artificial intelligence technologies but artificial intelligence as a whole, as a social phenomenon. At this historical stage, however, it makes sense to regulate individual artificial intelligence technologies, while the phenomenon of artificial intelligence itself is too amorphous for constitutional formalization. Finally, the development of artificial intelligence technologies has not led to the creation of new constitutional norms in the technologically leading countries: the United States, China or the states of the European Union. The absence of such norms has evidently been an obstacle neither to the realization of the constitutional status of the individual nor to the development of new technologies. The prematurity of constitutional amendments does not, however, exclude the need to assess the impact of artificial intelligence technologies on the constitutional-legal status of the individual.
In this context it is important to choose the right direction of scientific research, since the fertile topic of legal futurology offers many seemingly interesting subjects, not all of which truly fall within the scope of the methodology of legal science. Thus, despite the futility of reasoning about the legal personality of modern artificial intelligence systems by analogy with the status of a person, the scientific literature is full of arguments about the responsibility of artificial intelligence carriers, their volitional acts and other manifestations of an almost human reason. We believe they cannot be considered factors of legal reality, and not only because of the relative underdevelopment of artificial intelligence technologies. The assertion of the legal personality of artificial intelligence contradicts the constitutional logic according to which the human being, with his rights and freedoms, is the highest value. If technological progress were to lead to the emergence of strong artificial intelligence, this would entail a complete revision of the entire national, and possibly international, legal order; there is no place for electronic persons in the system of the existing constitution. It should also be borne in mind that the debate about whether artificial intelligence systems should have legal personality is conducted primarily in the interests of large technology companies. Giving artificial intelligence systems rights and responsibilities largely corresponds to their creators' desire to escape responsibility for the actions of autonomous systems and to shift it to users or to the carriers of artificial intelligence themselves. If such systems are recognized as subjects of law, their legal status will be illusory, which will most likely lead to the actual irresponsibility of their creators.
The recognition of the legal personality of artificial intelligence systems therefore seems a very distant prospect, and not necessarily a favorable one for humanity. More fruitful is an interest in assessing the impact of artificial intelligence technologies on the implementation of the constitutional principle of equality before the law and the courts. Although at first glance machines have significant advantages over people owing to their impartiality, in reality the ideal of their equal treatment of different people is still difficult to achieve. Moreover, modern machine learning has features that can lead to the entrenchment and amplification of bias against certain social groups, especially minorities. The data used to train artificial intelligence systems may contain social prejudices and may also be incomplete or poorly organized. In addition, there is the possibility of deliberately creating models that exploit consumer preferences for the purposes of unfair competition. If data sets contain biases, machine learning will reproduce them, and this is highly likely to increase their social significance. There are known cases when the use of large data sets led to automated decisions that increased the inequality of people's opportunities depending on their race [6], gender [7] and social status [8]. For example, modern facial recognition systems perform significantly worse when recognizing the faces of dark-skinned women than those of light-skinned men [6]. It is therefore important to pay special attention to situations in which artificial intelligence technologies are used for autonomous decision-making concerning representatives of vulnerable social groups, such as children and people with disabilities.
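Disparities of the kind reported for facial recognition systems can be made measurable by computing a model's error rate separately for each group. The sketch below uses invented predictions purely for illustration; the group labels and figures are assumptions, not data from the cited study.

```python
# Hypothetical illustration: the same model's error rate, measured per group.
# All samples below are invented for the example.
samples = [
    # (group, true_label, predicted_label)
    ("light-skinned men", "M", "M"), ("light-skinned men", "M", "M"),
    ("light-skinned men", "M", "M"), ("light-skinned men", "M", "F"),
    ("dark-skinned women", "F", "F"), ("dark-skinned women", "F", "M"),
    ("dark-skinned women", "F", "M"), ("dark-skinned women", "F", "M"),
]

def error_rate_by_group(samples):
    """Count misclassifications separately for each group."""
    stats = {}
    for group, truth, pred in samples:
        errors, total = stats.get(group, (0, 0))
        stats[group] = (errors + (truth != pred), total + 1)
    return {group: errors / total for group, (errors, total) in stats.items()}

rates = error_rate_by_group(samples)
# In this toy data the error rate for one group is three times higher,
# even though a single aggregate accuracy figure would hide the gap.
```

An aggregate accuracy of 50% over this data would conceal the fact that nearly all the errors fall on one group, which is why per-group evaluation is the usual starting point of such audits.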
Threatening situations can also arise in relationships marked by an asymmetry in the distribution of power or information, for example between employers and employees or between entrepreneurs and consumers. The use of machine learning thus jeopardizes the implementation of the principle of equality and requires legal measures to ensure its observance amid the widespread use of imperfect artificial intelligence technologies. Under part 2 of Article 19 of the Constitution, the State guarantees the equality of human and civil rights and freedoms, and the list of circumstances regardless of which equality is guaranteed is open. It therefore hardly makes sense to refine the constitutional formulations: they already provide all the tools needed to strengthen guarantees of equal rights and equal opportunities in the new technological reality. Rather, additional legislative and subordinate regulation is required in the areas most affected by technology. It should include both social norms and technical solutions aimed at preventing the use of technologies that violate the constitutional principle of equal rights and equal opportunities for their realization. Technical solutions may consist in eliminating discriminatory features at the stage of data collection and in the process of training machine learning algorithms. However, even with the best machine learning practices, discrimination may persist if the training data contains biased descriptions. It is therefore advisable to expand the legal possibilities for public control in areas potentially subject to the consequences of discriminatory machine learning, and to apply legal measures that provide feedback between citizens and the developers of artificial intelligence systems.
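The limits of merely deleting discriminatory features can be shown with a toy example: a trivial "model" trained without the protected attribute still reproduces past bias when another feature acts as a proxy for it. All records and field names below are invented for illustration.

```python
# Toy hiring records: 'district' correlates perfectly with the protected
# attribute 'group', and past decisions were biased against group B.
records = [
    {"group": "A", "district": 1, "hired": 1},
    {"group": "A", "district": 1, "hired": 1},
    {"group": "A", "district": 1, "hired": 0},
    {"group": "B", "district": 2, "hired": 0},
    {"group": "B", "district": 2, "hired": 0},
    {"group": "B", "district": 2, "hired": 1},
]

def majority_rule(records, feature):
    """A deliberately trivial 'model': predict the majority past outcome
    for each value of the given feature."""
    totals = {}
    for r in records:
        pos, n = totals.get(r[feature], (0, 0))
        totals[r[feature]] = (pos + r["hired"], n + 1)
    return {value: 1 if pos > n - pos else 0 for value, (pos, n) in totals.items()}

# The protected attribute 'group' is never shown to the model, yet the
# decision learned from 'district' reproduces the historical disparity:
# district 1 (group A) -> hire, district 2 (group B) -> reject.
decisions = majority_rule(records, "district")
```

This is why the text above notes that discrimination may persist even under best practices: the bias lives in the labels, and any feature correlated with group membership carries it into the model.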
Algorithms should be designed so that their processes are transparent to society and allow analysis of the systems' goals and of their inherent technical limitations. An example of the cautious introduction of legal norms on auditing machine learning technologies is the law amending and supplementing the Administrative Code of the City of New York, adopted in 2021 [9]. It prohibits employers from using automated employee selection tools without checking them for bias, requires that applicants be notified when automated systems are used to evaluate candidates for employment or promotion, and provides administrative liability for violations. The widespread use of computer vision and machine learning technologies also requires stronger guarantees of the constitutional right to privacy. Arguments in favor of raising the level of privacy protection literally surround us in the form of surveillance cameras recording thousands of hours of video analyzed in real time. Without artificial intelligence these recordings would be largely useless, since direct human study of them would take a disproportionately long time. Machine learning systems, however, can be inexpensively trained to analyze large amounts of information and to recognize people by facial image, heartbeat, behavior patterns, gait, device MAC addresses and other modalities. The most immediate threat to privacy is the possibility of using machine learning algorithms to deanonymize people. When processing large amounts of data containing sensitive information, such as medical or financial records, anonymization methods are usually applied to protect privacy. With the help of machine learning algorithms, however, it is possible to deanonymize the data and gain access to confidential information.
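In its simplest form, the kind of bias check the New York City law envisages reduces to comparing selection rates across categories. The sketch below computes an "impact ratio" in the spirit of such audits; the figures and the 0.8 benchmark (the conventional "four-fifths rule" from US employment practice) are illustrative assumptions, not the statutory text.

```python
# Hedged sketch of a simple bias-audit metric: each category's selection
# rate divided by the rate of the most-selected category.
# The data below is invented for the example.

def selection_rates(outcomes):
    """outcomes: {category: (selected, total)} -> {category: rate}"""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Compare each category's selection rate with the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

audit = impact_ratios({"men": (40, 100), "women": (25, 100)})
# women: 0.25 / 0.40 = 0.625 -- below the conventional 0.8 benchmark,
# which would flag the tool for closer scrutiny under such an audit.
```

A real statutory audit is of course more detailed (intersectional categories, scoring systems, publication requirements), but the ratio above is the arithmetic core that makes "checking a tool for bias" an operational requirement rather than a declaration.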
For example, machine learning methods can link medical and financial information obtained from different sources and, by comparing the data, determine the characteristics of a particular person without his consent. Another type of threat stems from machine learning's dependence on the availability of large amounts of training data: to achieve high model accuracy, it must be trained on as much data as possible. The concentration of data in centralized databases, however, increases the risk of leaks of information about citizens' private lives. Research shows a significant risk of major leaks from databases created by public authorities, and the risk grows with the amount of data they store. For example, information in the Moscow city surveillance system is available to 16 thousand users, including law enforcement officers, city state and municipal authorities and their subordinate organizations. It is not surprising that leaks happen. In 2020, Anna Kuznetsova, an activist of a human rights organization, ordered on the darknet the tracking data on her own face from Moscow cameras and obtained it for only 16 thousand rubles. This is just one of many examples demonstrating the insufficient security of data on citizens' private lives collected in large databases. Besides the risk of leaks, the concentration of information in centralized databases increases the likelihood of its abuse: the information obtained can be used for mass surveillance and control, which contradicts the basic principles of democracy and the rule of law.
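The linkage described at the start of the paragraph can be demonstrated on invented data: two releases, each "anonymized" on its own, are joined on shared quasi-identifiers (date of birth, postal code, sex), re-identifying the medical records. All names and values below are fictitious.

```python
# Illustrative linkage attack on invented data: neither dataset alone
# ties a name to a diagnosis, but their join does.

medical = [  # names removed before release
    {"dob": "1985-03-12", "zip": "101000", "sex": "F", "diagnosis": "diabetes"},
    {"dob": "1990-07-01", "zip": "105064", "sex": "M", "diagnosis": "asthma"},
]
public_register = [  # public list with names but no medical data
    {"name": "P. Ivanova", "dob": "1985-03-12", "zip": "101000", "sex": "F"},
    {"name": "I. Petrov", "dob": "1990-07-01", "zip": "105064", "sex": "M"},
]

def link(medical, register):
    """Join the two datasets on the shared quasi-identifiers."""
    key = lambda r: (r["dob"], r["zip"], r["sex"])
    names = {key(r): r["name"] for r in register}
    return {names[key(r)]: r["diagnosis"] for r in medical if key(r) in names}

reidentified = link(medical, public_register)
# The combination (date of birth, postal code, sex) is unique here,
# so every "anonymous" medical record is tied back to a name.
```

No machine learning is even needed in this two-record toy; in practice, learning-based record linkage does the same matching at scale and under noisy, partially overlapping identifiers, which is why anonymization alone is a weak guarantee.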
In addition, the monopoly position of large technology companies in the field of machine learning can create further problems, since it is technically possible to apply machine learning algorithms to analyze user behavior without users' consent. In general, the use of machine learning in contexts affecting privacy requires a comprehensive approach combining social regulation with technical protection measures; only in this way can the private life of citizens be reliably protected amid the universal digitalization of society. Legal guarantees of the confidentiality of a person's sensitive data are enshrined in Articles 23 and 24 of the Constitution of the Russian Federation. They establish the right to privacy and provide special rules facilitating its implementation, such as the secrecy of communication and the prohibition on collecting, storing, using and disseminating information about a person's private life without his consent. These constitutional provisions are quite sufficient to build an effective privacy protection system even under widespread machine learning. We have already witnessed the extension of the constitutional secrecy of communication to modern means of communication: e-mail, messengers and social networks. There is likewise no reason to doubt the effectiveness of the constitutional norms on the inadmissibility of processing information about a person's private life without his consent. With reasonable implementation of constitutional norms in legislation and stable judicial practice, the existing constitutional regulation will suffice to protect the privacy of citizens effectively. At the same time, it would be useful to supplement the measures of social regulation with technical standards. In these circumstances, the concept of "built-in privacy", which complements social regulation with technical standards, deserves attention.
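Two measures typically associated with built-in privacy, keyed pseudonymization of direct identifiers and retention-limited storage, can be sketched as follows. The key, field names and retention period are assumptions chosen for illustration, not requirements of any particular legal act.

```python
# Minimal sketch of two "privacy by design" measures:
# (1) replacing a direct identifier with a keyed, non-reversible token;
# (2) deleting records older than a fixed retention period.
import hashlib
import hmac
from datetime import date, timedelta

SECRET_KEY = b"rotate-me-regularly"   # hypothetical pseudonymization key
RETENTION = timedelta(days=365)       # hypothetical retention period

def pseudonymize(identifier: str) -> str:
    """Keyed hash: stable for linking records, useless without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def purge_stale(records, today):
    """Drop records whose collection date exceeds the retention period."""
    return [r for r in records if today - r["collected"] <= RETENTION]

records = [
    {"subject": pseudonymize("passport-4509 123456"), "collected": date(2021, 1, 10)},
    {"subject": pseudonymize("passport-4510 654321"), "collected": date(2023, 2, 1)},
]
kept = purge_stale(records, today=date(2023, 3, 1))  # the 2021 record is purged
```

The design point is that both measures operate before any legal dispute arises: the raw identifier never enters the database, and stale data cannot leak because it no longer exists, which is precisely the sense in which technical standards complement social regulation.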
It is most developed in the practice of the European Union and consists in applying technical and organizational measures aimed at minimizing the personal data collected, pseudonymizing and encrypting data, and deleting data that has lost its relevance. The concept of built-in privacy proceeds from the reasonable assumption that digital information, owing to its properties, easily escapes the control of those who process it; therefore, one should in principle refrain, where possible, from collecting information that is sensitive for people. Moral and ethical norms also play an important role in protecting the private life of citizens in countries with developed economies. For example, since the beginning of 2021, Facebook (recognized as an extremist organization in Russia) has stopped using the facial recognition system on its platform and has pledged to delete all associated user data. This was done in response to the growing risks of violating the confidentiality of personal data in the absence of appropriate legal guarantees. Since 2018, Microsoft, Amazon and IBM have suspended the provision of their facial recognition technologies to law enforcement agencies until legislation defining the permitted limits of their use is passed.
References
1. Russell S., Norvig P. Artificial Intelligence: A Modern Approach. 4th ed. Pearson, 2020. ISBN 9780134610993.
2. Digital Economy: Current Directions of Legal Regulation: scientific and practical guide / ed. by I. I. Kucherov, S. A. Sinitsyn. Moscow: Norma: ILCL, 2022. 376 p. DOI: 10.12737/1839690.
3. Gavrilova T.A., Khoroshevsky V.F. Knowledge Bases of Intelligent Systems. St. Petersburg: Piter, 2000. 384 p. ISBN 5-272-00071-4.
4. Preliminary National Standard of the Russian Federation PNST 553-2021. Information Technologies. Artificial Intelligence. Terms and Definitions. Moscow: FGBU RST, 2021.
5. Filipova I.A. Legal Regulation of Artificial Intelligence: textbook. 2nd ed. Nizhny Novgorod: Nizhny Novgorod State University, 2022.
6. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification // Proceedings of the 1st Conference on Fairness, Accountability and Transparency. PMLR 81:77-91, 2018. URL: https://bit.ly/3Fc8fTd.
7. Amazon scraps secret AI recruiting tool that showed bias against women. URL: https://bit.ly/3Bfw5MI.
8. An individual-level socioeconomic measure for assessing algorithmic bias in health care settings: A case for HOUSES index // Journal of the American Medical Informatics Association. DOI: 10.1101/2021.08.10.21261833. URL: https://bit.ly/3W3pI7o.
9. A Local Law to amend the administrative code of the city of New York, in relation to automated employment decision tools. URL: https://bit.ly/3iOkrlL.