
Philosophical Thought

Ethical Expertise of Artificial Intelligence Technologies in Subject-oriented Social Relationships

Malakhova Elena Vladimirovna

ORCID: 0000-0002-1829-8234

PhD in Philosophy

Doctoral Student of the Institute of Philosophy, Russian Academy of Sciences.

115409, Russia, Moscow, Goncharnaya str., 12, bldg. 1.

e.v.malahova@mail.ru

DOI: 10.25136/2409-8728.2022.10.39011

EDN: HSXQXS

Received: 23-10-2022

Published: 31-10-2022


Abstract: The ethical expertise of modern technologies, especially those that use artificial intelligence, can be based on an approach that focuses not on the technology itself, but on the subject as a human agent who may be the developer or user of this technology. In this article, we consider the concept of an ethical subject as an agent with the fundamental ability to make ethically significant choices. The peculiarity of such choices is that they are significant not only for society, but also for the ethical subjects themselves, who are always not only a means, but also a goal of moral actions. That is why only persons or groups of people can be ethical subjects, but not technology, even the most advanced: otherwise it would have to become a goal of ethical choices both for itself and for people. Ethical expertise of a technology capable of acting independently of a person is possible, as we believe, through its «training» to recognize the situations in which human-provided solutions to ethical dilemmas apply, and through evaluation of the results of such recognition. Part of such «training» may be worth entrusting not only to developers but also to users, bearing in mind that all these solutions must also be brought into compliance with local legislative norms.


Keywords: ethical expertise, technology, artificial intelligence, utilitarianism, consequentialism, deontology, virtue ethics, ethical subject, metasubject, social relationships

1. INTRODUCTION

Ethical expertise of technologies did not become one of the topical issues of the modern scientific agenda yesterday, but it did so recently enough that the pluralism of opinions characteristic of newly emerging fields of research still prevails, without yet giving way to one or a few generally accepted paradigms.

The relevance of this topic is evidenced, in particular, by the fact that by 2020 more than 80 documents setting out ethical standards for the use of artificial intelligence (hereinafter AI) had already appeared [1]. Since then, the number of such documents, adopted both by corporations and at the state level in a number of countries, has only continued to grow. The «Code of Ethics in the field of Artificial Intelligence» was adopted in Russia in 2021 [Code of Ethics in the field of Artificial Intelligence, 2021], shortly after a similar document was published in China [The Ethical Norms for the New Generation Artificial Intelligence, 2021]. UNESCO has also issued recommendations on these issues [Recommendation on the Ethics of Artificial Intelligence, 2022].

The rapid development of digital technologies and their implementation in a number of areas – industrial production, transport, education, healthcare, etc. – force us to pay attention to the development of norms regulating the creation and use of these technologies. These norms must be both legal and ethical, since not every possible form of activity can become the subject of legal regulation, while ethical regulation is potentially capable of covering all possible cases, though often at the cost of clarity and unambiguity of assessment [2, 3].

In other words, as soon as technology in some cases became capable of acting independently of a person, questions inevitably arose regarding the ethical assessment of these actions, which in itself is quite justified [4]. Other trends, however, cause concern: namely, the increasingly frequent talk of the «ethics of algorithms» or the «ethics of AI». Such formulations are often not merely a desire for brevity: those who use them frequently assume that machines, programs, and robots can act ethically or unethically by themselves and can produce solutions to ethical dilemmas [5]. Such an approach seems to us excessive for a number of reasons, which will be discussed later in this work.

We will also try to address the following rather important question: does every agent acting in society automatically become an ethical subject, and what qualities must an ethical subject possess in general?

All of the above serves the main goal of this work: to use a subject-oriented approach to create a methodological basis for the ethical expertise of modern technologies, especially those that use AI.

2. METHODOLOGY

In order to speak about an ethical subject and to ask whether a technology can be one, it is necessary to clarify some points that are basic for this study and on which such reasoning can in principle rest.

First, there is the concept of the subject itself. Second, the consideration of this subject's ability (or inability) to make ethical choices based not only on the social environment, but also on his/her/its own properties, capabilities and needs. Third, the peculiarities of the subject's attitude to the surrounding reality and the ability to make choices based on short-term, medium-term and long-term goals.

So, by the subject we mean, first of all, an agent or actor endowed with a number of specific properties, not all of which, as we will see, can in principle be attributed to technology, no matter how developed.

The concept of the subject is well illustrated by the approach of V.A. Petrovsky [6], who identifies four main characteristics of the subject (as a person or a group of people):

1) purposefulness as the ability to consciously and independently set goals. For us, this point is very important, as it allows us to clearly identify those who are able to be subjects in digital societies of both the present and the foreseeable future. Moreover, it can be assumed that this feature of the subject is in a certain sense cumulative and «collective», since it can be fully realized only if the three subsequent characteristics are present.

2) reflection as the ability of the subject to form an image of himself/herself. This feature is also fundamental to the possibility of constantly assessing the formed image through socially conditioned normative (including ethical) systems.

3) the free will of the subject, without which there can be neither independent conscious goal-setting, nor an assessment by the subjects of both themselves and the results achieved in the course of their activity.

4) the ability of the subjects to develop themselves, understood as ongoing adaptation to constantly changing internal and external conditions.

As for the first point, we certainly recognize that individuals and consolidated human collectives have the ability to set goals. Goal-setting is directly related to the subject's ability to form a holistic worldview, in which both the image of the subject and the image of the rest of the universe are present – and all these images are not merely known but, often more importantly, are evaluated in a certain way and, accordingly, ranked and arranged into certain orders, including possible/desirable/achievable goals.

The subject's capacity for free goal-setting, and the self-development directly related to it, is realized if the subject has a worldview as a system of views on himself/herself and the rest of reality, ranked in a certain way on the basis of such a fundamental factor as the subject's value system.

The subject is able to build relationships with the outside world depending on how he/she ranks goals, assessing, accordingly, the efforts and means of achieving them as more or less legitimate.

Here we consider on what basis subjects are able to make assessments, and how they can see themselves and their relationship with the objects of assessment, depending on the chosen system in which this interaction occurs. These systems, as we believe, can and even should be considered simultaneously from two positions: 1) from the point of view of the subjects and their intentions, which determine the possibility of a particular value relationship; 2) from a point of view external to the subjects, which allows us to determine their position in the emerging system of relations and the characteristics of the system itself. The first position makes it possible to use philosophical concepts from the field of ethics as a methodological basis, while the second allows us to turn to the paradigms of cybernetics as a science of managing complex systems [7].

In order to build a complete scheme that includes the range of such assessments, we propose to use existing cybernetic approaches, specifically third-order cybernetics [8]. This approach does not deny first- and second-order cybernetics but includes and complements them.

First-order cybernetics deals mainly with the influence of a subject (actor) on a certain object (that which undergoes the action), where the latter is not considered an actor equal to the subject, even if both have consciousness. The interaction in this case is unidirectional – from the subject to the object.

In second-order cybernetics, not only subject-object but also subject-subject interaction is considered: mutually directed interaction between actually or potentially equal participants, in which each needs to take into account not only its own goals but also the goals and probable interests of the other side.

In third-order cybernetics, in addition to the two types of interaction mentioned above, interaction of the subject-metasubject type is possible, where the second term can denote an unlimited number of individual and collective subjects, whose number and intentions can constantly change and are not always predictable.

3. ETHICAL EXPERTISE OF TECHNOLOGIES IN THE MIRROR OF A SUBJECT-ORIENTED APPROACH

3.1. Subject-object interaction

In this case, the subject uses technology to achieve a simple, clearly defined goal, which corresponds to first-order cybernetics: the subject works with technology instrumentally and evaluates it from the point of view of applied efficiency.

From the point of view of ethics, utilitarianism as a consequentialist approach [9] corresponds to this line of thought. The assessment is based on the compliance of the result with the stated goal. From the point of view of cybernetics, one simple connection is formed between the subject and the object. Other possible connections that appear in any social system may not be taken into account, which in the future may lead to the absence or destruction of accompanying parts of the system, represented by additional social consequences. This is why such a strategy is designed for the short term and can be effective only there. In the medium and long term, it reveals shortcomings, mainly because additional relationships have not been built and/or taken into account, which causes difficulties with embedding the result into a broader social context.

Utilitarian concepts in ethics evaluate an act as ethical or unethical depending on how well it corresponds to the stated goal and what consequences it leads to. Utilitarianism, perhaps one of the most influential ethical teachings, is often called a branch of consequentialism, which considers actions primarily from the point of view of their consequences. Such an approach is, in principle, intuitively understandable to everyone, regardless of acquaintance with particular philosophical concepts.

Classical utilitarianism was based on the concept of maximizing the good (or happiness, or pleasure) for the maximum number of people. With slight variations, this approach has been preserved in utilitarianism to this day, along with the inevitable and difficult-to-resolve issues that arose with it.

First, of course, there is the question of what exactly should be understood by the good, which is often identified with practical benefit – an identification that does not make the problem any easier to solve. In the absence of clear criteria for the good, happiness, usefulness and even pleasure, utilitarianism has often been criticized [10] for giving scope to arbitrary and voluntaristic interpretations of all of the above.

For all the positive features of this approach, legitimate questions also arise about how well we are able to assess consequences, especially long-term ones, and to what extent the ends can justify the means. In addition, if the goals change, the assessments of particular actions will inevitably have to change as well, which can lead to ethical relativism, in which potentially any action can be justified.

Thus, as we can see, utilitarianism experiences understandable problems precisely with the development of criteria for evaluating actions – a kind of «coordinate system» within which a particular solution can be evaluated in terms of its benefit or harm. Nevertheless, the utilitarian approach may be quite legitimate if we are talking about evaluating not all actions in general on the basis of some supposedly universal idea of utility, but certain specific actions considered through the prism of previously established norms and criteria. It would then probably even be possible to algorithmize such assessments, which may be important precisely for the technologies (such as AI) that interest us; a sketch of what this might look like is given below.
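
By way of illustration only – the criteria, weights, and threshold below are hypothetical and not drawn from any existing code of ethics or from this article – such an algorithmized assessment could amount to checking a specific action against a fixed, human-established set of weighted criteria:

```python
# A sketch of an algorithmized utilitarian assessment: a specific action is
# scored against criteria and weights fixed in advance by a human group.
# All names, weights, and the threshold are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance, established by the group beforehand

def assess(action_scores: dict[str, float],
           criteria: list[Criterion],
           threshold: float = 0.0) -> bool:
    """Weighted sum of per-criterion scores (each in [-1, 1], negative = harm);
    the action passes if the sum meets the pre-agreed threshold."""
    total = sum(c.weight * action_scores.get(c.name, 0.0) for c in criteria)
    return total >= threshold

criteria = [
    Criterion("benefit_to_users", 0.5),
    Criterion("harm_to_third_parties", 0.3),
    Criterion("compliance_with_stated_goal", 0.2),
]

# Scores would come from the evaluation procedure fixed in the instructions.
print(assess({"benefit_to_users": 0.8,
              "harm_to_third_parties": -0.2,
              "compliance_with_stated_goal": 1.0}, criteria))  # True
```

Such a scheme evaluates nothing universally: both the criteria and the threshold remain entirely on the side of the human subjects who established them.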

3.2. Subject-subject interaction

In this case, the subject (as an actor) uses technology to communicate with other subjects, takes their participation in the overall process of interaction with technology into account, and can assess the impact exerted on them both by the technology and by the forms of interaction. The technology is evaluated by the subject not only from the point of view of achieving the set short-term instrumental goals, but also from the point of view of its impact on the construction of additional social relationships and/or the preservation of existing ones.

From the point of view of ethics, this corresponds more closely to deontological approaches [11], which evaluate not so much the goals as the means of achieving them. At the same time, it is assumed that the goals have already been assessed as positive, and only then are the means evaluated. Therefore, the consequentialist and deontological approaches do not contradict each other in this scheme: from the standpoint of building value relationships, the second includes and complements the first.

From the point of view of second-order cybernetics, a more complex system of relationships is formed. Other subjects are added to the subject-technology pair, and for interaction with them the technology itself acts as a means; it is therefore expected to create new connections between subjects and not destroy existing ones. Technology begins to be evaluated not only instrumentally, but also in terms of its influence on subjects. This shift in emphasis is caused, among other things, by the massive spread of technology and the building of more complex chains of relationships between subjects with its help. This approach works quite well in the medium term.

In the deontological approach, the emphasis is placed on a certain set of unchangeable moral norms that must be followed (both common to all mankind and specific to individual groups and professions). This is the complete opposite of ethical relativism. Again, for all the need for such generally valid norms, problems arise both when the norms are too generalized and when they are formulated too narrowly and inflexibly.

Although the term «deontology» itself is attributed to Bentham, one of the key figures here was Kant, with his idea of the categorical imperative as a fundamental moral norm that should be followed for its own sake, regardless of pleasure or benefit.

The vagueness of the criteria for such maximally general norms within deontology as the categorical imperative or the golden rule of morality has, again, repeatedly drawn criticism for being detached from the very possibility of practical application. In addition, deontological constructions, as a rule, took into account neither intentions nor those «immediate» consequences that may arise from the direct application of certain norms.

However, as in the case of utilitarianism, these problems recede if we are talking not about duty in general, dictating that we act, as Kant put it, «according to a maxim that can itself become a universal law», but about particular areas that need clearly defined norms designed to limit, as far as possible, any relativistic interpretations. One of the best examples here is perhaps medical deontology [12].

Thus, the deontological approach, as we believe, may well be used to develop specific norms for the use of technologies (including AI) in areas where relativism is undesirable, provided that this approach does not claim universality and is deployed within an already existing stable structure of relations with predetermined values.

3.3. Interaction of the subject-metasubject type

In this case, the subject uses technology to interact with a potentially unlimited number of other diverse subjects (individuals, groups) on a planetary scale. At this stage, technology can not only be used as a means, but can also act on behalf of real, though not always known, subjects, imitating certain aspects of their activity. The technology is assessed by the subjects as affecting themselves as well as all other involved subjects, known and unknown (including potentially, in the future), whose number may also be unknown. Technology is assessed as one of the key factors influencing the construction and maintenance of diverse social relationships.

From the point of view of ethics, an approach such as virtue ethics can be used here, focusing primarily on the desired properties and qualities of the subjects – those that will allow them to best choose goals and means in situations of increased complexity and uncertainty.

From the point of view of third-order cybernetics, the most comprehensive, complex and constantly changing system of interrelations is formed here. It includes the two previously considered systems as lower-order subsystems. Since technology is able to form an unlimited and unpredictable number of connections for the subjects, it begins to be evaluated as influencing and even partially forming them; its role is therefore considered in light of the emergence in the subjects of those key qualities that will further contribute to their well-being, including successful interaction in this environment with other subjects and with the technology itself. In this case, the subject becomes a certain constant, the center of an ever-changing system of diverse relationships formed with the help of technology. This approach is aimed at the long-term perspective.

One of the most famous authors to develop virtue ethics was A. MacIntyre [13]. He believes that any attempt to substantiate moral norms in modern society is doomed to failure, if only because such attempts are undertaken in isolation from the social and, even more so, from the ontological context. As for the latter consideration, which seems almost impossible to modern consciousness, it is enough to recall Descartes with his metaphor of a tree, where branches originating from a single trunk and roots serve such areas as medicine, mechanics and ethics. Thus, as far back as the XVII century we see attempts to build a holistic picture of the world and, at the same time, of knowledge about the world – attempts now completely gone, leaving us both a gap between natural-scientific and humanitarian knowledge and the isolation of ethics from both. After all, as we have seen, attempts to justify ethics through social utility and the effectiveness of actions lead, at best, to utilitarianism and, at worst, to relativism, which sooner or later usually degenerates into ethical nihilism.

MacIntyre pays special attention to the ancient concept of virtue, which, in his opinion, was conceived as the ability to perform certain actions while understanding the essence, meaning and purpose of these actions. According to MacIntyre, we can turn to the legacy of Aristotle, who considers not norms (which are so difficult to justify) but virtues as the condition of moral behavior from both a personal and a social point of view. Virtues, in turn, can be realized and manifested only through practices, thereby forming a person's life narrative as a movement towards the maximum realization of the available opportunities for personal development. However, for an individual to acquire some primary ideas about these virtues in the course of education and training, there must be a moral tradition in which these ideas can be fixed and transmitted.

From this point of view, «good» technology in general and «good» AI in particular can be considered such if they are evaluated not only from the position of specific (often situational) benefit or compliance with strictly prescribed norms, but first of all from the position of how much they are able to help the personal and social development of individual and collective subjects – people and groups.

4. RESULTS AND CONCLUSION

A subject-oriented approach to the ethical expertise of modern technologies can help to identify the levels of such expertise.

If we are talking about short-term goals, then we can use a utilitarian assessment: actions are considered correct based on their compliance with the interests and goals of a specific group of people. Both the goals themselves and the criteria of compliance with them can and even should be clearly defined in the form of instructions. Criteria such as the interests of the group can be determined through legislative and other documents, and in this case that may be sufficient. The expertise can be carried out by developers at the testing stage; then, if there is negative feedback, the product is modified.

If we talk about the medium term, then in this case it is important not only to achieve goals, but also to preserve and maintain stable relations between subjects as actors. That is, the use of technology should, as in the previous case, comply with laws and regulations, but at the same time it should not damage either people's communication with each other or their ability to interact with the technology itself. A deontological approach can be used here, but the focus of ethical expertise can only be the decision of a subject – a person or a group of people – not of technology. If we once again recall the Kantian approach, a human should be seen not as a means, but as an ultimate goal of ethical behavior. The human here can act as an ethical subject capable of making ethically significant choices. Such choices can serve to «teach» a technology such as AI (for example, neural networks) so that its «actions» match the ideas of a real ethical subject about correct behavior. There is no need to resort to intuitionistic approaches here, simply giving the AI a mass of examples and, in effect, leaving it to the technology to draw the conclusion from them. This conclusion should be drawn by the ethical subject – a human – and provided to the AI in ready-made form, as sketched below.
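
A minimal sketch of this division of labor, assuming a standard supervised setup (the situation descriptions, labels, and model choice below are illustrative assumptions, not taken from the article): human conclusions enter the system only as ready-made labels, and the model merely learns to recognize which situations they apply to.

```python
# A sketch of the supervised setup described above: ethical conclusions are
# supplied by humans as ready-made labels, and the model only learns to
# recognize which situations those conclusions apply to. The examples and
# the choice of model are hypothetical illustrations.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Situation descriptions paired with human-made ethical conclusions (labels).
situations = [
    "pedestrian suddenly enters the roadway",
    "patient data requested by an unverified third party",
    "user asks the assistant to draft a threatening message",
]
human_conclusions = ["emergency_brake", "refuse_disclosure", "refuse_request"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(situations, human_conclusions)

# The system never infers the conclusion itself: it only maps a new situation
# onto one of the conclusions that humans provided in ready-made form.
print(model.predict(["a stranger requests another patient's records"]))
```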

A subject who is a technology user, and not necessarily a developer, may be able to independently «teach» the AI his/her own ideas about correct behavior and choices. At the same time, there must be a strict ban on AI behavior that violates legislative and other documented norms, and the AI must be «trained» in advance to recognize the situations in which those norms apply. So the ethical expertise of a technology capable of acting independently of a person is possible through its «training» to recognize the situations in which human-provided solutions to ethical dilemmas apply, and through the evaluation of the results of such recognition. We believe that an «ethical» technology is not one that solves ethical dilemmas, but one that recognizes the cases in which to apply the solutions that humans have considered the best. There is no need to try to create and apply some ethical theory that technology can use – even we as humans still argue about the very possibility of such theories. And we should not try to evade our responsibility for ethical decision-making by shifting it to technology. If we meet a case in which there are no «good» solutions, but only «bad and even worse» ones, as in the famous trolley problem [14], some solution should still be provided for the technology, and not by it. A minimal sketch of this two-layer scheme follows.
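
The following sketch condenses the scheme just described into two layers: a strict, non-negotiable ban derived from legislative norms, and a lookup of human-provided resolutions for recognized dilemma types. The action names, dilemma types, and resolutions are hypothetical placeholders, a sketch rather than an implementation:

```python
# A sketch of the two-layer scheme: (1) a strict ban derived from legislative
# norms that is never weighed or traded off, and (2) a lookup of ready-made
# human resolutions for recognized dilemma types. All entries are hypothetical.

PROHIBITED_ACTIONS = {"disclose_personal_data", "discriminate_by_ethnicity"}

HUMAN_SOLUTIONS = {
    # dilemma type -> resolution chosen in advance by the ethical subject
    "unavoidable_harm_choice": "apply_harm_minimization_policy",
    "privacy_vs_safety": "notify_operator_and_wait",
}

def decide(situation_type: str, proposed_action: str) -> str:
    # Layer 1: legal norms act as a hard constraint, checked first.
    if proposed_action in PROHIBITED_ACTIONS:
        return "blocked_by_law"
    # Layer 2: apply the human-provided solution if the situation is recognized.
    if situation_type in HUMAN_SOLUTIONS:
        return HUMAN_SOLUTIONS[situation_type]
    # Unrecognized case: defer to the human subject, never improvise.
    return "defer_to_human"

print(decide("privacy_vs_safety", "reroute_vehicle"))  # notify_operator_and_wait
print(decide("novel_dilemma", "reroute_vehicle"))      # defer_to_human
```

On this reading, the «ethics» of the system lies entirely in the human-maintained tables and in the quality of the recognition step – both of which remain open to ethical expertise in the sense developed above.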

In the long term, ethical expertise of technologies capable of acting autonomously is hardly possible, yet it is from long-term goals that medium- and short-term ones follow. For such a long-term perspective, ethical codes in the field of AI, for example, can work – so far they have been created not for AI itself, but to direct the actions of developers and users. Ethical subjects build their goals in the long term without always knowing with whom exactly and in what situations they will interact; that is, this is a subject-metasubject type of interaction, in which the focus of attention inevitably shifts from the goals and means of interaction to the subjects themselves. Ethical expertise here can only be carried out by subjects on the basis of their own ideas about the desired vectors of their development. That is why the approach from the standpoint of virtue ethics seems the most productive here. It evaluates not technologies, but how human subjects would like to see themselves when, in order to achieve these goals of their own development and improvement, they choose the direction of technology development as a means.

Thus, raising the question of the ethics of technology (such as AI) itself is, in our opinion, premature, and not only for purely technical reasons. The ethics of AI, for example, is often understood as its ability to autonomously assess certain facts of reality [15], which raises the question of how to «teach» AI to do this – through the introduction of ready-made normative systems or through the gradual assimilation of large arrays of diverse information, by analogy with the upbringing and education of a person, as P. Railton [16] writes. Nevertheless, ethical intuitionism [17], on which authors who suggest «training» AI in the same way as one teaches and educates a person often rely, unfortunately provides no clear answers to the fundamental questions: on what basis do we conclude who is capable of acting as an ethical subject, and why would such a subject make ethically meaningful choices freely and consciously?

An approach based on aretology could probably be more productive, since it considers humans as ethical subjects par excellence, who choose what they consider virtuous not only for the sake of an external goal, but also for the sake of self-improvement. Such an approach necessarily presupposes self-awareness and reflection in these ethical subjects, as well as their recognition not only as a means, but also as a goal of ethical choices.

Human individuals, as well as highly organized stable social groups, can undoubtedly possess such properties of an ethical subject, while any technology, including AI, is deprived of them – and perhaps not only now, but also in the future, because otherwise it would have to acquire not only self-awareness, but also the status of a goal of ethical choices, both for itself and for the other actors [18]. Such a possibility seems very doubtful.

However, well-developed technology, even weak AI, even if it cannot become an ethical subject, is nevertheless able to make assessments based on existing social norms, regardless of how it received them. This, firstly, puts it in direct dependence on specific normative systems, ethical and legislative; secondly, it suggests that this technology needs to be «taught» to recognize the situations in which these systems apply. Ethical subjects (persons or groups of people) retain the prerogative of evaluating the norms themselves [19, 20], since the subjects' capacity for improvement depends on it, while technology, which is always a service and a means rather than an end, is not obliged to have such a capacity and should not have it.

References
1. Jobin, A., Ienca, M., Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389–399. doi:10.1038/s42256-019-0088-2.
2. Gonzalez, W. (2015). On the Role of Values in the Configuration of Technology: From Axiology to Ethics. In: Gonzalez, W. (ed.), New Perspectives on Technology, Values, and Ethics, Boston Studies in the Philosophy and History of Science, Vol. 315. Cham: Springer, pp. 3–27. doi:10.1007/978-3-319-21870-0_1.
3. Kroes, P., Meijers, A. W. M. (2016). Toward an Axiological Turn in the Philosophy of Technology. In: Franssen, M., Vermaas, P., Kroes, P., Meijers, A. (eds.), Philosophy of Technology after the Empirical Turn, Philosophy of Engineering and Technology, Vol. 23. Cham: Springer. doi:10.1007/978-3-319-33717-3_2.
4. Malakhova, E. V. (2022). The Axiology of Technology – on the Way to the Human Dimension of Complex Technical Systems. Voprosy Filosofii, No. 10, pp. 218–222. (In Russian)
5. Tsamados, A., Aggarwal, N., Cowls, J., et al. (2022). The ethics of algorithms: key problems and solutions. AI & Society 37: 215–230. doi:10.1007/s00146-021-01154-8.
6. Petrovsky, V. A. (2008). Individuality, self-regulation, harmony. Moscow Psychotherapeutic Journal, No. 1, pp. 64–90. (In Russian)
7. Espejo, R., Lepskiy, V. (2021). An agenda for ontological cybernetics and social responsibility. Kybernetes 50(3): 694–710.
8. Lepskiy, V. E. (2019). Challenges of the future and third-order cybernetics. In: Designing the Future. Problems of Digital Reality: Proceedings of the 2nd International Conference (February 7–8, 2019, Moscow). Moscow: Keldysh Institute of Applied Mathematics, pp. 64–70. (In Russian)
9. Sinnott-Armstrong, W. (2021). Consequentialism. The Stanford Encyclopedia of Philosophy (Fall 2021 Edition). URL: https://plato.stanford.edu/archives/fall2021/entries/consequentialism.
10. Briggs, R. A. (2019). Normative Theories of Rational Choice: Expected Utility. The Stanford Encyclopedia of Philosophy (Fall 2019 Edition). URL: https://plato.stanford.edu/archives/fall2019/entries/rationality-normative-utility.
11. Waller, B. N. (2005). Consider Ethics: Theory, Readings, and Contemporary Issues. New York: Pearson Longman.
12. Barrow, J. M., Khandhar, P. B. (2021). Deontology. StatPearls Publishing.
13. MacIntyre, A. (1985). After Virtue. 2nd ed. London: Duckworth.
14. Thomson, J. J. (1985). The Trolley Problem. Yale Law Journal 94(6): 1395–1415. doi:10.2307/796133.
15. Leben, D. (2018). Ethics for Robots: How to Design a Moral Algorithm. Abingdon, Oxon; New York, NY: Routledge.
16. Railton, P. (2020). Ethical Learning, Natural and Artificial. In: Liao, S. M. (ed.), Ethics of Artificial Intelligence. New York: Oxford University Press. doi:10.1093/oso/9780190905033.003.0002.
17. Artemyeva, O. V. (2010). Intuitionism in Ethics (from the history of English ethical intellectualism of Modern Times). Ethical Thought, Issue 10. Moscow: IPhRAS, pp. 90–113. (In Russian)
18. Kingwell, M. (2020). Are Sentient AIs Persons? In: Dubber, M. D., Pasquale, F., Das, S. (eds.), The Oxford Handbook of Ethics of AI. Oxford University Press.
19. Kagan, M. S. (1997). Philosophical Theory of Value. St. Petersburg: Petropolis. (In Russian)
20. Shokhin, V. K. (2006). Philosophy of Values and Early Axiological Thought. Moscow: RUDN University Publishing House. (In Russian)

Peer Review


The reviewed article is devoted to the extremely relevant socio-philosophical problem of the expertise of artificial intelligence technologies, which has arisen before mankind in connection with the rapid development of science. The need to address the ethical component of regulating the use of technology is due, in the author's opinion, to the fact that not all special cases are amenable to legal regulation, whereas moral consciousness, initially defining universal ethical norms as specific in itself, is able to indicate to the research community (and society as a whole) the leitmotif of the attitude to the problems arising in connection with the development of artificial intelligence technologies. (The author, however, supports the point of view of researchers who argue that the "payback" for such universalism of ethical regulation in comparison with legal regulation is the rejection of clarity and unambiguity of assessments, which, in the opinion of the reviewer, is not entirely accurate, since it should rather be about the inevitability of the subject's assuming responsibility for the decisions made, which is by no means equivalent to "ambiguity".) The stage of technology development at which the practice of ethical evaluation of the operation of technical devices is really in demand presupposes that they achieve the ability to "perform actions independently of a person." However, the achievement of this stage of the "subjectivity of technology" implies the transfer of moral responsibility to its creator, who lays down a certain "algorithm" of its "behavior" (however widely this concept is construed here). Awareness of the social significance of the problem of the emergence of new (quasi)subjects encourages the author of the article to set the task of developing "a methodological basis for the ethical examination of modern technologies, especially those based on the use of artificial intelligence." Thus, the author approaches the description of various options for understanding the role of the subject of activity, distinguishing among them "subject-object interaction", "subject-subject interaction" and "subject-metasubject interaction", the latter meaning interaction with "a potentially unlimited number of other diverse subjects (individuals, groups) on a planetary scale." In this latter case, technology acts not only as a means but also "acts on behalf of real, but not always well-known actors, imitating certain aspects of their activities" and influencing "the construction and maintenance of diverse social relations" on a global scale (since even the boundaries established for the dissemination of technology are inevitably removed). In the final part of the article, the author points out the connection between the approach being developed for a new problem in the field of ethics and "classical" ethics – the categorical imperative, according to one formulation of which a person should be considered not only as a means but always also as the goal of an action, if that action claims to be evaluated as morally significant (and not limited to utilitarian motives). In this regard, the author speaks about the "training" of artificial intelligence as the task of the person creating it.
At the same time, of course, responsibility remains on the side of the person, the "teacher": "the ethical examination of technology capable of acting independently of a person is possible thanks to its "training" to recognize situations of applying those solutions to ethical dilemmas that are provided by people, and evaluating the results of such recognition." Technology as such, the author concludes, is designed not to solve "ethical dilemmas" but "to recognize cases of application of solutions that people consider the best." Finally, the author justifiably, in our opinion, argues that such an approach is applicable only in the case of "medium-term" prospects: since it is impossible to foresee the long-term consequences of developing equipment and technologies, technology must remain "under the supervision of a person", its creator and "teacher". From the point of view of the approach developed by the author, it is not technologies that are subject to ethical assessment, but how a person would like to see himself while developing and improving certain technologies as a means of achieving the goals of properly human development. Both the analysis of the problem itself and the conclusions drawn from it seem justified and well-founded from a theoretical point of view. The analysis is undertaken on the basis of a wide range of sources by both domestic and foreign authors. The presentation is consistent, and the author expresses a clear position. The only remark (which, however, relates not to the topic directly considered in the article, but to a "parallel" one) concerns the understanding of the above-mentioned relationship between ethics and law, since the position taken by the author, according to which they differ in the breadth of their scope, is far from the only one in the history of philosophy and ethics. I am convinced that the article may be of interest to a wide range of readers, and I recommend publishing it in a scientific journal.