
Philosophy and Culture
Prospects for Overcoming the Contradictions of the Development of Artificial Intelligence

Gluzdov Dmitry Viktorovich

ORCID: 0000-0001-7043-5139

Postgraduate Student, Department of Philosophy and Social Sciences, Nizhny Novgorod State Pedagogical University named after Kozma Minin

603950, Russia, Nizhny Novgorod, Ulyanova str., 1

dmitry.gluzdov@mail.ru
DOI: 10.7256/2454-0757.2023.4.40417

EDN: TFBVZU

Received: 10-04-2023

Published: 01-05-2023


Abstract: The subject of this study is a set of presumed contradictions in the development of artificial intelligence, examined with a view to overcoming them. Philosophical anthropology has the potential to analyze the complex interactions between artificial intelligence and human beings and to articulate the problems that arise between them. The philosophical-anthropological analysis of artificial intelligence aims at understanding this phenomenon in relation to the human being, human presence, and human experience. The article attempts to identify and outline trajectories for the possible resolution of contradictions in the development of artificial intelligence, and analyzes the means that philosophical anthropology offers for addressing these problems in the contemporary socio-cultural situation. The relevance of the problem lies in the fact that artificial intelligence qualitatively changes human existence, while philosophical anthropology has the potential to outline prospects for the development of artificial intelligence that preserve what is genuinely human in the human being. Studying this perspective is of great importance for society. The novelty of the article lies in its analysis of the problems of artificial intelligence from the standpoint of philosophical anthropology, in its articulation of the contradictions in the development of artificial intelligence, and in its search for ways to overcome these contradictions. Philosophical anthropology provides tools for balanced decisions about the development of artificial intelligence and the degree of its impact on human existence.


Keywords:

philosophical anthropology, human, artificial intelligence, technology, contradictions, consciousness, free will, dialogue, ethics, interdisciplinary cooperation

This article is automatically translated.

 

Introduction

Artificial intelligence (hereinafter AI) is rapidly changing many spheres of human life: from healthcare and education to transport and entertainment.

Although its potential benefits are enormous, there are also significant risks and challenges associated with its design, development, implementation, use, and impact on individuals and society. These challenges range from technical and ethical issues to social, political, economic, and environmental ones, and addressing them requires a comprehensive, interdisciplinary approach. Philosophical anthropology, as the branch of philosophy that seeks to understand human nature and the foundations of human existence, is well placed to analyze the complex interactions between artificial intelligence and human beings. The research presented in this article examines the prospects for overcoming contradictions in the development of AI from a philosophical-anthropological point of view, by analyzing the key concepts, challenges, and opportunities that arise in this context.

The subject of this research, indicated in the title of the article, is a set of potential contradictions in the development of artificial intelligence, considered with a view to overcoming them. We recognize a certain vagueness in the content of the term "artificial intelligence"; to delineate its boundaries more precisely, we take it here to cover any systems, together with their functionality, that provide tools and technologies for imitating the human being and the intellectual tasks humans perform, tasks that have always been perceived as abilities inherent only in humans. From this point of view, the term "artificial intelligence" in the sense of "strong" AI is by no means equivalent to "artificial intelligence" in the sense of "weak" AI.

Over the past two decades, the focus of research has shifted markedly. Earlier, most researchers saw the problem of AI in the question of whether an artificial analogue of natural (human) intelligence could be created "in its image and likeness". Today the problem of artificial intelligence is broader, comprising a wide variety of problems and tasks related not only to intelligence and the modeling of the psyche and thinking, but also to object recognition, production automation, and individual intellectual tasks (for example, proving theorems or extracting the content of texts). Philosophical research in this field mainly concerns the philosophical foundations of artificial intelligence, questions of the interaction between human and artificial intelligence, and the assessment, unfortunately mostly of the negative, consequences of the influence of AI on humans and society. Questions of overcoming the contradictory character of this influence have been identified, but not yet investigated with a view to proposing solutions. For this reason we believe that such studies are only at the beginning of their journey, and that the situation requires formulating the problem, clarifying it, posing additional questions, and searching for and proposing solutions. The task considered here is to identify the shortcomings of the existing situation, to set a target on the basis of the philosophical-anthropological analysis carried out earlier, and to assess possible prospects.
Approaching the study from these positions, we found no materials among the sources considered that treat the problem comprehensively, combining the identification of the causes and grounds of these contradictions with their analysis from the standpoint of philosophical anthropology; it is this involvement of philosophical anthropology that determines the novelty of the study.

Methodologically, the hypothetico-deductive and dialectical methods are the main ones in this work. As is traditional when a researcher studies the texts of scientific sources, the hermeneutic method is also used; together with the phenomenological approach, it is probably the most frequently employed method in philosophical research.

Philosophical anthropology and artificial intelligence

One of the contemporary issues of philosophical anthropology is the relationship between humans and machines or technologies, and this is especially relevant to AI systems.

As the capabilities of artificial intelligence grow ever more refined, questions about the nature of human intelligence and consciousness, and about how natural intelligence relates to artificial intelligence, are increasingly on the agenda [1, 2, 3].

Philosophers debate the nature of the human mind and whether machines can reproduce human intelligence, consciousness, and emotions [3, pp. 19-20] [4, p. 33]. Some argue that humans have unique qualities, such as creativity, empathy, and moral reasoning, that machines cannot reproduce; others counter that these qualities are not so unique to humans, being found, for example, in other animals, and can be modeled by AI.

Philosophical anthropology raises ethical questions about the use of AI in society and its potential impact on human life [5, 6]. Relevant ethical issues include, for example, the responsibility of AI for its actions, the impact of automation on employment, and the capacity of AI to exacerbate social inequality and reinforce prejudices and misconceptions.

Taking the human being as its basis, philosophical anthropology can offer a number of concepts, ideas, and criteria relevant to the prospects of overcoming the contradictions of AI development. Among them, the following stand out:

· Human nature: one of the key philosophical concepts relevant to the development of AI. It refers to the innate qualities and characteristics that define what it means to be human. As AI evolves, questions arise about whether machines can reproduce, or even surpass, human nature. For example, can machines have emotions or intuition, which are traditionally associated with human nature? Consequently, the question arises of the limits of machine intelligence and of the extent to which AI can replace human abilities and competencies.

· Intelligence: a philosophical concept central to the development of AI, meaning the ability to acquire and apply knowledge and skills. Artificial intelligence systems were initially designed to simulate human intelligence, but this raises questions about the nature of machine intelligence. For example, do machines really understand the meaning of words, or are they merely processing patterns? Can machines develop genuine creativity, or are they limited to reproducing existing patterns? Here, too, the criterion, or model, is the human being.

· Consciousness: another key philosophical concept directly related to the development of artificial intelligence. Consciousness refers to the subjective experience of being aware of one's surroundings, thoughts, and feelings. The development of AI raises the question of whether machines can be conscious. Some argue that consciousness is a product of the brain and that machines therefore cannot be conscious; others suggest that consciousness may arise from complex computational processes and that machines may someday acquire it. We cannot conceive of a non-human consciousness, and here too, apparently, only the human being and human consciousness can serve as our model.

· Autonomy: the ability to act independently and make decisions without external influence. Its presence in an object or phenomenon indicates "autonomy: immanent laws of one's own existence and development" [7]. As AI systems become more sophisticated, they may become more autonomous, which raises questions about the human role in decision-making [8]. For example, who is responsible if an autonomous AI system makes a decision with negative consequences?

· Ethics and values: philosophical concepts that should apparently be central to any analysis of AI development. For example, how do we ensure that AI systems are designed, developed, and used in compliance with ethical standards and in accordance with human values? How can we prevent the misuse of AI systems for malicious purposes?

· Moral responsibility: the idea that individuals and organizations should be held accountable for the consequences of their actions. In the context of AI, this means that those who develop and apply AI systems should answer for the ethical consequences of their technology, including its impact on employment, possible bias and discrimination, and misuse [9].

· Transparency for human understanding: whenever AI systems are developed and applied, the decisions the system makes must be interpretable and understandable to humans, so that interested parties can assess the ethical consequences of those decisions.

Each of these points is a source of contradictions in the development of artificial intelligence, and each is most directly tied to the human being. Within each of them it is possible to anticipate contradictions between artificial intelligence and humans and to look for approaches to resolving them, which means that philosophical anthropology should be an active participant in this work.

Paradoxically, one of the main problems of philosophical anthropology is defining what it means to be human. This question has long been debated, with different schools of thought emphasizing different aspects of human nature, such as rationality, emotions, embodiment, and social relationships. Defining what it means to be human is essential for the development of AI, because it can help ensure that AI systems are designed in a way consistent with human values.

To address the identified and potential problems of AI with the tools of philosophical anthropology, it is necessary to integrate ideas from different fields of knowledge, approaches, positions, and points of view, including technology, engineering, law, and the social sciences. This requires cooperation across the disciplinary and sectoral boundaries of individual sciences, which can be difficult to achieve. Yet despite the difficulties of interdisciplinary interaction, involving philosophical anthropology opens up two-way opportunities, including the possibility, if not of rethinking, then at least of developing and expanding our understanding of the human being's place in the world and relationship with technology. Reflecting on human nature and the human place in the world, the philosopher can (indeed should!) participate in working out questions of AI system development, so as to project into those systems a more careful attitude toward ourselves and our environment and a preference for human well-being, thus preventing the pursuit of narrow economic or technological interests alone.

Literature review

Without going beyond the boundaries outlined by philosophical anthropology, and remaining within the question of intelligence as posed by the philosophy of mind, we may conclude that the question of the nature of human consciousness has arisen constantly in European philosophy.

In the early modern period, questions were actively raised both about character and about thinking itself. At that time natural human intelligence was compared with a mechanism, with a machine, which is not surprising: in an era when the explosive development of mechanics as a science left its mark on thinking, intelligence was compared with a clock, the most developed and best-studied mechanism of the time. Similar comparisons can be found in various early modern philosophers, above all Rene Descartes, Gottfried Leibniz, and Julien Offray de La Mettrie.

Turning to the modern formulation of the problem of consciousness, it should be noted that such Western philosophers as K.-O. Apel, D. Armstrong, S. Blackmore, D. Dennett, W. James, S. Kripke, W. V. O. Quine, C. McGinn, T. Nagel, J. Austin, D. Papineau, J. Passmore, S. Pinker, C. S. Peirce, U. T. Place, K. Popper, G. Ryle, B. Russell, J. Searle, J. J. C. Smart, P. Strawson, J. Fodor, G. Frege, J. Habermas, N. Chomsky, D. Chalmers, J. Eccles, and A. Elitzur have dealt with this issue. Similar questions, concerning primarily natural intelligence and consciousness, have been and are being studied by such Russian philosophers as A. Y. Alekseev, V. V. Vasiliev, D. B. Volkov, I. G. Gasparov, V. V. Gorbatov, D. I. Dubrovsky, A. M. Ivanitsky, D. V. Ivanov, V. A. Lektorsky, S. F. Nagumanova, Yu. V. Orpheev, V. I. Samokhvalova, A. G. Spirkin, V. S. Tyukhtin, T. V. Chernigovskaya, B. G. Yudin, and N. S. Yulina.

The problems of artificial intelligence are reflected in the works of many foreign scientists, including M. Arbib, J. Weizenbaum, S. Dreyfus, H. Dreyfus, J. McKinsey, H. Putnam, R. Penrose, B. Rosenblum, A. Turing, and R. Schank. Russian scientists and philosophers have not bypassed this topic either, among them A. P. Alekseev, A. Y. Alekseev, I. Y. Alekseeva, V. V. Vasiliev, D. B. Volkov, D. I. Dubrovsky, A. F. Zotov, V. A. Lektorsky, A. P. Ogurtsov, Yu. V. Orpheev, V. I. Samokhvalova, N. M. Smirnova, A. G. Spirkin, V. S. Tyukhtin, and N. S. Yulina.

The contradictions of artificial intelligence have been directly considered, in one form or another, by N. Wiener, N. Bostrom, D. Dennett, H. Dreyfus, P. Norvig, S. Russell, and J. Searle. Among Russian scientists and researchers one can single out A. Y. Alekseev, D. I. Dubrovsky, E. V. Ilyenkov, V. A. Kutyrev, V. A. Lektorsky, and Yu. Yu. Petrunin. Logical problems of artificial intelligence are considered in the works of I. Y. Alekseeva and S. L. Katrechko. A fairly broad range of informatization issues, of which artificial intelligence is only a part, is considered in the works of J. Weizenbaum, N. Wiener, V. A. Zvegintsev, K. A. Zuev, G. L. Smolyan, and A. I. Rakitov.

The study of sources gives reason to believe that the scientific literature offers no complete and integral picture of the issue raised in this article, the prospects for overcoming the contradictions of the development of artificial intelligence. The problem formulated here can therefore be judged insufficiently elaborated. Researchers reveal individual aspects of the problem or attend to individual contradictions, developing, generalizing, and analyzing them either from the position of society or with insufficient focus on the human problem, most often sharpening one side of the contradictions, for example the negative one. It may also be noted that authors writing on this topic do not always share a unified terminology: the concepts of artificial intelligence, of systems with elements of artificial intelligence, and of artificial mind all circulate in the philosophical discourse.

Contradictions in the development of artificial intelligence

The development of artificial intelligence is characterized by a number of contradictions arising from the interaction of the various factors and actors involved in this process.

These contradictions can be detected and analyzed through reviews of the literature and empirical data from fields such as computer science, economics, sociology, ethics, and politics.

One of the main contradictions is the potential loss of human free will. As AI systems become more sophisticated, they may begin to make decisions and act independently, without human involvement. The contradiction lies in the fact that even when focused on ensuring human well-being or health, the use of artificial intelligence can produce negative results [10, p. 119].

Examples include the contradiction between efficiency and creativity, between predictability and adaptability, between confidentiality and security, between human control and machine autonomy, economic contradictions and contradictions between global cooperation and national competition, as well as technical and technological contradictions [11, 9, 12, 13, 14]. Their identification, enumeration, and classification could fill a separate article, and we will not delve into that topic here.

Prospects for overcoming contradictions in the development of artificial intelligence

Martin Heidegger's answer to the question "Is technology the main danger for the human being?" [15, p. 149] contains both the contradiction at the heart of our topic and the answer to the possibility of overcoming it.

He answered with the words of Friedrich Hölderlin: "Where there is danger, there also grows what saves."

Fascination with technology and avoidance of the need to understand human nature lead to people eventually being regarded as instruments of production or objects of consumption. This distorts human values and produces a crisis in culture. Artificial intelligence, too, can become the tool that leads people into a world where they become mere objects of consumption and where human values are distorted. This is why philosophical anthropology should set the starting points for the development of the specific sciences. Eliminating contradictions in this development with the participation of philosophical anthropology involves applying the conclusions and views of its field of research to the discussion of AI. We propose several approaches by which philosophical anthropology can help overcome possible contradictions:

· The study of human nature. Involving scientists of the applied sciences, through philosophical anthropology, in the study of human nature (or, more precisely, in the search for an understanding of it) can help orient AI toward the human being and thereby influence its development. Moreover, by grasping the uniqueness and peculiarities of the human being, we can identify potential areas of contradiction at any stage of the development of AI systems, allowing us to form in advance a list of problems that need to be addressed.

· Guidance on ethical norms and consequences. Philosophical anthropology can help identify the potential ethical consequences of the development of AI systems. By applying existing ethical principles and values, we can also identify potential areas of contradiction in the development of AI and work to mitigate or eliminate them.

· Working with the cultural context. Philosophical anthropology can determine the social and cultural context in which AI systems are developed and how this context should affect their design. By being aware of social and cultural factors, we can likewise identify potential areas of contradiction and work to level them out. By studying the fundamental principles underlying existing values, philosophers can gain a deeper understanding of what matters to people of different cultures: for example, some cultures place great weight on family and community, as opposed to those in which individual autonomy is highly valued. This can help identify common ground and areas of potential agreement even in the face of seemingly irreconcilable differences.

· Orientation toward human well-being. This approach focuses on the positive impact of AI on human well-being. Artificial intelligence systems can be used to improve healthcare, education, and other areas of human life, creating new opportunities for human flourishing and helping us address pressing social problems.

· Orientation toward the human being, human experience, and human values. This requires promoting requirements not only with respect to existing norms accepted as mandatory, but also adapting the design, interface, and modes of AI interaction. Such interaction should be geared to the convenience of human perception and to human-machine cooperation on human-favoring terms: the machine is not a participant in the relationship but a tool. This means focusing on human preferences, emotions, experiences, and values (empathy, compassion, justice) without creating new, futuristic, untested approaches. It will orient the development of AI systems toward better responsiveness to the preferences and experience of their audience, which better motivates cooperation and joint creativity between humans and artificial intelligence.

· Strengthening human control from the point of view of human values. Control must be built into the design of artificial intelligence systems so that the possibility of control and the human right to make final decisions is preserved in principle at every stage: not only during development and implementation but also during operation.

· Focusing on the ultimate goal of AI development. Philosophical anthropology can help determine the ultimate goal of technology development, which can also serve as a guideline. By comparing human goals and values with the goals and values that guide the development of AI, we can promptly identify potential areas of contradiction.

The importance of interdisciplinary collaboration

Contradictions arising from the development of artificial intelligence cannot be resolved by any single science alone (unless they are purely technical in nature).

An integrated approach is required, combining technical, social, economic, and political measures with ethical and cultural ideas and models. To eliminate the identified contradictions, some of the solutions and strategies characteristic of philosophical anthropology mentioned above can be applied (other contradictions and ways of eliminating them, such as establishing global standards and norms of governance or technical, political, and economic measures, lie outside the purpose of this article). Taking into account the approaches proposed above, philosophical anthropology can play a key role in overcoming the contradictions of AI under consideration: it provides tools for informed decision-making about the development of AI and the degree of its interference in human existence.

The development of artificial intelligence is inherently interdisciplinary, and the ongoing involvement of philosophy, ethics, and the social sciences only strengthens this thesis. Moreover, all of the above confirms that such cooperation is fundamentally necessary, since only the human- and society-oriented sciences can provide the required benchmark for development, something no engineering science can supply. Here philosophy, with ethics in its arsenal, can play a key role. Ethicists can help identify and solve the ethical problems of AI and ensure that AI systems are developed and used in accordance with our ethical and social values. They can also help raise public awareness of, and involve the public in, the ethical implications of developing and implementing artificial intelligence in our lives.

Social sciences also play an important role in understanding the social and political aspects of AI development. Sociologists, anthropologists, and political scientists can help study the social and cultural implications of AI development.

Limitations of the proposed approaches

Although the proposed approaches can eliminate the identified contradictions and provide a more harmonious, human-oriented approach to the development of AI, they still face a number of limitations.

These include the complexity of practical implementation, potential conflicts between different solutions, uncertainty about the long-term consequences of the use of AI, and the need for constant assessment, adaptation, and improvement. Moreover, the proposed options may simply not suffice to resolve all the contradictions and problems, and they may need to be supplemented with other approaches and perspectives.

To improve our understanding of philosophical anthropology and its applicability to AI issues, several areas of research remain, including within the framework of the cooperation proposed above. These include the following:

· further study of the consequences of AI development for the nature of consciousness, agency, responsibility, and identity;

· exploration of the ethical and social implications of AI in specific areas such as health, education, employment, and security;

· integration of the various points of view and disciplines involved, such as computer science, neuroscience, psychology, sociology, philosophy, and cultural studies;

· interaction with various stakeholders and the public, for example politicians, industry leaders, civil society organizations, and citizens.

Conclusion

In our opinion, philosophical anthropology has powerful tools for identifying, evaluating, and overcoming contradictions and resolving conflicts.

By providing a deep understanding of what it means to be human, and by formulating criteria against which outcomes can be evaluated, philosophers can help bridge the gap between different cultures and societies and develop more effective solutions to problems.

We have investigated the prospects for overcoming contradictions in the development of artificial intelligence from a philosophical-anthropological point of view and identified the basic approaches that may be used to address them. Our analysis highlights the need for a more holistic, interdisciplinary, values-based approach to AI development, one that combines technical, social, economic, political, and ethical measures and prioritizes human-centered, responsible innovation. We argue that philosophical anthropology can make a valuable contribution to this approach by providing critical reflection, normative guidance, and interdisciplinary collaboration.

References
1. Lektorsky, V. A. (2017). Philosophy, artificial intelligence, cognitive research. In Proceedings of the All-Russian Interdisciplinary Conference dedicated to the sixtieth anniversary of artificial intelligence research (Moscow, March 17-18, 2016), pp. 87-94. Moscow: IIntell.
2. Rozin, V. M. (2023). Two concepts of artificial intelligence: realistic and utopian. Philosophical Thought, 2, 102-114. doi:10.25136/2409-8728.2023.2.39739 URL: https://nbpublish.com/library_read_article.php?id=39739
3. Gluzdov, D. V. (2022). Philosophical and anthropological foundations of the interaction of artificial and natural intelligence. Bulletin of the Minin University, 10(4), 15. doi:10.26795/2307-1281-2022-10-4-15
4. Yakovleva E. V., & Isakova N. V. (2021). Artificial intelligence as a modern philosophical problem: an analytical review. Humanitarian and social sciences, 6, 30-35. doi:10.18522/2070-1403-2021-89-6-30-35
5. Roche C., Wall P. J., & Lewis D. (2022). Ethics and diversity in artificial intelligence policies, strategies. AI and Ethics. Springer Nature. doi:10.1007/s43681-022-00218-9
6. Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1 (November), 501-507. doi:10.1038/s42256-019-0114-4
7. Mozheiko, M. A., & Sudakov, A. K. (2022). Autonomy. Humanitarian Portal. Retrieved from https://gtmarket.ru/concepts/7323
8. Lektorsky, V. A. (2021). On the autonomy of artificial intelligence, freedom of choice and responsibility. Scientific Russia. Retrieved from https://scientificrussia.ru/articles/akademik-ran-valektorskij-ob-iskusstvennom-razume-i-novyh-ugrozah
9. Preparing for the Future of Artificial Intelligence (2016). Washington, DC: Executive Office of the President, National Science and Technology Council, Committee on Technology. Retrieved from https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
10. Gluzdov, D. V. (2021). The problem of human freedom and identity in the context of the development of artificial intelligence. Revolution and evolution: models of development in science, culture, society: Proceedings of the III All-Russian Scientific Conference. Under the general editorship of A.M. Feigelman. [Electronic resource]. Moscow: Publishing House "Russian Society for the History and Philosophy of Science", pp. 117-120. Retrieved from http://www.rshps.ru/books/rev-evo-2021.pdf
11. Korobkov, A.D. (2021). The impact of artificial intelligence technologies on international relations. Bulletin of MGIMO-University, 1-25. doi:10.24833/2071-8160-2021-olf1
12. Thomas, S. (2023, March 11). AI is the end of writing. The computers will soon be here to do it better. The Spectator. [Electronic resource]. Retrieved from https://www.spectator.co.uk/article/ai-is-the-end-of-writing/
13. Afanasievskaya, A.A. (2021). Legal status of artificial intelligence. Bulletin of the Saratov State Law Academy, 4(141), 88-92. doi:10.24412/2227-7315-2021-4-88-92
14. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
15. Heidegger, M. (1991). Conversation on a Country Road: A Collection. Translated from the German; edited by A. L. Dobrokhotov. Moscow: Vysshaya Shkola.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The article under review examines the possible negative consequences of the widespread introduction into production, communication and management of the technical systems collectively referred to as "artificial intelligence". The author links these consequences to changes in social and interpersonal relations, to the preservation of the tradition of reproducing moral values in society and, most importantly, to the safeguarding of secure human existence. Although many publications have appeared on this subject in recent years, it should be acknowledged that the article does not merely restate familiar positions but also identifies new aspects of the topic: in particular, the author rightly points to the paramount importance of taking the social context into account when artificial intelligence systems are developed and deployed. He stresses the danger of a contradiction between the drive of the state and business to raise labour efficiency and the need to respect the responsibility to society borne by the specialists who develop and operate artificial intelligence systems. The philosophical substance of the author's reflections lies in the thesis that philosophical anthropology, as a discipline that directly attends to the specifics of "human nature" and measures against it the innovations brought forth by scientific and technological progress, can play an important constructive role in identifying and resolving the contradictions arising from the development of artificial intelligence.

Despite this generally high assessment of the article, some of its shortcomings must be mentioned. Serious objections concern the syntax and narrative style of many sentences. Consider, for example, the following one: "The subject of this study ... includes a set of potential contradictions ... considered for the possibility of ...". How could the author fail to notice that the "subject" occurs twice in this sentence? Unfortunately, many similar passages remain in the text. Or, a little further on: "... in order to more accurately delineate its (artificial intelligence's, – reviewer) boundaries, here we accept any systems, including their functionality, under this concept ...". One may speak of "systems", and one may speak of their "functionality", but one cannot speak of "functionality" in relation to "systems": the "systemic" character of the former and the "ideal" character of the latter do not allow them to be combined in a single syntactic construction. Further, both in the title and in the body of the text, I would recommend changing "contradictions in the development of artificial intelligence" to "contradictions in the development of ...". Certain features of the text's layout are also unjustified, above all the highlighting of statements that do not function as subheadings, which prevents the reader from seeing the structure of the narrative. Finally, the conclusion seems frankly weak: the article itself contains non-trivial content, and it is this content that should be summarized in conclusion. Nevertheless, the positive features of the reviewed article appear to outweigh its shortcomings, and the article can therefore be recommended for publication; the author will be able to address the noted deficiencies in the course of routine revision.