
Man and Culture
Cultural and philosophical foundations of artificial intelligence as a cultural phenomenon

Belikova Evgeniya Konstantinovna

ORCID: 0009-0001-7575-024X

PhD in Cultural Studies

Associate Professor; Department of English; Lomonosov Moscow State University

119991, Russia, Moscow, Leninskie Gory str., 1, building 52

jkbelikova@yandex.ru

DOI: 10.25136/2409-8744.2024.4.71324

EDN: PDQELM

Received: 23-07-2024

Published: 02-08-2024


Abstract: The object of this research is artificial intelligence (AI); the subject is the foundations of its understanding laid by representatives of the philosophy of culture. It is noted that the philosophical understanding of technics, technology, and AI began long before artificial intelligence was created as a technological phenomenon, which points to the essence of AI as a cultural phenomenon. Since the Modern era, representatives of the philosophy of culture have tried to comprehend AI, and their theoretical constructions often look prophetic today, having found full confirmation in our time. The earliest perspective of understanding AI culture was the interiorization one (the evaluation of human experience in relation to technology), which was later supplemented by the phatic (understanding the possibility of communication with AI) and the critical (criticism of man's interaction with technics and AI). The research was carried out using general scientific methods of analysis and synthesis, observation, description, etc. Special methods were also used: systemic-structural, dialectical, and cultural-historical. The main approach to the problem is interdisciplinary. The scientific novelty lies in identifying the different angles from which AI culture has been understood in the history of cultural and philosophical thought. It is noted that the interiorization perspective is most clearly manifested in the works of Hobbes, Leibniz, Descartes, Spinoza, Hume, and other thinkers seeking to understand the similarities between human and machine thinking and the possibility of reproducing natural intelligence in artificial intelligence. The phatic perspective on understanding AI culture is characteristic of the works of Rickert, Toffler, Derrida, Barthes, Foucault, and other philosophers who focused attention on the nature of the relationship between man and machine. A critical analysis of AI is manifested in the studies of Berdyaev, Heidegger, Sombart, Bell, and other authors who speak about the dangers associated with AI.


Keywords: artificial intelligence, technics, technology, philosophy of culture, cultural-philosophical foundations, cultural phenomenon, interiorization perspective, phatic perspective, critical perspective, communication with AI


Introduction

Artificial intelligence (AI) emerged as a technological phenomenon in the second half of the twentieth century, when representatives of the natural science disciplines began to develop it within the framework of computerization and digitalization; as a cultural phenomenon, however, it paradoxically has a much richer and longer history. The fact that the problems of the emergence of AI, its relationship to the essence of man, and its role in society began to be developed long before its invention by scientists cannot be accidental and deserves careful analysis. AI culture attracted a wide variety of researchers, philosophers, and cultural scientists even at a time when artificial intelligence itself did not yet exist. Based on the analysis of the essence of man, his consciousness and thinking, philosophers expressed thoughts about science, technics, and technology that often sounded like predictions. Already Aristotle, in his "Logic," spoke about the creation of beings that could be constructed in the likeness of a human and would be able to match and replace a human being [1]. Syllogistic logic, created by the ancient Greek philosopher, remains one of the key strategies of AI to this day. It was in the philosophical studies of the Modern era, the Enlightenment, and postmodernism, however, that such research became especially substantive and promising.
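To make the last claim concrete, the short sketch below (an editorial illustration, not part of the original article; all names in the code are hypothetical) shows the classic syllogism "all humans are mortal; Socrates is a human; therefore Socrates is mortal" derived by simple rule-based inference in Python, the kind of symbolic reasoning that descends from Aristotelian logic.

def transitive_closure(facts):
    """If (a, b) and (b, c) are known, derive (a, c) until nothing new appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(facts):
            for c, d in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))  # "all A are B" and "all B are D" yield "all A are D"
                    changed = True
    return facts

# "Socrates is a human" and "all humans are mortal"
facts = {("socrates", "human"), ("human", "mortal")}
print(("socrates", "mortal") in transitive_closure(facts))  # prints True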

Literature review

Modern scholars increasingly turn their attention to the constructions and conclusions of philosophers of earlier times. Searching the works of philosophers of previous centuries and decades for the cultural and philosophical foundations of a particular phenomenon, including AI, allows us to understand that phenomenon better. Modern philosophers and cultural scientists read the works of the classics with fresh eyes and see in them many threads leading to AI. I. Yu. Alekseeva shows that the computer image of human knowledge has long been a subject of philosophical research; in particular, she points to T. Hobbes's prophetic anticipations of AI [2, p. 184]. S. G. Ushkin states that in "Leviathan" T. Hobbes predicted the revolution of artificial intelligence and big data [3]. J. Haugeland calls T. Hobbes the "grandfather of artificial intelligence" [4, p. 23]. Among G. V. Leibniz's ideas that influenced the development of computer science and research in the field of artificial intelligence, N. Yu. Klyueva highlights the scientist's desire to "reduce all scientific reasoning to mathematical calculations" [5, p. 82]. I. D. Margolin and N. P. Dubovskaya note that in the writings of B. Spinoza a "hypothesis about a physical symbolic system" is formulated, which would become the basis for research in the field of artificial intelligence [6, p. 23]. K. M. Laufer, having studied B. Spinoza's position on the social essence of man, states that the philosopher denied the possibility of a machine incapable of social interaction fully replicating the human mind [7, p. 224]. V. N. Bryushinkin [8] and D. P. Kozolupenko [9] see signs of AI and theoretical foundations for research in the field of AI in the model of the world constructed by I. Kant. A. V. Mikhailovsky characterizes W. Sombart as a critic of technology [10]. V. E. Kostryukov turns to the theory of the "death of the author" developed by the postmodernists and connects it with the problem of defining the boundaries between a person and a neural network [11]. Despite the significant number of works in which modern philosophers find prerequisites for the development of AI in the writings of philosophers of earlier periods, no holistic study has yet been conducted in which the cultural and philosophical foundations of AI as a cultural phenomenon would be systematized.

The purpose of the article is to identify the cultural and philosophical foundations of AI and to determine the stages of the scholarly development of this phenomenon in the period before it received its modern technological embodiment.

The main part

The research of many philosophers working in the period when AI did not yet exist technologically can be considered a forerunner of the construction of AI culture and of attempts to substantiate it in cultural-philosophical terms. There are several perspectives of the cultural and philosophical understanding of AI culture, of which the most significant are the interiorization, phatic, and critical ones.

The interiorization perspective of understanding AI culture

The interiorization perspective is chosen when philosophers try to evaluate human experience in relation to technics and technology, to highlight aspects of the practical coexistence of man and technology, to assess the essence of an artificial intelligence that would one day be created on the model of the natural one, and to figure out whether it is capable of completely replicating human consciousness and how it will affect culture.

The visionary image of an artificial intelligence coexisting with man and striving to dictate rules to him, providing protection in return, is created by T. Hobbes in "Leviathan." The huge Leviathan depicted by the author "retains its resemblance to the natural one," since the processes in its body are carried out according to the same rational laws as those of the natural one [12, p. 269]. G. V. Leibniz, who believed in the power of science and technology and sought to "come up with a certain alphabet of human thoughts" [13, p. 414], an idea very similar to modern programming languages, speaks about the rational structure of human thinking and the possibility of describing it rationally. R. Descartes considers human thinking a completely material phenomenon based on universal models embedded in the intellect. According to R. Descartes, thought is decomposed into "operations of our intellect" [14, p. 182], and this process can be performed as many times as necessary to solve any problem, however difficult. Descartes' approach to thinking as a set of operations and to man as a machine ("God created our body like a machine and wished it to act as a universal tool" [Ibid., p. 469]) suggests that the creation of a machine thinking like a man is quite possible. B. Spinoza, who understands man as a unity of the bodily, the mental, and the social, also believes in the boundless power of reason and is confident that the human mind is constantly improving itself, working much like an operational system: "... By his natural power, he creates mental tools for himself (instrumenta intellectualia), from which he acquires other powers for other mental works, and from these works other tools" [15, p. 329]. I. Kant, whose reflections on reason, thought, and thinking lead him to the idea of the possibility of autonomous thinking existing outside of man, speaks about modeling the process of cognition, about the "schematism of the mind" [16, p. 127], the division of thought into maxims (operations), and so on.

In Enlightenment philosophy, the interiorization perspective on understanding technology is found in the works of D. Hume, who spoke about "atomic impressions" that make up more complex mental compositions. D. Hume compares man, animals, and the whole world with machines, pointing to their similar structure and similar causes (grounds of origin) [17, p. 400]. The image of the machine helps the thinker comprehend all the phenomena of the world, living and inanimate, and he arrives at the idea of an "artificial machine," whose sources he calls "reason and premeditation" [Ibid., p. 430].

The interiorization approach was the first in the understanding of AI by thinkers who needed to adapt to a world gradually filling with technics and technology, to explain the complex connections between humans and machines, to analyze and describe the first experience of perceiving technology, and to predict further developments.

This perspective of the cultural and philosophical understanding of AI remains the main one in the further development of philosophical thought, but it is gradually complemented by other perspectives that demonstrate the desire of thinkers to understand the essence of technics and technology in greater depth and detail.

The phatic perspective of understanding AI culture

Gradually, man realizes that a very important point in the perception of technics and technology is the establishment of relationships with them, the organization of contact and communication. This forms the phatic perspective on the perception of technology and AI. Philosophers need to understand, firstly, whether such communication is possible at all (or whether man and machine are so different from each other that it is unrealistic); secondly, what it will be like (if it is possible); and thirdly, whether it will be conducted on equal terms, that is, whether it is conceivable to create a machine commensurate with man in mind and consciousness. Philosophers resolve these issues in different ways.

The neo-Kantian G. Rickert speaks about the insurmountable boundary between the natural and the artificial, between the humanitarian and the technical, forming a point of view characterized as "Rickert's pessimism." The main difference between these spheres, according to G. Rickert, is, respectively, the ability and the inability to express individuality. The creation of artificial intelligence is called into question by the philosopher, since the natural sciences "are unable to introduce into their concepts reality in its individuality" [18, p. 112].

The issues of dialogue with machines are becoming especially relevant for postmodern philosophers who observe the formation and development of post-industrial society and the expansion of the use of machines primarily due to their information function. Philosophers make very interesting observations, many of which are also visionary.

Thus, E. Toffler in "The Third Wave" (1980) predicts the individualization and personalization of virtual space, its focus on "personal information requests" [19, p. 560], which we observe today with the advent of artificial neural networks. D. Bell speaks of AI as a phenomenon with which a person correlates his "I," trying to comprehend himself in comparison with the machine [20].

The concept of the "death of the author," important for postmodernism, correlates with the idea of AI. J. Derrida notes that the author (subject) in the modern world loses his uniqueness and singularity, becomes blurred, loses his "punctual simplicity" [21, p. 288]. R. Barthes declares the elimination of the author at all levels of the text and his replacement by a scriptor who does not exist outside the text and possesses no individual traits, emotions, moods, ideas, or creative potential. The scriptor creates the text without particular deliberation and quite contradictorily, using "an immense dictionary from which he draws his writing, which knows no stop" [22, p. 389]. The main thing for the text is not the author but the reader, which is quite consistent with the modern mode of AI authorship, which stays in the background, positioning itself as a secondary formation. M. Foucault also speaks about the loss of the author, but notes that this loss is relative and is not observed in all cases. The author becomes different; his image is changeable and acts as a "variable" characteristic of the information space [23, p. 40].

The phatic perspective of understanding AI culture allows philosophers to notice the impact artificial intelligence has on humans, transforming their perception of reality and themselves.

The critical perspective of understanding AI culture

It has always been obvious to philosophers that the relationship between man and machine cannot be simple, and that technology produces many problems requiring critical reflection. The understanding of AI culture in this case stems from the idea of confrontation between man and machine, of their rivalry, and of the harm that AI can cause to a person. At issue is also the possibility and danger of a person losing his own uniqueness alongside the machine.

R. Descartes also says that machines are unable to acquire a soul and full-fledged thinking, and that their mind cannot be compared with the human one, even if it appears similar [14, p. 283]. He is opposed by J. O. de La Mettrie, who equates man and machine on the basis of knowledge about the physiology of the human body [24, p. 184]. In the early stages of the development of technology, philosophers saw the danger of man being equated with a machine as one of the most serious; as machines acquired new capabilities, thinkers gradually came to see many more threats emanating from technology.

N. A. Berdyaev shows considerable technopessimism in his work "Man and Machine." The philosopher speaks of the danger of man being displaced by the machine, of man's estrangement from his own nature in the world of machines, and of the blow that technology deals to humanism. What depresses the author in this situation is that man can no longer get rid of technology and is forced to adapt to it, to survive in a world filled with existential threats, in conditions in which he may lose his emotions and soul, when "the mental and emotional element is fading in modern civilization" [25, p. 23].

M. Heidegger, in "The Question Concerning Technology" (1954), like N. A. Berdyaev, speaks about the danger of thoughtlessly accepting the comforts that machines provide, without a philosophical and cultural understanding of the harm they can cause to man. In this case, man risks becoming a slave to machines, forgetting how to live without them and to do without their help. According to M. Heidegger, machines cannot have only a technological, instrumental essence: "... The essence of technology is not something technical at all" [26, p. 221]; to think so is to be deceived and to expose oneself to the danger of becoming dependent on machines.

Many philosophers began to speak of the negative impact of technology on the public and personal spheres, and they did so long before such an impact took shape and became obvious. W. Sombart also suggests taking a critical look at the benefits brought by machines, since, in his opinion, the need for these benefits is produced by the very existence of machines [27, S. 305].

The destructive power of machines prompts many philosophers to make gloomy predictions about their role in a new type of world war, more technologically advanced and deadly than before. According to D. Bell, computers radically transform war and give it a new face, and war changes "under the 'terrible' control of science" [20, p. 28]. A war waged with the use of AI can not only cause enormous damage to humanity but destroy it altogether. This thesis has become the most powerful argument in cultural-philosophical studies written from the critical perspective, the number of which has been growing in recent years.

Conclusion

It is obvious that the humanistic, existential, cultural essence is no less important for AI than the technological one. That is why representatives of the scholarly community realized the possibility and even the inevitability of the appearance of this phenomenon long before its scientific and technological embodiment and hastened to open a discussion about the relationship between natural and artificial intelligence, about the likelihood of creating an AI that completely replicates human consciousness, and about the dangers that AI may harbor.

At all times, the basis of cultural and philosophical research in the field of technics, technology, and AI has been the desire to understand the essence of AI culture and the possibility of reproducing the thought process and human consciousness by means of machines. Over the centuries of reflection on AI, several points of view on this cultural phenomenon have emerged (angles, ways of seeing). The interiorization perspective was the first. It represents an assessment by philosophers (such as T. Hobbes, G. V. Leibniz, R. Descartes, B. Spinoza, D. Hume, etc.) of the experience of interaction between man and machine, of the possibility of their equality or inequality, of the organization of their coexistence, and of the influence of machines on culture and on the essence of man. The phatic perspective is, in fact, also an interiorization one, but it focuses on issues of dialogue and human-machine communication: whether technology is capable of pushing man into the background, displacing him from the communicative space, transforming the world in such a way that man ceases to be its main communicant (G. Rickert, E. Toffler, J. Derrida, R. Barthes, M. Foucault, etc.). The critical perspective of understanding AI culture turned out to be the most widespread in the studies of the XX–XXI centuries (N. A. Berdyaev, M. Heidegger, W. Sombart, D. Bell, etc.), when the dangers and threats that come from AI and require a philosophical, cultural view became obvious.

The prospects of the research conducted in this paper consist in applying the theoretical constructions of philosophers of various schools and historical periods concerning technics, technology, and AI to the current situation in the development of artificial intelligence, computers, and neural networks. Today we can see how accurate the visionary statements of representatives of the philosophy of culture about AI and about the spread of technics and technology in human and social life were, and we can evaluate the logical, cultural, and other reasons why philosophers have been able to build various ways of understanding AI over the centuries.

References
1. Aristotle. (1978). Works: in 4 volumes. V. 2. Moscow: Mysl.
2. Alekseeva, I. Yu. (1993). Human knowledge and its computer image. Moscow: Institute of Philosophy RAS.
3. Ushkin, S. G. (2024). Leviathan 2.0: how did Thomas Hobbes predict the revolution of artificial intelligence and big data? Monitoring of public opinion: economic and social changes, 1, 276–283. doi:10.14515/monitoring.2024.1.2525
4. Haugeland, J. (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
5. Klyueva, N. Yu. (2017). The influence of Leibniz's ideas on the development of computer science and research in the field of artificial intelligence. Bulletin of Moscow University. Series 7: Philosophy, 4, 79–92.
6. Margolin, I. D., & Dubovskaya, N. P. (2018). Main stages of development of artificial intelligence. Young scientist, 20(206), 23–26.
7. Laufer, K. M. (2022). Descartes Vs. Spinoza: is machine civilization possible? In: Century XXI. Digitalization: challenges, risks, prospects: materials of the international scientific and practical conference (pp. 13–21). Moscow: MIET.
8. Bryushinkin, V. N. (1990). Kant and “artificial intelligence”: models of the world. Kant collection: Interuniversity thematic collection of scientific works, 1(15), 80–89.
9. Kozolupenko, D. P. (2023). Between consciousness and intellect: the transcendental philosophy of I. Kant as a theoretical basis for research in the field of artificial intelligence. In: Transcendental turn in modern philosophy – 8: Metaphysics, epistemology, cognitive science and artificial intelligence: collection of abstracts of the international scientific conference (pp. 117–121). Moscow: RSUH.
10. Mikhailovsky, A. V. (2024). Werner Sombart as a critic of technology. Sociological Review, 23(2), 260–282. doi:10.17323/1728-192x-2024-2-260-282
11. Kostryukov, V. E. (2023). “The Death of the Author” in the era of artificial intelligence: the boundaries of man and the neural network. In: Man in the information society: collection of materials of the II international scientific and practical conference (pp. 855–858). Samara: SNIU.
12. Hobbes, T. (2022). Leviathan. Human nature. About freedom and necessity. Moscow: Azbuka.
13. Leibniz, G. V. (1984). Works: In 4 vols. V. 3. Moscow: Mysl.
14. Descartes, R. (1989). Discourse on the method to correctly direct your mind and find the truth in the sciences. In: R. Descartes. Works: in 2 vols. V. 1 (pp. 250–296). Moscow: Mysl.
15. Spinoza, B. (1957). Treatise on the improvement of the mind. In: B. Spinoza. Selected works: in 2 vols. V. 1 (pp. 317–358). Moscow: Gospolitizdat.
16. Kant, I. (1994). Critique of Pure Reason. Moscow: Mysl.
17. Hume, D. (1996). Works: in 2 vols. V. 2. Moscow: Mysl.
18. Rickert, G. (1998). Sciences of nature and sciences of culture. Moscow: Respublika.
19. Toffler, E. (1999). The Third Wave. Moscow: AST.
20. Bell, D. (2004). The Coming Post-Industrial Society. Experience in social forecasting. Moscow: Academia.
21. Derrida, J. (2000). Writing and difference. St. Petersburg: Akademichesky Proekt.
22. Barthes, R. (1989). The death of the author. In: R. Barthes. Selected works: Semiotics. Poetics (pp. 384–391). Moscow: Progress.
23. Foucault, M. (1996). What is an author? In: M. Foucault. The will to truth: beyond knowledge, power and sexuality: collection (pp. 7–46). Moscow: Magisterium, Castal.
24. La Mettrie, J. O. (1983). Works. Moscow: Mysl.
25. Berdyaev, N. A. (1933). Man and machine (the problem of sociology and metaphysics of technology). Path, 38, 3–38.
26. Heidegger, M. (1993). The Question Concerning Technology. In: M. Heidegger. Time and Being: Articles and Speeches (pp. 221–237). Moscow: Respublika.
27. Sombart, W. (1934). Deutscher Sozialismus. Berlin-Charlottenburg: Buchholz & Weisswange.

First Peer Review


The subject of the research in the article submitted for publication in the journal "Man and Culture," as the author reflected in the title ("Cultural and philosophical foundations of artificial intelligence as a cultural phenomenon") and explained in the introduction, is the cultural phenomenon of the philosophical understanding of artificial intelligence (AI) that preceded its technological development. And although the author does not dwell on formalizing the object of research, it is quite obvious that it is a special theoretical discourse around the idea of AI, covered by the author from three perspectives: interiorization (T. Hobbes, G. V. Leibniz, R. Descartes, B. Spinoza, D. Hume, etc.), phatic (G. Rickert, E. Toffler, J. Derrida, R. Barthes, M. Foucault, etc.), and critical (N. A. Berdyaev, M. Heidegger, W. Sombart, D. Bell, etc.). The author's typology of the perspectives of the philosophical understanding of AI before its direct technological development is based on identifying the main reason for the philosophers' turn to the understanding of a non-existent but probable phenomenon. The interiorization perspective is understood as an attempt by philosophers to evaluate the ever-expanding experience of human-machine interaction, including probable models of equality or inequality between man and machine within the framework of their coexistence, as well as "the influence of machines on culture and on the essence of man." By the phatic perspective, the author understands a separate direction of the interiorization discourse, focusing on issues of human-machine communication, as well as the ability of technology to "push a person into the background, oust him from the communicative space, transform the world in such a way that a person ceases to be the main communicant in it." The critical perspective, according to the author, is formed mainly around philosophers' attempts to assess the likely risks of the emergence and development of AI, including its uncontrolled transformation of man and society and of the basic foundations of culture and civilization. The author draws attention to the visionary conclusions of individual philosophers who predicted and described some probable phenomena of social life associated with trends in technological development; that is, by logically modeling the probability of certain trends and phenomena, thinkers, according to the author, anticipated what is becoming reality before our eyes. In essence, the article under review is methodological in character, problematizing a field of prospective research. The object itself (a special theoretical discourse around the idea of AI) is, as the reviewer notes, a rather actively developed area. The novelty of the presented research is the author's attempt to propose his own typology of "visionary" (futurological) theoretical discourse. The practical application of the proposed typology is obviously expected in further research. In the present article, a typological model is outlined and the purpose of its application is defined: to establish "how accurate the visionary statements of representatives of the philosophy of culture about AI and about the spread of technics and technology in human and social life were," as well as to assess the logical, cultural, and other reasons "why philosophers have been able to build various ways of understanding AI." The reviewer notes that the proposed typological model has both strengths and weaknesses.
The strengths include the non-identity, in terms of content, of the selected perspectives of understanding: from the first to the last, one can see the evolution of the ways of understanding from purely abstract conclusions to more empirical ones, including the essential aspects of the development of communication and the probable risks of social development that become obvious under certain conditions. It is also likely that as the author delves deeper into the problem, the corpus of analyzed literature will expand and eventually cover both the fundamental problem of the very possibility or impossibility of AI and the problem of the technological singularity, which is most urgent today. The reviewer emphasizes that the body of special literature constituting the object of research in the presented article is defined only in outline, and without its expansion the advantages of the author's typological model presented to the reader are not obvious. The methodologically weakest side of the author's model is the absence of specific criteria for the accuracy of "visionary statements of representatives of cultural philosophy about AI." The reviewer, in particular, draws the author's attention to the fact that understanding the realities of today through the prism of "visionary statements," or futurological concepts displaced in time into the past, has a limit of relevance: the optics of interpreting real phenomena of modern reality become confined for the author within the conceptual framework of the "visionary statements" themselves. In other words, one can see only what, and only in the way, the thinker of the past drew attention to, while failing to notice the richer palette of real-life phenomena. This is the so-called metaphysical paradox, which Aristotle drew attention to but ignored within the framework of the Organon, and which L. Wittgenstein emphasized with renewed vigor: does a person see only that part of reality which is named by a word or by conceivable models? Such a deadlock can be avoided within the framework of the typological model proposed by the author only by constantly comparing the detected "foreseen" with a different interpretation of what is detected. Thus, the subject of the study (the cultural phenomenon of the philosophical understanding of artificial intelligence (AI) that preceded its technological development) is presented by the author in his own original vision (the author's typological model), which has a certain heuristic potential. The research methodology is based on the author's typology of a separate body of theoretical literature limited by a thematic sample. The author has identified a promising problem and proposed an original way to solve it. Thus, the author has coped with the methodological task of problematization (problem statement), and the goals of the planned prospective research set out in the conclusion can be considered achievable under certain conditions. Therefore, the result achieved by the author deserves theoretical attention. The author explains the relevance of the chosen topic by the fact that one can observe today how certain "visionary statements of representatives of cultural philosophy about AI" come true. It is indeed interesting to study the causes and conditions of such forecasts. The scientific novelty of the research, expressed in the typology of philosophical thought proposed by the author, suggests further practical application and is beyond doubt.
The style of the text as a whole is maintained by the author at a scholarly level, but the reviewer draws attention to the fact that some statements that practically lose their meaning should be formulated more clearly ("Aristotle also spoke with his theory of correct thinking", "A.V. Mikhailovsky evaluates as a criticism of V. Sombart's technique", etc.): the text needs literary editing and proofreading. The structure of the article follows the logic of presenting the results of scientific research. The bibliography sufficiently covers the problem area. The engagement with opponents is correct and, in principle, sufficient, although the reviewer notes that the topic chosen by the author is extremely controversial, and the author will most likely face heated discussions with colleagues. The article is of interest to the readership of the journal "Man and Culture" and, after stylistic refinement of individual statements, can be recommended for publication.

Second Peer Review


The author submitted to the journal "Man and Culture" the article "Cultural and philosophical foundations of artificial intelligence as a cultural phenomenon," which studies the philosophical and cultural justification of the possibility of the existence of an artificial consciousness equal to human consciousness. In studying this issue, the author proceeds from the fact that, as a cultural phenomenon, artificial intelligence (AI) has a much richer and longer history. The study of the problem of the emergence of AI, of its relationship to the essence of man, and of its role in society began long before its actual invention. AI culture attracted a wide variety of researchers, philosophers, and cultural scientists even at a time when artificial intelligence itself did not yet exist: philosophers expressed thoughts about science, technics, and technology that often sounded like prescience. The relevance of this research stems from the need for a cultural justification of the interpenetration of philosophical thought and dynamically developing information technologies. Accordingly, the purpose of this study is to identify the cultural and philosophical foundations of AI. To achieve this goal, the author sets the task of determining the stages of the scholarly development of this phenomenon in the period before it received its modern technological embodiment. As methodological justification, the author applies an integrated approach that includes both general scientific research methods (analysis and synthesis, classification) and philosophical and content analysis of works devoted to the problems under study. The theoretical basis comprises the works both of world-famous researchers, such as T. Hobbes, R. Descartes, D. Bell, E. Toffler, N. A. Berdyaev, and M. Heidegger, and of young Russian scholars. Based on the analysis of the scholarly elaboration of the problem, the author concludes that the scientific community shows increased interest in various aspects of the problem under study and that a sufficient volume of scientific work exists. Nevertheless, despite the significant number of works in which modern philosophers find prerequisites for the development of AI in the works of philosophers of earlier periods, no holistic study has yet been conducted in which the cultural and philosophical foundations of AI as a cultural phenomenon would be systematized. The scientific novelty of this study lies in the systematization of existing philosophical works from the point of view of their treatment of the potential of artificial intelligence. The practical significance of the research lies in the possibility of applying the theoretical constructions of philosophers of various schools and historical periods regarding technics, technology, and AI to the current situation in the development of artificial intelligence, computers, and neural networks. For the purposes of the study, the author identifies several perspectives of the cultural and philosophical understanding of AI culture: interiorization, phatic, and critical. The interiorization perspective (T. Hobbes, R. Descartes, etc.) the author links to philosophers' attempts to evaluate human experience in relation to technics and technology, to highlight aspects of the practical coexistence of man and technology, to assess the essence of an artificial intelligence that would one day be created on the model of the natural one, and to figure out whether it is able to completely replicate human consciousness and how it will affect culture.
The interiorization approach was the first in the understanding of AI by thinkers who needed to adapt to a world gradually filling with technics and technology, to explain the complex connections between humans and machines, to analyze and describe the first experience of perceiving technology, and to predict further developments. The phatic perspective on the perception of technology and AI (M. Foucault, G. Rickert, E. Toffler) allows philosophers to understand, firstly, whether such communication is possible (or whether man and machine are so different from each other that it is unrealistic); secondly, what it will be like (if it is possible); and thirdly, whether it will be conducted on an equal footing, that is, whether it is conceivable to create a machine commensurate with man in mind and consciousness. The understanding of AI culture from the critical perspective (N. A. Berdyaev, M. Heidegger, D. Bell) follows from the idea of confrontation between man and machine, of their rivalry, of the harm that AI can cause to a person, and of the danger of a person losing his own uniqueness alongside the machine. In conclusion, the author presents the results of the research, covering all the key provisions of the presented material. It appears that in his material the author has touched upon issues that are relevant and interesting for modern socio-humanitarian knowledge, choosing a topic whose consideration in scientific research discourse will entail certain changes in the established approaches and directions of analysis of the problem addressed in the article. The results obtained allow us to assert that a comprehensive study of the impact of technological progress on socio-cultural transformations is of undoubted practical cultural interest and can serve as a source of further research. The material presented in the work has a clear, logical structure that contributes to a more complete assimilation of the material. An adequate choice of methodological base also contributes to this. The text of the article is written in a scientific style. The bibliography of the research consists of 27 sources, which seems sufficient for the generalization and analysis of the scholarly discourse. The author achieved his goal, obtained certain scientific results that made it possible to summarize the material, and showed deep knowledge of the issues studied. It should be noted that the article may be of interest to readers and deserves to be published in a reputable scientific publication.