Reference: Belikova E. K. Cultural and philosophical foundations of artificial intelligence as a cultural phenomenon // Man and Culture. 2024. No. 4. P. 88-99. DOI: 10.25136/2409-8744.2024.4.71324. EDN: PDQELM. URL: https://en.nbpublish.com/library_read_article.php?id=71324
Cultural and philosophical foundations of artificial intelligence as a cultural phenomenon
DOI: 10.25136/2409-8744.2024.4.71324
EDN: PDQELM
Received: 23-07-2024
Published: 02-08-2024

Abstract: The object of the research is artificial intelligence (AI); the subject of the study is the grounds on which representatives of the philosophy of culture have developed it. It is noted that the philosophical understanding of technics, technology, and AI began long before artificial intelligence was created as a technological phenomenon, which points to the essence of AI as a cultural phenomenon. Since the early modern period, representatives of the philosophy of culture have tried to comprehend AI, and their theoretical constructions often look prophetic today, having found full confirmation in our time. The earliest perspective of understanding AI culture was the interiorization one (the evaluation of human experience in relation to technology), which was then supplemented by the phatic (the understanding of the possibility of communication with AI) and the critical (the criticism of interaction with technology and AI). The research was carried out using general scientific methods of analysis and synthesis, observation, description, etc. Special methods were also used: the systemic-structural, the dialectical, and the cultural-historical. The main approach to the problem was interdisciplinary. The scientific novelty lies in identifying the different angles from which AI culture has been understood in the history of cultural and philosophical thought. It is noted that the interiorization perspective is most clearly manifested in the works of Hobbes, Leibniz, Descartes, Spinoza, Hume, and other thinkers seeking to understand the similarities between human and machine thinking and the possibility of reproducing natural intelligence in artificial intelligence. The phatic perspective of understanding AI culture is characteristic of the works of Rickert, Toffler, Derrida, Barthes, Foucault, and other philosophers who focused attention on the nature of the relationship between man and machine. A critical analysis of AI is manifested in the studies of Berdyaev, Heidegger, Sombart, Bell, and other authors who speak of the dangers associated with AI.

Keywords: artificial intelligence, technics, technology, philosophy of culture, cultural-philosophical foundations, cultural phenomenon, interiorization perspective, phatic perspective, critical perspective, communication with AI

Introduction

Artificial intelligence (AI) as a technological phenomenon emerged in the second half of the twentieth century, when representatives of the natural science disciplines began to develop it within the framework of computerization and digitalization; as a cultural phenomenon, however, it paradoxically has a much richer and longer history. The fact that the problems of AI's emergence, its relation to the essence of man, and its role in society began to be worked out long before scientists invented it cannot be accidental and deserves careful analysis. AI culture attracted a wide variety of researchers, philosophers, and cultural scholars even at a time when artificial intelligence itself did not yet exist. Proceeding from the analysis of the essence of man, his consciousness and thinking, philosophers expressed ideas about science, technics, and technology that often sounded like predictions. Already Aristotle, in his "Logic," spoke of the creation of beings that could be built like a human and would be able to match and replace a human being [1].
Syllogistic logic, created by the ancient Greek philosopher, remains one of the key AI strategies today. It was in the philosophical studies of the early modern period, the Enlightenment, and postmodernism, however, that such research became especially substantive and promising.

Literature review

Modern scholars are increasingly turning their attention to the constructions and conclusions made by philosophers of previous times. Searching the works of philosophers of previous centuries and decades for the cultural and philosophical foundations of a particular phenomenon, including AI, allows us to understand that phenomenon better. Modern philosophers and cultural scholars read the works of the classics with new eyes and see in them many threads leading to AI. I. Yu. Alekseeva shows that the computer image of human knowledge has long been a subject of philosophical research; in particular, she speaks of T. Hobbes's prophetic anticipations of AI [2, p. 184]. S. G. Ushkin states that T. Hobbes in "Leviathan" predicted the revolution of artificial intelligence and big data [3]. J. Haugeland calls T. Hobbes the "grandfather of artificial intelligence" [4, p. 23]. Among G. V. Leibniz's ideas that influenced the development of computer science and research in the field of artificial intelligence, N. Yu. Klyueva highlights the scientist's desire to "reduce all scientific reasoning to mathematical calculations" [5, p. 82]. I. D. Margolin and N. P. Dubovskaya note that in the writings of B. Spinoza a "hypothesis about a physical symbolic system" is formulated, which would become the basis for research in the field of artificial intelligence [6, p. 23]. K. M. Laufer, having studied B. Spinoza's position on the social essence of man, states that the philosopher denied the possibility of a machine incapable of social interaction completely reproducing the human mind [7, p. 224]. V. N. Bryushinkin [8] and D. P. Kozolupenko [9] see in the model of the world constructed by I. Kant signs of AI and theoretical foundations for research in the field of AI. A. V. Mikhailovsky characterizes W. Sombart as a critic of technology [10]. V. E. Kostryukov turns to the theory of the "death of the author" developed by the postmodernists and connects it with the problem of defining the boundaries between a person and a neural network [11]. Despite the considerable number of works in which modern philosophers find prerequisites for the development of AI in the writings of philosophers of earlier periods, no holistic study has yet been conducted in which the cultural and philosophical foundations of AI as a cultural phenomenon are systematized. The purpose of this article is to identify the cultural and philosophical foundations of AI and to determine the stages of scholarly development of this phenomenon in the period before it received its modern technological form.

The main part

The research of many philosophers working in the period when AI did not yet exist technologically can be considered the forerunner of the construction of AI culture and of attempts to substantiate it in cultural-philosophical terms. There are several perspectives of the cultural and philosophical understanding of AI culture, of which the most significant are the interiorization, the phatic, and the critical.
The interiorization perspective of understanding AI culture

The interiorization perspective is chosen when philosophers try to evaluate human experience in relation to technics and technology, to highlight aspects of the practical coexistence of man and machine, to assess the essence of an artificial intelligence that would one day be created on the model of the natural one, and to figure out whether it is capable of completely reproducing human consciousness and how it will affect culture. A visionary image of an artificial intelligence coexisting with man and seeking to dictate rules to him in exchange for protection is created by T. Hobbes in "Leviathan." The huge Leviathan depicted by the author "retains its resemblance to the natural one," since the processes in its body are carried out according to the same rational laws as in the natural one [12, p. 269]. G. V. Leibniz, who believed in the power of science and technology and sought to "come up with a certain alphabet of human thoughts" [13, p. 414], an idea very similar to modern programming languages, speaks of the rational structure of human thinking and the possibility of describing it rationally. R. Descartes considers human thinking a completely material phenomenon based on universal models embedded in the intellect. According to R. Descartes, thought is decomposed into "operations of our intellect" [14, p. 182], and this process can be performed as many times as necessary to solve even the most difficult problem. Descartes' approach to thinking as a set of operations, and to man as a machine ("God created our body like a machine and wished it to act as a universal tool" [Ibid., p. 469]), implies that the creation of a machine that thinks like a man is quite possible. B. Spinoza, who understands man as a unity of the bodily, the mental, and the social, also believes in the boundless power of reason and is confident that the human mind, like an operational system, is constantly improving itself: "...by his natural power, he creates mental tools for himself (instrumenta intellectualia), from which he acquires other powers for other mental works, and from these works other tools" [15, p. 329]. I. Kant speaks of modeling the process of cognition; his reflections on reason, thought, and thinking lead him to the idea of the possibility of autonomous thinking existing outside of man, to the "schematism of the mind" [16, p. 127], to the division of thought into maxims (operations), etc. In the philosophy of the Enlightenment, the interiorization perspective of understanding technology is found in the works of D. Hume, who spoke of "atomic impressions" that make up more complex mental compositions. D. Hume compares man, animals, and the whole world with machines, pointing to their similar structure and similar causes (grounds for their occurrence) [17, p. 400]. The image of the machine helps the philosopher comprehend all the phenomena of the world, living and inanimate, and he arrives at the idea of an "artificial machine," whose sources he calls "reason and premeditation" [Ibid., p. 430]. The interiorization approach was the first in the understanding of AI: thinkers needed to adapt to a world gradually being filled with technics and technology, to explain the complex connections between humans and machines, to analyze and describe the first experience of perceiving technics and technology, and to predict further developments.
This perspective of the cultural and philosophical understanding of AI remains the main one in the further development of philosophical thought, but it is gradually complemented by other perspectives that demonstrate the desire of thinkers to understand the essence of technics and technology in more depth and detail.

The phatic perspective of understanding AI culture

Gradually, man realizes that a very important point in the perception of technics and technology is the establishment of relationships with them, the organization of contact and communication. This forms the phatic perspective on the perception of technology and AI. Philosophers need to understand, first, whether such communication is possible at all (or whether man and machine are so different from each other that it is unrealistic); second, what it would be like (if it is possible); and third, whether it would be conducted on equal terms, that is, whether it is conceivable to create a machine commensurate with man in mind and consciousness. These questions are answered in different ways. The neo-Kantian G. Rickert speaks of an insurmountable boundary between the natural and the artificial, between the humanitarian and the technical, forming a point of view that has been characterized as "Rickert's pessimism." The main difference between these spheres, according to G. Rickert, is, respectively, the ability and the inability to express individuality. The creation of artificial intelligence is called into question by the philosopher, since the natural sciences "are unable to introduce reality in its individuality into their concepts" [18, p. 112]. The issues of dialogue with machines become especially relevant for postmodern philosophers, who observe the formation and development of post-industrial society and the expanding use of machines primarily in their information function. These philosophers make very interesting observations, many of which are also visionary. Thus, E. Toffler in "The Third Wave" (1980) predicts the individualization and personification of virtual space and its focus on "personal information requests" [19, p. 560], which we observe today with the advent of artificial neural networks. D. Bell speaks of AI as a phenomenon with which a person correlates his "I," trying to understand himself in comparison with the machine [20]. The concept of the "death of the author," important for postmodernism, correlates with the idea of AI. J. Derrida notes that the author (the subject) in the modern world loses his uniqueness and singularity, becomes blurred, loses his "point simplicity" [21, p. 288]. R. Barthes declares the elimination of the author at all levels of the text and his replacement by a scriptor who does not exist outside the text and possesses no individual traits, emotions, moods, ideas, or creative potential. The scriptor creates the text without much deliberation and quite contradictorily, using "an immense dictionary from which he draws his writing, which does not know a stop" [22, p. 389]. The main thing for the text is not the author but the reader, which is quite consistent with the modern mode of AI authorship, in which the AI remains in the background, positioning itself as a secondary entity. M. Foucault also speaks of the loss of the author, but notes that this loss is relative and is not observed in all cases. The author becomes different; his image is changeable and acts as a "variable" characteristic of the information space [23, p. 40].
The phatic perspective of understanding AI culture allows philosophers to notice the impact artificial intelligence has on humans, transforming their perception of reality and of themselves.

The critical perspective of understanding AI culture

It has always been obvious to philosophers that the relationship between man and machine cannot be simple, that technology produces many problems requiring critical reflection. The understanding of AI culture in this case stems from the idea of confrontation between man and machine, of their rivalry, and of the harm that AI can cause to a person. At issue is also the possibility and danger of a person losing his own uniqueness next to the machine. Already R. Descartes says that machines are not able to acquire a soul and full-fledged thinking, and that their mind cannot be compared with a human one, even if it appears similar [14, p. 283]. He is opposed by J. O. de La Mettrie, who equates man and machine on the basis of knowledge of the physiology of the human body [24, p. 184]. In the early stages of the development of technology, the danger of man being likened to a machine was seen by philosophers as one of the most serious; as machines acquired new capabilities, thinkers gradually saw many more threats coming from technology. N. A. Berdyaev shows considerable technopessimism in his work "Man and Machine." The philosopher speaks of the danger of man being displaced by the machine, of man's estrangement from his own nature in the world of machines, and of the blow inflicted on humanism by technology. What depresses the author in this situation is that a person can no longer get rid of technology and is forced to adapt to it, to survive in a world filled with existential threats, in conditions in which he may lose his emotions and soul, when "the mental and emotional element is fading in modern civilization" [25, p. 23]. M. Heidegger, in "The Question Concerning Technology" (1954), like N. A. Berdyaev, speaks of the danger of thoughtlessly accepting the comforts that machines provide, without a philosophical and cultural understanding of the harm they can cause to a person. In this case, a person risks becoming a slave to machines, forgetting how to live without them and to do without their help. According to M. Heidegger, machines cannot have only a technological, instrumental essence: "...the essence of technology is not something technical at all" [26, p. 221]; to think so means to be deceived and to expose oneself to the danger of becoming dependent on machines. Many philosophers began to speak of the negative impact of technology on the public and personal spheres, and they did so long before such an impact took shape and became obvious. W. Sombart also suggests taking a critical look at the benefits brought by machines, since, in his opinion, the need for these benefits is produced by the very existence of machines [27, S. 305]. The destructive power of machines prompts many philosophers to make gloomy predictions about their role in a new type of world war, more technologically advanced and deadly than before. According to D. Bell, computers radically transform war and give it a new face; war changes "under the 'terrible' control of science" [20, p. 28]. A war waged with the use of AI can not only cause enormous damage to humanity but destroy it altogether. This thesis has become the most powerful argument in cultural-philosophical studies conducted from the critical perspective, whose number has been growing in recent years.
Conclusion

It is obvious that the humanistic, existential, and cultural essence is no less important for AI than the technological one. That is why representatives of the scholarly community realized the possibility and even the inevitability of the appearance of this phenomenon long before its scientific and technological realization and hastened to begin a discussion about the relationship between natural and artificial intelligence, about the likelihood of creating an AI that completely reproduces human consciousness, and about the dangers that AI may harbor. At all times, the basis of cultural and philosophical research in the field of technics, technology, and AI has been the desire to understand the essence of AI culture and the possibility of reproducing the thought process and human consciousness with the help of machines. Over the centuries of understanding AI, several points of view on this cultural phenomenon (angles, ways of seeing) have emerged. The interiorization perspective was the first. It represents an assessment by philosophers (such as T. Hobbes, G. V. Leibniz, R. Descartes, B. Spinoza, D. Hume, and others) of the experience of interaction between man and machine, the possibility of their equality or inequality, the organization of their coexistence, and the influence of machines on culture and on the essence of man. The phatic perspective is, in fact, also an interiorization one, but it focuses on issues of dialogue and human-machine communication: whether technology is capable of pushing a person into the background, displacing him from the communicative space, transforming the world in such a way that a person ceases to be its main communicant (G. Rickert, E. Toffler, J. Derrida, R. Barthes, M. Foucault, and others). The critical perspective of understanding AI culture turned out to be the most widespread in the studies of the twentieth and twenty-first centuries (N. A. Berdyaev, M. Heidegger, W. Sombart, D. Bell, and others), when the dangers and threats that come from AI and require a philosophical, cultural view became obvious. The prospects of the research conducted in this paper consist in applying the theoretical constructions of philosophers of various schools and historical periods regarding technics and AI to the current situation in the development of artificial intelligence, computers, and neural networks. Today we can see how accurate the visionary statements of representatives of the philosophy of culture about AI and about the spread of technics and technology in human and social life have proved to be, and we can evaluate the logical, cultural, and other reasons why philosophers have been able, over the centuries, to build various ways of understanding AI.

References
1. Aristotle. (1978). Works: in 4 vols. Vol. 2. Moscow: Mysl.
2. Alekseeva, I. Yu. (1993). Human knowledge and its computer image. Moscow: Institute of Philosophy RAS.
3. Ushkin, S. G. (2024). Leviathan 2.0: how did Thomas Hobbes predict the revolution of artificial intelligence and big data? Monitoring of Public Opinion: Economic and Social Changes, 1, 276–283. doi:10.14515/monitoring.2024.1.2525
4. Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.
5. Klyueva, N. Yu. (2017). The influence of Leibniz's ideas on the development of computer science and research in the field of artificial intelligence. Bulletin of Moscow University. Series 7: Philosophy, 4, 79–92.
6. Margolin, I. D., & Dubovskaya, N. P. (2018). Main stages of the development of artificial intelligence. Young Scientist, 20(206), 23–26.
7. Laufer, K. M. (2022). Descartes vs. Spinoza: is machine civilization possible? In: Century XXI. Digitalization: challenges, risks, prospects: materials of the international scientific and practical conference (pp. 13–21). Moscow: MIET.
8. Bryushinkin, V. N. (1990). Kant and "artificial intelligence": models of the world. Kant Collection: Interuniversity thematic collection of scientific works, 1(15), 80–89.
9. Kozolupenko, D. P. (2023). Between consciousness and intellect: the transcendental philosophy of I. Kant as a theoretical basis for research in the field of artificial intelligence. In: Transcendental turn in modern philosophy – 8: Metaphysics, epistemology, cognitive science and artificial intelligence: collection of abstracts of the international scientific conference (pp. 117–121). Moscow: RSUH.
10. Mikhailovsky, A. V. (2024). Werner Sombart as a critic of technology. Sociological Review, 23(2), 260–282. doi:10.17323/1728-192x-2024-2-260-282
11. Kostryukov, V. E. (2023). "The Death of the Author" in the era of artificial intelligence: the boundaries of man and the neural network. In: Man in the information society: collection of materials of the II international scientific and practical conference (pp. 855–858). Samara: SNIU.
12. Hobbes, T. (2022). Leviathan. Human nature. On freedom and necessity. Moscow: Azbuka.
13. Leibniz, G. V. (1984). Works: in 4 vols. Vol. 3. Moscow: Mysl.
14. Descartes, R. (1989). Discourse on the method of rightly conducting one's reason and seeking truth in the sciences. In: R. Descartes. Works: in 2 vols. Vol. 1 (pp. 250–296). Moscow: Mysl.
15. Spinoza, B. (1957). Treatise on the improvement of the mind. In: B. Spinoza. Selected works: in 2 vols. Vol. 1 (pp. 317–358). Moscow: Gospolitizdat.
16. Kant, I. (1994). Critique of Pure Reason. Moscow: Mysl.
17. Hume, D. (1996). Works: in 2 vols. Vol. 2. Moscow: Mysl.
18. Rickert, G. (1998). The sciences of nature and the sciences of culture. Moscow: Respublika.
19. Toffler, E. (1999). The Third Wave. Moscow: AST.
20. Bell, D. (2004). The coming of post-industrial society: A venture in social forecasting. Moscow: Academia.
21. Derrida, J. (2000). Writing and difference. St. Petersburg: Akademicheskiy Proekt.
22. Barthes, R. (1989). The death of the author. In: R. Barthes. Selected works: Semiotics. Poetics (pp. 384–391). Moscow: Progress.
23. Foucault, M. (1996). What is an author? In: M. Foucault. The will to truth: beyond knowledge, power and sexuality: collection (pp. 7–46). Moscow: Magisterium, Castal.
24. La Mettrie, J. O. de. (1983). Works. Moscow: Mysl.
25. Berdyaev, N. A. (1933). Man and machine (the problem of the sociology and metaphysics of technology). Path, 38, 3–38.
26. Heidegger, M. (1993). The question concerning technology. In: M. Heidegger. Time and Being: Articles and speeches (pp. 221–237). Moscow: Respublika.
27. Sombart, W. (1934). Deutscher Sozialismus. Berlin-Charlottenburg: Buchholz & Weisswange.