Philosophy and Culture
Basic questions of the philosophy of artificial intelligence

Belikova Evgeniya Konstantinovna

ORCID: 0009-0001-7575-024X

PhD in Cultural Studies

Associate Professor, Department of English Language, Lomonosov Moscow State University

119991, Russia, Western Administrative District, Moscow, Leninskie Gory str., 1, building 52

jkbelikova@yandex.ru
DOI:

10.7256/2454-0757.2024.1.69543

EDN:

PURYRC

Received:

08-01-2024


Published:

15-01-2024


Abstract: The object of this study is the philosophy of artificial intelligence, a distinct branch of scientific knowledge that has formed at the intersection of several humanities disciplines; its subject is the set of problems this branch considers. The author identifies the main traditional questions that the philosophy of artificial intelligence sought to answer in its earlier stages. At that time, these questions centered on whether artificial intelligence could become self-aware and learn to think, feel, and create like a human being. Since the 1960s, scientists have been interested in the problem of the equality of natural and artificial intelligence and in the ability of a computer to show benevolence or hostility toward its creators. The research was carried out using historical-philosophical analysis of the issues of the philosophy of artificial intelligence, the method of interpretive synthesis, and other methods; systemic-structural, dialectical, cultural-historical, value-based, and interdisciplinary approaches to the problem were applied. The scientific novelty of the research lies in identifying the problems of the philosophy of artificial intelligence that are relevant at the present stage of this field's development. It is noted that the dynamics of the philosophy of artificial intelligence are significantly influenced by the fact that "strong" artificial intelligence has not been created in several decades. This has transformed the research field of the philosophy of artificial intelligence, which now considers new questions: why it has not been possible to create "strong" AI that fully replicates human thinking; whether a computer can think, albeit differently from a human; how human and machine thinking differ; who should be responsible for the decisions and actions of artificial intelligence; and so on.
The questions facing the philosophy of artificial intelligence are constantly being updated.


Keywords:

artificial intelligence, philosophy, culture, information technology, computer, Internet, artificial intelligence philosophy, threats, thinking, consciousness

This article is automatically translated.

 

Introduction

Information technology, computers, the Internet, and related technological advances have become firmly embedded in modern life. They provide many convenient services and are actively used in numerous areas of human life and activity: data storage and retrieval, medical diagnostics, navigation, electronic (remote) sales, education, accounting, industry, and so on. The digital transformation of society under the conditions of modern technological civilization has affected everyone. Messaging systems, mobile banking and trading applications, satellite navigation, voice assistants, virtual assistants, chatbots, robot vacuum cleaners, e-ticket services, machine translation, online libraries, neural networks, and much more have become a familiar part of ordinary life. As more and more people come to rely on these technological advances, the psychological, emotional, and ideological crisis caused by such rapid changes grows ever deeper. Understanding the problems associated with information technology in people's perception also occurs in philosophy, where an integral scientific branch has developed – the philosophy of artificial intelligence.

 

Literature review

Computers, and the formation of artificial intelligence (AI) on their basis, attracted the attention of philosophers almost as soon as the development of information technologies and computer systems began. In the middle and second half of the twentieth century, foreign and Russian researchers representing various sciences, both exact and humanitarian, made significant contributions to the problem of the similarity between computer and human: J. McCarthy [1], A. Turing [2], J. von Neumann [3], N. Wiener [4], J. Searle [5], E. V. Ilyenkov [6], and others. It was in the work of J. McCarthy that the term "artificial intelligence" arose and was defined as "the property of intelligent systems to perform creative functions that are traditionally considered the prerogative of man" [1]. Scientists directed their attention to comparing human and computer thinking; to explaining whether it is possible to create an AI that could replicate human consciousness and possess its creative abilities; and to suggesting how the history of mankind would unfold if such an AI were created. As a result of the progressive development of such research in the 1960s and 1970s, the philosophy of AI emerged as an interdisciplinary scientific field grounded in philosophy (closely related to ontology and epistemology) and using its principles, categories, and methodology. Undoubtedly, the prerequisites for the philosophy of AI appeared much earlier, in the studies of ancient thinkers such as Aristotle devoted to the analysis of thinking, mind, consciousness, and the brain [7, p. 17].
The main question of the philosophy of AI in the second half of the twentieth century was: "Is AI capable of thinking, feeling, and creating like a human?" The factor prompting scientists to look for an answer was fear of rapidly developing artificial intelligent systems – the fear that, having reached the level of human thinking, they would quickly surpass humans and become dangerous sources of uncontrollable consequences for society. Over time, in light of modern technological advances, the range of questions in the philosophy of AI, and of their solutions, has been changing.

The purpose of the article is to identify issues that are relevant to the philosophy of artificial intelligence at this stage of the existence of this scientific branch, taking into account the current state and the latest trends in the development of AI.

 

The results of the study

The philosophical understanding of the problem of AI originates in the idea of the similarity between artificial and human intelligence. Initially, computers were designed to perform complex calculations; later, the ability to store large amounts of data was added to this function. Already at the very beginning of the development of information technology systems, it became obvious that in these two areas the capabilities of the machine significantly exceed those of humans, and that the superiority of computers is constantly growing. However, there were (and still are) many human abilities that have proved harder to reproduce in computers than computational skill and memory, and it is precisely these abilities that are central to human intelligence. Gradually, within the framework of the philosophy of AI, a list was developed of the human abilities, capabilities, and skills that a computing device must possess in order to be characterized as AI, including the abilities to:

– recognize human speech;

– execute commands given in natural language;

– analyze and compare data;

– make generalizations and draw conclusions;

– propose solutions to problems and tasks;

– identify and classify images;

– advise people;

– put forward creative ideas and develop ways to implement them;

– carry out creative activities;

– learn from accumulated experience;

– forecast the development of events;

– act under conditions of uncertainty;

– experience human feelings, etc.

According to these theoretical constructions, if a machine possesses these human abilities, it can be called AI. In the first decades after the creation of computers, scientists and laypeople alike expected the imminent appearance of an AI that would meet all of the above criteria and would therefore possess human consciousness. Representatives of the technical sciences, who personally observed how computer "thinking" gradually came to resemble human thinking, were the most confident in the possibility of creating AI. Thus, the American physicist and mathematician J. von Neumann was sure that there is no fundamental difference between human and machine intelligence, and hence that the emergence of full-fledged AI was a matter of the near future [3, p. 3]. The American mathematician N. Wiener was also convinced of its inevitability [4, p. 251].

Fears arose in wide public circles that, having become self-aware, AI would see man as its rival, become his enemy, and possibly destroy humanity. These fears spread in scientific circles as well: the American mathematician and science fiction writer V. Vinge declared that the creation of intelligence superior to human intelligence would occur before 2030 and would constitute a "threat to all human life" [8, p. 6]. In addition, fears of AI were vividly reflected in popular culture; the most famous example is the American film "The Terminator", released in 1984, which became a highly popular franchise. It depicts a future in which machines, having become aware of themselves and their power, have taken over the Earth and seek to exterminate humanity. However, as observation and experience make clear, no such AI has appeared to date.

The slowdown (or suspension) of progress toward creating AI has prompted researchers to formulate the idea of two types of machine intelligence. The best-known classification is that of J. Searle, which distinguishes "strong" and "weak" AI [5, p. 429]. A "strong" (universal, general) AI – one whose thinking is fully analogous to a human's and which is capable of being aware of itself as a person – has still not appeared and may be impossible in principle. But "weak" (narrow, applied) AI is developing ever more actively and is widely distributed: it freely performs many tasks on human command, significantly simplifies production, services, and everyday life, and serves the global virtual Internet environment, which is filled with a great variety of information, helping people in manufacturing, construction, education, and the like.

A "weak" AI is able to imitate emotions and to produce prose, poems, music, and paintings almost like a human (at least, the result cannot be distinguished from the result of human creativity), but so far this is only imitation – the reproduction of certain components of human activity and behavior according to embedded patterns and algorithms. Statements by scientists from different countries about "breakthroughs" in the creation of "strong" AI regularly appear in the modern communication space, but there is still no real evidence of any such breakthrough.

These are the conditions in which the philosophy of AI exists at the present stage of its development. Under the influence of significant technological transformations of society, the philosophy of AI – first of all, the range of questions it considers – cannot remain unchanged.

One of the most important questions, indeed the main question, facing the philosophy of AI now is: "Why has it proved impossible, for so long, to create a 'strong' artificial intelligence?" (A variant of the question: "Why does created AI not become 'strong' but remain 'weak'?")

Indeed, computers are penetrating human life ever more deeply, transforming its organization and significantly influencing the structure of society and its main social processes. As researchers note, "technological breakthroughs are carried out faster than their description" [9, p. 165]; that is, the philosophical understanding of the technological changes taking place in society cannot keep pace with them. A person has already largely lost the secrets of private life: in a world organized by information and communication technologies, information about a person is available both to computer systems themselves and to the people who control them, and facial recognition and other surveillance programs make it difficult to remain unnoticed. And yet there remains some invisible line, a boundary that artificial intelligence cannot cross. For several decades now it has not become aware of itself as a person, or of man as its equal and rival in this world. Why is this happening?

Various representatives of the philosophy of AI are trying to answer this question. One of the most common answers is the following: "strong" AI cannot be created because "natural and artificial intelligence have different natures" [9, p. 161]. There are features of human consciousness that cannot be reproduced technologically or artificially; such features include dynamism, the ability to develop, and historical, social, and cultural conditioning [Ibid.]. Here we also see an attempt to answer another question of the philosophy of AI: "Is the nature of human and artificial intelligence the same?" This question is closely related to the main one: if the nature of intelligence is the same, then a "strong" AI will be created sooner or later; if it is different, then the creation of such an AI is fundamentally impossible.

Another way to explain why "strong" AI is in no hurry to emerge relies on the connection of the human mind with its embodiment. Back in the second half of the twentieth century, the Chilean biologist and philosopher F. Varela noted that human consciousness depends very strongly, sometimes unexpectedly, on its bodily nature [10]; on this view, the absence of a body in an artificial mind slows its development and even rules out the formation of a "strong" AI. This argument is also found in the work of contemporary philosophers [9, p. 166]. Without a body, with its own reactions, its ways of knowing the world, and its methods of developing the sensory and intellectual sides of world perception, AI cannot fully function as an independent, self-developing thinking system.

Proponents of determinism, who bring biology and philosophy together and affirm that all processes of human development are predetermined by biological laws, hold that the creation of AI will become possible only when the computer mind develops under conditions of free will, when it must "constantly make decisions on the verge of the biological and the spiritual" [11, pp. 110-111]. Although people throughout history have not always had the opportunity to exercise free will, human development itself rested on the idea of free will and the possibility of choice, which contributed to the formation of human consciousness in its modern form. The absence of such conditions in the development of AI, determinists believe, hinders its development and the formation of human-like consciousness in it.

The lack of success in creating "strong" AI has strengthened the position of proponents of the theory of the divine origin of man. If man was not created by God, then the world is material and fully knowable; in that case, the creation of a "strong" AI that completely replicates human intelligence should be possible. However, AI similar to the human mind stubbornly fails to appear, which, on this view, speaks in favor of humanity's creation by higher powers. A person who is confident that he will certainly create an AI places himself on a par with God. Modern Orthodox scholars are confident that the creation of "strong" AI is impossible precisely for this reason [12].

There is also the opinion that it has not been possible to create "strong" AI because "modern science has not yet reached the level necessary to solve such a problem" [13, p. 10]. Human consciousness has turned out to be a more complex system than was first thought, and at the moment science does not yet possess the knowledge necessary to create AI. This is a thoroughly materialist idea, which does not deny the possibility of creating AI but merely pushes it back in time.

The question of why it has not been possible to create a "strong" machine mind is the main question in the philosophy of AI in the modern period of this science's development; however, there are several more questions to which philosophers are trying to find answers.

Given the remarkable successes that "weak" AI has achieved in the modern world, one cannot avoid the pressing philosophical question: "Can a computer think?" The answer is not unambiguous and largely depends on what is meant by thinking. If we speak of thinking as a human quality, then the machine, of course, does not yet possess it; but if we allow that thinking may take other, more primitive forms, then its presence can be recognized in "weak" AI. As modern researchers put it, "although the thinking of a machine cannot be called thinking in the literal sense, it still thinks" [14, p. 366]. Such thinking can be considered "conditionally rational" [7, p. 24]. This means that computer thinking has certain boundaries that "weak" AI is unable to overcome in order to approach the level of thinking available to humans.

Another question raised in modern research on the philosophy of AI is: "Who should be responsible for the decisions made by artificial intelligence?" Even in its current form, AI is capable of much and can perform a wide variety of actions, some of which can lead to ambiguous consequences. Scientists recognize that AI systems, "even without consciousness, are a powerful tool with enormous capabilities" [15, p. 68]. The answer to this question, in fact, lies on the surface. If an action is performed by a "weak" AI, then responsibility lies with the person who gave the machine a particular instruction and installed the corresponding program. If a "strong" AI is created, the question of responsibility will immediately become irrelevant, since what will then be at stake is the survival of humanity or, at the very least, its life on new terms alongside another mind. The question of responsibility is so complex that it even stimulates conspiracy theories. Some in the scientific community suggest that scientists (including those commissioned by giant IT corporations) intentionally endow computers with subjectivity and speak of the potential independence of AI systems in order to remove "responsibility from the person who uses this technology" in the event of negative consequences [16, p. 44].

Looking ahead, the philosophy of AI takes into account the prospects for the development of computer intelligence and related technologies. There is no doubt that more and more robots and robotic and automated systems are appearing around humans, and that in the future there will be even more of them – for example, driverless taxis and robot cleaners, couriers, pets, sellers, and companions. The philosophy of AI is therefore beginning to raise questions that will be relevant in the near future: "Will beings with AI have rights?", "What will their relationship with humans be?" [15, p. 72], "Should autonomous machines be considered persons?" [17, p. 32]. The answers to these questions matter both if a "strong" AI is created and in the use of a "weak" one. The solutions remain speculative (as a rule, researchers conclude that an AI system will not be a person [Ibid., p. 33]), but the very fact that these questions are posed and analyzed contributes to the development of a philosophical understanding of the problems of AI.

The philosophy of AI is a dynamic philosophical current, actively developing under the influence of modern science and technology and responding vividly to innovations in the subject of its research – artificial intelligence. Thus, scholars speak of the formation of a more specialized variety of it – the philosophy of neural networks, which constitute the core of AI [7, p. 17]. Artificial neural networks are built on the model of natural ones; they imitate and simulate the processes occurring in the neurons of the human brain. Their operation is organized on the basis of AI and most closely resembles human thinking, but it still does not lead to the creation of "strong" AI. The complexity of artificial neural networks has been growing rapidly in recent years, and the logic of their decisions is becoming more intricate, which calls for philosophical reflection.

 

Conclusion

With the emergence of the philosophy of AI as a separate scientific discipline, the focus of attention on the problem of computer intelligence shifted from the technical sciences to philosophy. The philosophy of AI, which arose in the 1960s and 1970s, sets itself the task of understanding the relationship between man and machine and between human and machine intelligence. At the same time, the questions facing the philosophy of AI are not fixed but change depending on the current state of artificial intelligence itself. In modern conditions, when a "strong" machine mind has not been formed for several decades, the philosophy of AI considers why "strong" AI is in no hurry to appear; whether a machine can think (and if so, how); whether human and computer thinking are similar; who is responsible for decisions made by AI; whether automated beings with AI will have rights; whether they are persons; and so on. In this study we have tried to review the answers science gives to these complex questions; undoubtedly, however, their solution requires further in-depth and careful research. The analysis of these complex issues, and philosophers' awareness of the need for it, serve as factors in the development of the philosophy of AI as a separate interdisciplinary scientific field.

The philosophy of AI is a dynamic branch of philosophical knowledge, the development of which is largely determined by the transformations of modern information and communication technologies. The prospects of the philosophy of AI, and its problems themselves, are shaped by the constantly emerging advances in artificial intelligence technology.

The introduction of AI into human life is so important for the development of society as a whole that this process and its consequences are being comprehended not only within philosophy. Sociologists will have to determine how AI will affect unemployment; psychologists, how a person's self-esteem and sense of identity will change under the influence of AI; and so on. Each of these areas of research has a philosophical component, which makes the philosophy of AI an important and promising area of modern humanities research.

References
1. McCarthy, J. (1960). Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I. Communications of the ACM, 3(4), 184–195. Retrieved from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111.8833&rep=rep1&type=pdf
2. Turing, A. (1950). Computing Machinery and Intelligence. Mind LIX, 236, 433–460.
3. Neumann, J. von. (2022). The Computer and the Brain. Moscow: AST.
4. Wiener, N. (1968). Cybernetics, or Control and Communication in Animals and Machines. Moscow: Sov. radio.
5. Searle, J. R. (1980). Minds, brains and programs. Behavioral Sciences, 3, 415–431.
6. Ilyenkov, E. V. (1991). Philosophy and culture. Moscow: Politizdat.
7. Vorobyov, A. V., & Kudinov, V. A. (2021). History of the philosophy of neural networks as the core of artificial intelligence. Problems of onto-epistemological justification of mathematical and natural sciences, 12, 17–27.
8. Vinge, V. (2022). Singularity. Moscow: AST.
9. Grigoriev, A. D., Shemanov, K. A., & Kirillov, G. M. (2020). The problem of artificial intelligence in philosophy: the border between human and machine consciousness. Skif. Questions of student science, 1(41), 161–166.
10. Varela, F. (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge: MIT Press.
11. Lazovsky, A. I. (2023). Determinism and free will in biology and human philosophy as a prerequisite for the creation of free consciousness in artificial intelligence. Bulletin of the Armavir State Pedagogical University, 3, 110–117.
12. Deryabin, N. I. (2022). Strong artificial intelligence of God. In Development of science, society, education in modern conditions: monograph. Z. A. Vodozhdokova, A. V. Volchkov, E. Yu. Golubinsky and others (pp. 224–237). Penza: Science and Enlightenment.
13. Penrose, R. (2003). The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Moscow: Editorial URSS.
14. Bashmakov, D. A., & Malkovsky, S. S. (2021). Philosophy of artificial intelligence. Academic journalism, 4, 362–373.
15. Georgiu, T. S. (2022). Philosophy of automation and artificial intelligence: from the mythological Talos to future cyborgs. Bulletin of the Moscow State Regional University. Series: Philosophical Sciences, 1, 68–75. doi:10.18384/2310-7227-2022-1-68-75
16. Dubrovsky, D. I., Efimov, A. R., Lepsky, V. E., & Slavin, B. B. (2022). Fetish of artificial intelligence. Philosophical Sciences, 65, 1, 44–71. doi:10.30727/0235-1188-2022-65-1-44-71
17. Mets, A. (2022). Can artificial intelligence become a member of the society as an autonomous personality? Journal of the Belarusian State University. Philosophy. Psychology, 1, 32–41.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The subject of the research in the article submitted for publication in the journal Philosophy and Culture, as the author indicates in the title ("Basic questions of the philosophy of artificial intelligence"), is a certain set of questions that, in the author's opinion, can be considered fundamental to the philosophy of artificial intelligence (AI). The title also indicates the object of research – the philosophy of artificial intelligence – to which the article is devoted. In the introduction, the author characterizes the object of study as follows: "Understanding the problems associated with information technology in people's perception also occurs in philosophy, where an integral scientific branch has developed – the philosophy of artificial intelligence." In the statement of the article's aim, the author explains the subject of his attention: "to identify issues that are relevant to the philosophy of artificial intelligence at this stage of the existence of this scientific branch, taking into account the current state and the latest trends in the development of AI." What is immediately striking is the author's categorical classification of the philosophy of AI as a scientific branch, and the contentious premise of considering it "taking into account the current state and the latest trends in the development of AI": on the one hand, a significant part of the questions of the philosophy of AI rest on doubt about the very possibility of realizing the idea of AI, while on the other, scientists in the AI technology industry are compelled to reject most of the abstract constructions of the philosophy of AI because breakthrough AI technologies are not identical in nature to natural intelligence. The basic reason for denying attempts to find an identity between AI and its natural prototype is its different functional load – processing disproportionately larger arrays and streams of data at a time.
Even at the level of the technological chains of "weak" AI, which are increasingly used today for decision-making in almost all branches of human activity, this basic distinguishing feature is noticeable. Therefore, the question of why "for several decades a 'strong' machine mind has not been formed," which, according to the author, the philosophy of AI considers as the question of why "'strong' AI is in no hurry to appear," can be formulated more radically: does a "strong" AI need a person to notice its appearance, and what are the criteria for distinguishing a world with AI from a world without it? Might a "strong" (autonomous) AI be so different from the natural mind that a person would not even notice its presence? Taking these considerations into account, the reviewer notes that classifying the set of questions identified by the author as the basic questions of the philosophy of AI remains debatable: some of the most pressing questions are, in the reviewer's opinion, overlooked by the author. The author's claim to have considered the subject of the study "taking into account the current state and the latest trends in the development of AI" remains poorly argued. And the classification of the philosophy of AI as a scientific branch does not stand up to criticism: it is a false judgment that ignores the existing lines of demarcation between philosophy and science. The author rightly notes in the final conclusions that "the philosophy of AI is a dynamic ... branch of philosophical knowledge, the development of which is largely determined by the transformations of modern information and communication technologies"; however, this dynamics is not traced in the main part of the study. In formulating the "basic" questions of the philosophy of AI, the author has remained at the stage of its formation in the 1960s and 1970s.
To date, in the reviewer's view, the use of "weak" AI technologies in science is driving breakthrough achievements in the near future in the field of technological convergence (the so-called NBIC, GNR, GRIN, GRAIN, BANG, etc.), which sharpens the philosophical problems of the possible achievement of immortality or a significant extension of qualitative longevity, as well as the problems of the fair distribution of technological achievements, the significant restriction of civil liberties by machines, transhumanism, new ethics, and so on. It can thus be considered that the author has revealed the subject of research at a certain theoretical level, granting the author's right to defend his own interpretation of the most pressing basic questions of the philosophy of AI and the reader's right to disagree with that interpretation. The research methodology is based on Alan Turing's approach, which posits a correspondence between the inputs and outputs of AI and human abilities; it does not take into account a number of counterarguments, including both the fundamental denial of the identity of AI with its natural analogue and modern trends toward replacing poorly understood aspects of the functioning of natural intelligence with specific machine procedures capable of simulating particular similarities. The author's selection of literature is also questionable: it disregards a substantial body of specialized research (in 2022 there were more than 800 foreign journals at the Q1 level alone, not counting numerous scientific and philosophical conferences, while the foreign literature cited by the author consists either of recent Russian-language translations or of works dating to 1950, 1960, 1980, and 1991). Thus, on the whole, the methodology is subordinated to a subjective interpretation of one part of a specialized philosophical discourse.
The relevance of the chosen topic is well argued by the author in the introduction and is beyond doubt. The scientific novelty of the research remains questionable owing to the author's disregard of the distinctions between philosophy and science. The style of the text is scientific. The structure of the article generally corresponds to the logic of presenting the results of scientific research. The bibliography as a whole covers the problem area of the research, although it suffers from ignoring a large volume of modern foreign literature on the topic from the past 3-5 years. The engagement with opponents is correct and sufficient. The article is of particular interest to the readership of the journal "Philosophy and Culture" and can be recommended for publication. The reviewer's comments are of a debatable nature, and the author has the right to disagree with them.