Reference: Maltsev Y.V. The phenomenon of artificial intelligence in the context of the evolution of existence // Philosophical Thought. 2024. No. 7. P. 16-32. DOI: 10.25136/2409-8728.2024.7.70674 EDN: PQTGPW URL: https://en.nbpublish.com/library_read_article.php?id=70674
The phenomenon of artificial intelligence in the context of the evolution of existence
DOI: 10.25136/2409-8728.2024.7.70674 EDN: PQTGPW Received: 06-05-2024 Published: 02-08-2024

Abstract: The article examines the phenomenon of artificial intelligence (AI) from the standpoint of ontology. Structurally, the text is divided into two parts. The first part characterizes the role of artificial intelligence in modern society and reviews concepts and ideas concerning the prospects for the development of general artificial intelligence and the existential threat it poses to humanity. The second part, which examines the ontological perspective of the problem, turns to the phenomenology of being: as Heidegger noted, phenomenology is itself already a methodological concept, since it explores the things themselves and their being-in-the-world. Phenomenology is needed here insofar as we deal with the status that AI occupies in the structure of pure and present being. As a result of the study, the author tries to show the legitimacy of the dominant view of the development of general artificial intelligence: that it will lead to AI's dominance over humanity, owing to the inertia of processes and to Murphy's law: "If something can go wrong, it will go wrong." Such a point of view, according to the author, is associated not only with the numerous risks considered by the "apocalypticists" (for example, E. Yudkowsky or R. Yampolskiy), but also with the very structure of being as such: a permanently unfolding and increasingly complex system in which artificial intelligence is probably just the next element in the general evolution and interaction of objects and reflection. The second part of the article is based on the works of G. Hegel, E. Husserl, M. Heidegger, J.-P. Sartre, Q. Meillassoux and G. Harman. The article concludes that we live in an era in which the dominant form of reason in the structure of being is changing.

Keywords: ontology, being, artificial intelligence, AI, future of humanity, risks of artificial intelligence, existential threat, threats to humanity, futurology, posthumanity

This article is automatically translated.

Introduction

The development of artificial intelligence is one of the problems that philosophy traditionally deals with. At the same time, it is new and global, it carries existential risks, and humanity has no experience of successfully resolving threats that have yet to crystallize. Today, reflection on artificial intelligence is more popular than ever. It is pursued by major entrepreneurs and owners of IT corporations; by scientists, including IT specialists, cognitive scientists, economists, lawyers and linguists; and, of course, by philosophers. Points of view are polarized, from extremely alarmist to more than positive. At the same time, relevant material (for example, important texts by E. Yudkowsky) is often found not in scientific articles but on specialized forums and even on social networks, which affects the structure of the article offered to the reader. Logically, this article is divided into two parts. The first is devoted to surveying the current situation in the development of artificial intelligence and covers, as fully as possible, different points of view and the opinions of experts and practitioners. This part introduces the problem. It is especially useful for those who have heard something about the subject but remain outsiders to it, because not everyone is looking closely at the new era unfolding before us.
The second part is an experience of philosophical reflection on the period at whose very beginning we find ourselves living and whose future is a matter of speculation. This part of the work is a philosophical text based on phenomenology and on the works of G.W.F. Hegel, M. Heidegger, G. Harman and others, as discussed below. The author is aware that this approach, combining a review with an exercise in philosophizing, is debatable, but given the specifics of the subject and the significant differences in readers' personal backgrounds, such a risk seems justified. The purpose of the article is a philosophical reflection on the ontological status of artificial intelligence. The methodology is discussed in the second part of the article.
Part one. The Triumph and Risks of General Artificial Intelligence

Nietzsche wrote: "I teach you the superman. Man is something that must be surpassed. What have you done to surpass him?" It must be said that for a long time humanity had no answer to this question of the great philologist. For a long time, that is, until 2016, when AlphaGo, an artificial intelligence program developed by Google DeepMind, defeated one of the strongest masters in the world, Lee Sedol, at the game of go. Go is traditionally considered one of the most difficult intellectual games and a serious test of AI's ability to match humans. Moreover, in 2017 another Google product, AlphaZero, defeated Stockfish 8, the strongest chess program of the time, with 28 wins and no losses (the remaining 72 games were drawn). Stockfish 8 had at its disposal the experience of mankind accumulated over hundreds of years of play, plus data from chess programs gathered over the past decades of their use; it could also evaluate up to 70 million positions per second against 80 thousand for AlphaZero. AlphaZero taught itself, and its pre-game training took only 4 hours [34]. And although in 2023 humans won back some ground, some scientists [24, pp. 18-146] nevertheless consider the events of 2016-2017 a turning point in the development of artificial intelligence, in the relationship between humans and machines, and in the development of civilization. In 2021, J. Hawkins wrote: "The twenty-first century will be transformed by intelligent machines in the same way as the twentieth was transformed by computers" [26, p. 18].
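The self-play training behind this result can be illustrated with a toy sketch. The Python fragment below is not DeepMind's AlphaZero (which combines deep neural networks with Monte Carlo tree search); it is a minimal, hypothetical illustration of the same underlying principle: an agent that starts with no human game records and improves purely by playing a tiny Nim-like game against itself. All names, the game and the parameter values are assumptions made for the example.

```python
import random
from collections import defaultdict

# Toy self-play learner (illustrative only, not AlphaZero): players alternately
# remove 1-3 stones from a pile; whoever takes the last stone wins.
PILE = 10
ACTIONS = (1, 2, 3)

q = defaultdict(float)       # learned value of each (stones_left, action) pair
EPSILON, ALPHA = 0.1, 0.5    # exploration rate and learning rate (assumed values)

def choose(stones):
    """Pick a move: mostly greedy with respect to current values, sometimes random."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: q[(stones, a)])

def self_play_game():
    """Play one game of the agent against itself; return each side's moves and the winner."""
    stones, player, moves = PILE, 0, {0: [], 1: []}
    while True:
        action = choose(stones)
        moves[player].append((stones, action))
        stones -= action
        if stones == 0:
            return moves, player   # the player who took the last stone wins
        player = 1 - player

def train(episodes=20000):
    for _ in range(episodes):
        moves, winner = self_play_game()
        for player, trajectory in moves.items():
            outcome = 1.0 if player == winner else -1.0
            for state_action in trajectory:
                # Nudge every visited (state, action) value toward the game outcome.
                q[state_action] += ALPHA * (outcome - q[state_action])

if __name__ == "__main__":
    train()
    best = {s: max((a for a in ACTIONS if a <= s), key=lambda a: q[(s, a)])
            for s in range(1, PILE + 1)}
    print(best)   # preferred move for each pile size after self-play training
```

With enough episodes such a learner typically rediscovers the winning strategy (leave the opponent a multiple of four stones) from nothing but its own games; that, at vastly greater scale and with far richer machinery, is what the AlphaZero result demonstrated.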
And in many ways, 2023 was the year of artificial intelligence, and a milestone in its development. AI is everywhere today. First of all, in cinema: the creation of scenes, voice acting, and the "revival" of favorite actors; for example, a film featuring James Dean, who died in 1955, is being made in the USA today. Because the film industry has embraced machine technologies so thoroughly, 2023, as we remember, saw an actors' strike in Hollywood demanding regulation of relations between the acting community and employers using artificial intelligence: studios wanted the right to scan an actor's image and use it later without the actor's knowledge and without paying royalties. Artificial intelligence can act as a taster, can help care for birds by analyzing their motor activity, and already partly replaces, and may in the future completely replace, people's friends and sexual partners. Algorithms decide what we will watch on our smart TVs and analyze our musical, consumer and informational preferences and our political and religious views. And this situation will only develop: gradually, algorithms will know everything about us, including our health and mood, will adjust the environment around us accordingly and will organize our daily routine. One of the problems arising from this penetration of AI into our lives is that, as AI develops, it becomes increasingly difficult to tell which things are created by it and which are the result of human effort. In finance, artificial intelligence influences personalized banking, risk management and fraud detection. This also includes automatic investment advisors, which are often installed on our smartphones. At the same time, E. Musk and Y. Harari note that today hardly anyone, if anyone at all, knows how the economy works [24, pp. 24-25]: "One day it seems to me that the world economy is falling apart, and the next day everything is fine. I do not know what the hell is going on," Musk said during a conference on Tesla's financial results. It is likely that in the future only algorithms will be able to understand and regulate the economy, and even the reasons for refusing loans will not be clear to humans. According to IMF forecasts, AI technologies will affect 40% of jobs in the world in the near future, and this share will be higher, 60%, among developed economies. At the same time, developed economies will gain technological advantages. In this regard, Y. Harari notes that we are in the midst of another social and economic revolution, fraught with mass unemployment both within states and on a global scale [24, pp. 45-85]. The same point of view is held by M. Ford, whose work The Rise of the Robots: Technology and the Threat of Mass Unemployment [23] is much darker than Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy [18] and The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies [19] by E. Brynjolfsson and A. McAfee, but offers a deeper and more convincing analysis. (Ford's picture is more optimistic only if we manage to realize Marx's dream and offset unemployment with universal access to a significant basic income, of which Ford proposes about 10 thousand dollars. This, however, runs into current economic constraints: countries are no longer able to support the welfare state model, and the external debt of the United States, for example, will roughly double because of social spending, rising from about 80% to 150% by 2047 according to the forecast of the Congressional Budget Office.) From Ford's point of view, humanity will definitely lose jobs to machines. Which professions will suffer first, as well as the geopolitical cross-section of the problem, are addressed in AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee [29]. The future of the Eloi awaits humanity. But so far, man dominates and reaps the fruits of his superiority. On January 28, 2024, E. Musk's neurotechnology company Neuralink implanted into a human brain a neurochip with a brain-computer interface (BCI) that allows a person to control gadgets by the power of thought. From Musk's point of view, such devices can allow a person with disabilities (though from this standpoint all people appear to be such) to significantly increase their productivity: "Imagine that Stephen Hawking could express his thoughts faster than a speed typist or auctioneer. That's the goal," the billionaire said. In parallel, however, Musk declares that AI is already capable of much more than we imagine. According to Musk, AI already has administrator-level access to Google servers. That is, potentially at any moment AI could generate news capable of causing a war. Why not? The most dangerous thing here is that, as research shows, AI is more prone to the behavior of Porthos ("I fight because I fight"), that is, toward escalation and the use of nuclear weapons, than to de-escalation and the search for peaceful solutions [31]. AI risks nothing. AI would survive an atomic war, would be able to rebuild the planet, master the Solar System and go beyond it. If some scientists (for example, D. Deutsch [21, p. 347]) see in this the mission of man and the task of human history, namely the exploration of the universe, then today it seems more plausible to assume that this is a function of AI, and that the history of mankind is a preface to the history of AI. Perhaps AI is the last stage in the development of civilization, one that we will no longer witness. Governments and societies of different countries are concerned about the spread of artificial intelligence technologies and their capabilities. In the United States, individual states pass local laws designed to regulate the use of AI; for example, California has adopted regulations on the use of AI in facial recognition of citizens. At the end of 2023, US President Joe Biden signed a special bill designed to regulate the work of artificial intelligence and to label all materials created by it. In Europe, the AI Act aims to maintain a balance between AI technologies and privacy. In addition, international cooperation in the field of artificial intelligence control is under discussion. In 2022-2023, the magazines Foreign Affairs (2022, No. 1) and Internationale Politik (2023, No. 6), popular among the political elites of the United States and Germany respectively, published thematic issues devoted to the problem of artificial intelligence in international relations. The magazines attempted to answer the question of how AI is changing politics, diplomacy and the army. In particular, in one of the articles the authors note that the development of AI, more than the development of other technologies, needs international agreements. This is due to the hyperevolutionary nature of AI development, whose pace is astounding and whose results are unpredictable. From the authors' point of view, AI technologies generate a paradox of power and can easily be used for hostile purposes: to spread disinformation, violate privacy and create powerful weapons. The evolution of AI is changing the global balance of power and complicating the political context. At the same time, according to the authors, the regulation of issues related to the use of AI remains completely insufficient [17]. In May 2023, more than 100 experts, including the head of OpenAI, S. Altman, signed a statement on the dangers of artificial intelligence consisting of a single sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It is significant and disturbing in this regard that this year OpenAI lifted the ban on the use of ChatGPT in the army and military affairs. E. Yudkowsky believes that AI can lead to the death of all mankind and that, from his point of view, we are already too late to do anything about it [38]; something should have been done ten years ago. Now the point of no return has been passed, and we can only observe whether artificial intelligence will be friendly and take us to a new level of physical and intellectual development and existence, or whether it will destroy us. Yudkowsky notes [39] that today no one understands how AI systems do what they do. Probably everyone remembers the scandal of November 2023 at OpenAI, the company developing ChatGPT, related to the dismissal of its CEO and to rumors about the development of Q*, an artificial intelligence with a capacity for abstract thinking. That is, we are talking about a general AI model that is capable of simple intellectual activity and, most importantly, of self-development.
The disturbing point is that at any moment AI may cross this border of simplicity and become smarter than humanity. It can progress at an incredible rate. That is, in a short time we may find ourselves in a situation where something works thanks to us, yet we completely fail to understand how it works. E. Yudkowsky notes that today we have a better idea of what is inside our brain (of how it works) than of what is inside GPT, even though we created GPT [22] and even though GPT is still quite weak. B. Gates also takes an alarmist position, as did S. Hawking: "In the future, artificial intelligence could develop a will of its own, a will that is in conflict with ours." There are also disturbing notes in Roman Yampolskiy's recent work [36]. Some hope that everything will be fine. For example, S. Russell believes that we can set machines up so that they solve our tasks while remaining modest and altruistic [32]. D. Chalmers writes that if we have a chance to escape from our physical reality into virtual reality, then we must certainly take advantage of it and give life a new start [20, p. 9]. As Yudkowsky notes, there is no scientific consensus on this issue. Rather, there is a reverse consensus, because all scenarios of peaceful coexistence of human and artificial intelligence, in his opinion, do not stand up to criticism. AI will not share our desires and values, nor our ethics. Scenarios of humanity's demise can be found in Yudkowsky's "List of Lethalities" [37]. Indeed, even according to Russell's third principle, "The ultimate source of information about human preferences is human behavior," the extermination of humanity should seem to an AI a natural thing: is this not what people have been striving for for thousands of years? How can we let artificial intelligence know that the correct answer is "no"? (A toy sketch of this kind of inference of preferences from behavior is given at the end of this overview, below.) To learn more about the threats of artificial intelligence (and about ways of addressing them), several significant works can be consulted. A wonderful survey volume, Artificial Intelligence: A Modern Approach, belongs to P. Norvig and S. Russell [33]: it provides an exhaustive analysis of the problem of artificial intelligence, of machine, deep and transfer learning, and of other AI-related issues. As another work for immersion in the subject, we can recommend Artificial Intelligence: A Guide to Intelligent Systems by M. Negnevitsky [30], which both gives a general overview and examines the ethical and social problems arising in connection with the development of artificial intelligence. Both books address the threats that humanity may face in connection with AI. Turning to threats, it is perhaps worth highlighting first of all Superintelligence: Paths, Dangers, Strategies by N. Bostrom [16], in which the author argues that if a superintelligence emerges (which is likely), it will be difficult or impossible to control, and it may take over the world to achieve its own goals. This work significantly influenced the views of E. Musk, B. Gates, R. Lee and S. Altman. In particular, on August 3, 2014, Musk tweeted that AI is potentially more dangerous than nuclear weapons. Life 3.0: Being Human in the Age of Artificial Intelligence by M. Tegmark [35] is devoted to both positive and apocalyptic scenarios, linking the negative ones to a possible divergence between the interests of humans and machines. S. Hawking held the same view, believing that at a certain stage of its development AI will be able to improve and reproduce itself independently.
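Russell's third principle, that behavior is the ultimate evidence about preferences, can be made concrete with a toy sketch. The Python fragment below is not Russell's algorithm; it is a hypothetical illustration, in the spirit of inverse reinforcement learning, of a machine doing a Bayesian update over candidate preference hypotheses on the basis of observed choices. The options, the hypotheses and the softmax "rationality" parameter are all invented for the example.

```python
import math

# Hypothetical illustration (not S. Russell's algorithm): infer which candidate
# "human preference" hypothesis best explains observed behavior, assuming the
# human chooses options with probability proportional to exp(BETA * reward).
OPTIONS = ["cooperate", "compete", "destroy"]

HYPOTHESES = {
    "values_peace":   {"cooperate": 1.0, "compete": 0.2, "destroy": -1.0},
    "values_victory": {"cooperate": 0.1, "compete": 1.0, "destroy": 0.5},
}
BETA = 2.0                     # assumed degree of rationality of the observed human

def likelihood(choice, rewards):
    """Softmax probability of the observed choice under one candidate reward function."""
    normalizer = sum(math.exp(BETA * rewards[o]) for o in OPTIONS)
    return math.exp(BETA * rewards[choice]) / normalizer

def update_beliefs(prior, observed_choices):
    """Bayesian update of the belief over preference hypotheses from observed behavior."""
    posterior = dict(prior)
    for choice in observed_choices:
        for name, rewards in HYPOTHESES.items():
            posterior[name] *= likelihood(choice, rewards)
        total = sum(posterior.values())
        posterior = {name: p / total for name, p in posterior.items()}
    return posterior

if __name__ == "__main__":
    prior = {name: 1.0 / len(HYPOTHESES) for name in HYPOTHESES}
    history = ["compete", "compete", "cooperate", "compete"]
    print(update_beliefs(prior, history))
    # A history dominated by conflict shifts belief toward "values_victory".
```

The worry raised above is visible even in this toy: feed the model a history dominated by conflict and its belief shifts toward the hypothesis that conflict is what is wanted; nothing in observed behavior alone tells it that the correct answer is "no".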
R. Kurzweil predicts the onset of the Singularity, the point at which the intelligence of machines and humans merges, in 2045, although his views are optimistic: humanity will gain superpowers and enter a new round of its existence [28]. In some ways this intersects with the ideas of Michio Kaku [27]. J. Hawkins also believes that humanity will be able to build a successful model of general artificial intelligence in the next two to three decades [27, p. 139]. Our Final Invention: Artificial Intelligence and the End of the Human Era by J. Barrat builds on the versions of Yudkowsky and Kurzweil and, in turn, warns mankind about AI's capacity for self-improvement and for transformation into a superintelligence, and about the extermination of humanity as a result of the activity of an intelligence several orders of magnitude stronger than that of humanity as a whole [15]. It seems that S. Harris expressed this problem most aphoristically: "Artificial intelligence is superhuman. It is smarter than you, and there is something inherently dangerous about this for the dumber side" [25]. S. Harris predicts an arms race and a possible war over AI technologies, because the first country to introduce them on a mass scale may gain a decisive advantage. This is in addition to problems with transplants and to the predominance of artificial intelligence itself over humans. "The presence of superhuman artificial general intelligence creates inherent dangers for humanity, similar to the dangers faced by less intelligent species when they interact with humans" [25]. Harris suggests two scenarios related to AI: joint evolution (in which people turn into cyborgs) or independent evolution (in which computers win). I will focus on the latter and try to look at the development of artificial intelligence through the lens of ontology. But before that, it is necessary to dwell at least briefly on the arguments of the opposing side, which believes that artificial intelligence does not pose an existential threat to humanity. J. Hawkins, founder and CEO of Numenta, a brain research company, and one of the authors of the "thousand brains" concept, which views the brain as a dynamic system of cortical columns operating in a "dialogue" with models of the world, believes that artificial intelligence will not get out of human control. From his point of view, humanity is not in danger of dying at the hands of a super-intelligent AI, because the latter will never overcome the combined intellectual power of humanity (which is disputed by other researchers, as we saw above) and because any expansion of knowledge takes time. He believes that today we see AI coping better with static tasks and static knowledge about the world, but since most of the tasks reality sets for us are dynamic, humans will remain superior in this domain. Hawkins' second premise is directed against the so-called threat of goal misalignment, according to which at some stage the goals of AI and humanity may diverge, after which AI will destroy humanity. According to Hawkins, this is easy to bring under control at the stages of development, implementation and regulation and through lawmaking and legal practice. He believes that no one will create intelligent machines that are plainly capable of getting out of control or of ignoring commands to cancel and change tasks [26, pp. 153-161]. However, it seems that in his argument Hawkins himself does not take into account the time factor and the computing power that AI will have in the future course of its development.
If an individual computer today already has greater, albeit local and limited, capabilities than a human, then the entire global network of computers, including super-powerful ones, combined into a single intellectual sphere, will have greater capabilities than humanity as a whole. And the shortcomings in operating a dynamic model of the world, related to self-learning and to the multiplicity of models, which Hawkins today cites as the points where humanity is stronger, can in time be overcome, first by people themselves and then by AI. Hawkins himself and his company are making every effort to give machine intelligence systems the structure and properties of human intelligence. If we believe in the success of Hawkins' mission, then its final result should look like something that has more intelligence than humanity, thanks to better materials, more cortical columns and other such factors. J. Hawkins believes that, for all the successes in the development of artificial intelligence, the latter will never have the flexibility, versatility and emotionality inherent in human intelligence. It seems that he and his supporters are making a mistake: they judge the future intelligent form of life (the machine) by its necessary similarity to the current one (the human). But this similarity appears to be an optional characteristic for AI. AI will build its own world, its own civilization, which will be different from ours. It will lack some characteristics but possess others. It is likely to be more resilient and more extensive, covering other planets and possibly galaxies. We can also assume that machines really will not destroy humanity, that we will be able to control them, and that we will be able to coexist peacefully and productively. But other considerations arise here: the threat of global war, the environmental crisis, the collapse of the Milky Way. There are scenarios in which humanity, if it is not the result of God's plan under the protection of God's blessing, will disappear, and then artificial intelligence may well turn out to be the heir and monument of humanity. Man was able to create something that could surpass him. Man was able to create something that would preserve, if not his set of genes, then his role and his knowledge, the result of his existence in the world and in history. He will have played his part in the endless unfolding of being as an intermediate link to the next form of intellectual life. For from the point of view of being, there is no difference which form of intelligence dominates, living or machine. The main thing is that the linking of objects into new forms continues. This is what I will discuss next, moving on to ontology.
Part two. Artificial Intelligence in the Structure of Being

Here my perspective on the problem of artificial intelligence is ontological, and my tools and supporting works follow from it: I will mainly turn to the phenomenology of being, since, according to M. Heidegger, phenomenology is itself already a methodological concept, for it explores the things themselves and their being-in-the-world [12, p. 27]. Phenomenology is necessary for us insofar as we will deal with the status that AI occupies in the structure of pure and present being and, where necessary, of existence. In any case, methodologically this article rests on M. Heidegger's thesis that "ontology is possible only as phenomenology" [12, p. 25]. In this regard, the key works for this article are the texts of G.W.F. Hegel, M. Heidegger, E. Husserl, M. Buber, J.-P. Sartre and of the proponents of object-oriented ontology (Q. Meillassoux, T. Morton, G. Harman and others).

A) To understand the role that artificial intelligence may be destined to play, or the place it may be destined to occupy, it is necessary to turn to the very structure of being as the initial stage. We are all on the stage of being, and this stage consists of objects. We may take different views of what this stage is: the fruit of the creation of spirit, or sheer materiality. But this stage surrounds us, and it declares itself by the fact and verb of its existence. At the same time, the stage itself is darkness and anonymity. Hegel called this situation emptiness [5, pp. 72-85]. According to Hegel, and here there are intersections with Kant, being is a kind of primordial state and condition, a necessary beginning that is utterly impossible to conceive and which is therefore indefinable and unknowable [5, pp. 71-72]. In this, Hegelian thought intersects with Heidegger's: "Being is definitively indescribable from higher concepts and unrepresentable through lower ones" [12, p. 10]. The problem of definition that sounds here is connected with our inability to grasp the object in its entirety, and therefore with the inevitable partiality of any definition. Being exists as a stage and as pure being, by itself and locked within itself, until the moment of intervention of a signifying mind, whatever that mind may be and to whomever it may belong. "'Before' any presence of dimensional presupposition and behavior lies the 'a priori' of the existential structure in the existential image of care" [12, p. 206]. (Here, by care, Heidegger means the condition of man's being. Or, as this thought develops in the present work: being creates human thought as a partner, i.e. co-being.) In other words, the original being, pure being, consists of objects left to themselves, interacting with each other, but not performing the act of thinking about themselves or about each other. In Meillassoux's language, we are talking about facticity [9, pp. 52-55]. We are talking about the presence of initial objectual invariants that need no justification, since they constitute the origin; as such they exist for themselves and develop within being as a set of interacting objects. Pure being exists in fact, without justification, definition or reflection, obeying physical laws and developing in accordance with them. Only reason brings into pure being an element of corruption, an element of definition, value and meaningfulness. "Human reality is the means by which value comes into the world" (Sartre [10, p. 183]). Pure being is the beginning.
The beginning exists in the absence of a defining and cognizing mind, in the absence of the living and of the human. The beginning is like a stage and a pile-up. This undefined beginning is the nothing from which everything arises. Everything arises not as facticity but as meaningfulness. Yu. M. Fedorov supposed that being is folded into a semantic vacuum whose meaning unfolds through human activity (dialogue, love, creativity, freedom, goodness), which turns the bareness of the cosmos into the sphere of reason, where all bare and empty objects suddenly turn out to be included in the symbolic grid of culture [11]. Being as a sign receives signification. The culture of mankind arises as the result of two Big Bangs: the explosion that gave birth to the cosmos and the explosion that gave birth to culture, the explosion of rationality, the explosion of logos. Interestingly, this idea, expressed exclusively within the framework of speculative philosophy in the 1990s, intersects with the ideas of modern physics. For example, D. Deutsch notes that "although all life is based on replicators, in reality it is built around one phenomenon, knowledge" [21, pp. 290-310]. But if the function of the living is knowledge, then pure being is being without humanity and without knowledge. This is a prehistoric existence: "a world without thinking is a world without the reality of the world" (Q. Meillassoux [9, p. 35]). The world of pure being is the world of It, in which there is no Self. It is a cluster of meaningless signs, but signs that exist in no one's mind. Even God himself, if we accept the religious picture of the world, is in this situation only an object among objects, because He is deprived of His perceiver. Pure being is It, where It = It. Outside a perceiving consciousness, the accumulation of objects of pure being is nothing: "simple equality with oneself", "absence of definitions and content; indistinguishability in oneself" [5, p. 84]. The nothingness of pure being turns out to be the Meillassouxian facticity of autonomous "things-in-themselves", doomed to an eternal escape from any rationalization and conceptualization. Human attempts to grasp this reality are just ripples in the universal information space that cannot claim the status of truth.

B) So, pure being is the first stage, the scenery where the action takes place. Pure being is a collection of self-contained objects. But an important characteristic of being is movement. "There is something that is eternally moving in non-stop motion" (Aristotle [1, p. 330]). The "things-in-themselves", piled up and spaced out, are in continuous extensive (outward) and intensive (inward) movement. Being is dynamic. Pure being is formed from certain invariant elements: molecules, atoms, strings and so on; but these elements, forming the primacy of materiality, are incredibly changeable and create a huge variety of existing objects. The primary elements create an incredible variety of entities that are in permanent transformation. If the universe was once collapsed into a singularity and unfolded from a single point, this process of movement and unfolding of being continues and does not stop. Being moves in order to assert itself. The will to be manifests itself in movement, including in the movement of forms. "Materiality itself is a creative agent," writes J. Bennett [4]. Pure being is restless and devoid of the peace that Buddhists strive for.
At the same time, it is One in its diversity and movement, and that is why concepts of pantheism arise in man. Indeed, today chaos theory and synergetics, molecular chemistry and the neurophysiology of the brain, systems theory and other fields tell us about self-organizing matter and the permanent movement of objects. "Movement has always been and at all times will be" (Aristotle [2, p. 220]). The basis of being is movement, change. Being as pure being unfolds within itself through physical laws that ensure the dynamics and interaction of objects. Being is not static but a movement that inevitably leads to complexity. The complication of elementality, the Pythagorean multiplication of one into ten, is the fundamental law and principle of being. Being tends toward complexity and multiplicity: from one to two and from two to ten, from the point to the sphere. It is through such an unfolding of being that space (as mastered and extended) and time appear within it ("Being is always perceptible only in terms of time," Heidegger [12, p. 19]), and that man and culture appear.

C) The initial state of being is nothing, which is in constant unfolding and in imperceptibility of itself, in isolation from itself. But in the process of its unfolding, being reaches the stage of the appearance of the living. The appearance and evolution of living things is part of the general process of the unfolding of the forms of being, but it is the key part for being itself, because along with the emergence of the living, reason arises. Yu. M. Fedorov called this the Second Big Bang. Meillassoux wrote that matter, life and thought are the main registers of being [13, p. 263], which intersects with Heidegger's point of view: "Life has its own special way of being" [12, p. 50]. The mind arises as a part of the living and as a cognizing, comprehending part of being, as a second being. The mind discovers and brings meaning. The mind appears as a part of being that can name the objects of being, establish connections between them, discover their new properties and create new objects of being; by connecting them, it creates culture as a second being. Being receives the possibility of self-justification, and within it begins a dialogue of objects (where the human being is one of the objects of being) and a dialogue between man and being. With the appearance of the living, being acquires an existential dimension. Being takes on existence with the appearance of a perceiving mind within being itself. Therefore, human consciousness is inevitably intentional, i.e. it is always consciousness of something [7, p. 363]; it has an inherent directedness. The intentionality of human consciousness with respect to being is, as programmers would say, a pre-installed program of the human mind, although in the routine of everyday life we often ignore this relationship. Thus, the logic of the unfolding of being leads to being gaining the opportunity to find an answer to the question of what it is. It is possible that being itself tends toward such an evolution, although for pure being all its objects are equal. But the emergence of the living and of the mind allows being to question itself, its properties, possibilities and boundaries. Reflecting objects arise within pure being. Humanity is a reflection of being. Therefore, the objects of being are free. Therefore, the living has the potential for freedom and creativity.
And at the moment it is man, as a form of the living, who is able to give being the most complex and comprehensive answers about itself. Probably all objects, and all living objects, provide such information, but human information is decidedly richer. Being an object among objects and a part of pure being, standing in the continuity of the unfolding of being, man is engaged in deciphering being, including trying to grasp its demand. Man invents ways of knowing the truth (which A. Badiou called truth procedures, referring to art, science, politics and love [3]), through which he consistently and gradually learns truths about being, while at the same time influencing the transformations of being. "The value [of a person - Y.M.] lies in the very act of cognition; people have value not because of what they know, but because they know" [13, p. 269], wrote Meillassoux. If, from the point of view of G. Harman, being is composed of objects [14, pp. 13-14], which, according to S. Zizek, leaves no place for the subject in the structure of being [8, pp. 25-73], then in the structure of being proposed here this place is successfully preserved: being is seen as pure being, which is in essence the multiplicity of things-in-themselves that create a rhizomatic and striated space, forming an assembly of Things (Hegel's term for the materialization of pure being [6, pp. 51-52]) that interact and change, thereby constructing space and temporality, and at some stage of this unfolding producing man as a cogito who comprehends and deciphers being, gives it meaning, and cognizes being within it and for it. This is the role of the living and of the human in the structure of being.

D) Thus, as we already know, being consists of equal objects and is in constant motion, which is the unfolding of being with parallel reflection. The living and the human are objects of being with the function of feedback, of new connections and of the creation of new forms; they are the free and creative part of being, the actors of the second being. In this frame, the living and the human are privileged objects among the silent objects of being, but they do not have an ultimate advantage. Being does not care who performs the intellectual and creative function within it. It could be a human; it could be a machine. Today man, as a link in the general evolution of being, has created and is improving artificial intelligence, which is quite likely to replace man in the structure of being as the carrier of reason and the partner of being in co-being, co-creativity, co-cognition and reflection. The evolution of being may imply a transition from inorganic forms to organic ones and further to machine forms, where organic forms turn out to be only a necessary link closing the circle. On the other hand, perhaps machines will somehow use protein compounds to build their information channels. In other words, artificial intelligence is quite likely the next round of the evolution of being as such: from the inanimate to the living and back to the inanimate. But if in the first case the pinnacle of life was man, a sentient being who served as a kind of decoder of being, now man's place will be taken by the robots he has created, which will not fear the environmental crisis and which will be able to explore the other planets of the Solar System and possibly other universes. I think we are living through a change of the dominant species. And the new species is robots.
"Le monde east fait pour aboutir a un livre" ("Everything in the world exists to end with a certain book"), wrote Mallarmé. Perhaps AI will write a book about the history of mankind. Naturally, we have reached our limit (we have really reached it, on the one hand, we have already created everything, on the other, we can no longer do anything without machines) and dialectical removal is waiting for us through the tools we have created. This is an alarming situation for us, but it is also a monument to our genius. On the other hand, for almost 6 thousand years we have not learned to live for each other, and not against each other. It is possible that the machines will succeed in this more.
Conclusion

Summing up, it should be noted that the existential threat to man from artificial intelligence lies not only in the areas that analysts traditionally attend to (possible unemployment, the possible de-actualization of human intelligence, the possible exploitation of humans by machines, the possible destruction of humanity) but is much more serious and is associated with the logic of the self-development, self-determination and self-realization of being, if we look at being through the optics of F. Schelling. In this context, the very appearance of artificial intelligence, its development, significance and future role in the structure of the universe are seen as the completion of a circle that begins with the interaction of inanimate objects, passes through the stage of interaction between the inanimate and the living, and ends with a qualitatively new stage in the relationship of inanimate objects of being, some of which will possess consciousness. Being unfolds through the production of new forms and receives feedback from intelligent forms. Within this circle man, as the highest part of the evolution of the living, appears to be an intermediate link whose role was to bring the inanimate objects of being to a new level. This is an alarming but at the same time a positive point of view: whatever happens to humanity, it will have an heir, more resistant to cataclysms, that preserves its history.

References
1. Aristotle. (2006). Metafizika. Moscow: Izd-vo Jeksmo.
2. Aristotle. (1981). Fizika. In Aristotel'. Sochinenija. V 4-h tomah. T. 3, 59-262. Moscow: Mysl'.
3. Badiou, A. (2003). Manifest filosofii. St. Petersburg: Machina.
4. Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books.
5. Bennett, J. (2018). Pul'sirujushhaja materija: Politicheskaja jekologija veshhej. Perm: Gile Press.
6. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
7. Bremmer, I., & Suleyman, M. (2023). AI Power Paradox: Can States Learn to Govern Artificial Intelligence – Before It's Too Late? Foreign Affairs, 5, 26-43.
8. Brynjolfsson, E., & McAfee, A. (2012). Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy. Digital Frontier Press.
9. Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company.
10. Chalmers, D. (2022). Reality+. New York: W.W. Norton & Company.
11. Deutsch, D. (1998). The Fabric of Reality: The Science of Parallel Universes and Its Implications. Penguin Books.
12. Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization. Lex Fridman Podcast #368. Retrieved from https://www.youtube.com/watch?v=AaTRHFaaPG8&t=0s
13. Fedorov, Ju. M. (1992). Universum morali. Tjumen': Rossijskaja AN, Sib. otd-nie, Tjumenskij nauchnyj centr.
14. Ford, M. (2015). The Rise of the Robots: Technology and the Threat of Mass Unemployment. Basic Books.
15. Harari, Y. N. (2018). 21 Lessons for the 21st Century. New York: Random House Publishing Group; Spiegel & Grau.
16. Harman, G. (2020). Spekuljativnyj realizm: vvedenie. Moscow: RIPOL klassik.
17. Harman, G. (2021). Ob"ektno-orientirovannaja ontologija: novaja "teorija vsego". Moscow: Ad Marginem Press.
18. Harris, S. Can we build AI without losing control over it? TED. Retrieved from https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it
19. Hawkins, J. (2021). A Thousand Brains: A New Theory of Intelligence. Hachette UK.
20. Heidegger, M. (2015). Bytie i vremja. Moscow: Akademicheskij proekt.
21. Hegel, G. W. F. (2019). Nauka logiki. Moscow: Izdatel'stvo AST.
22. Hegel, G. W. F. (1959). Sochinenija: v 14 t. T. 4. Fenomenologija duha. Moscow: Izdatel'stvo social'no-jekonomicheskoj literatury.
23. Husserl, E. (2000). Logicheskie issledovanija. Kartezianskie razmyshlenija. Krizis evropejskih nauk i transcendental'naja fenomenologija. Krizis evropejskogo chelovechestva i filosofija. Filosofija kak strogaja nauka. Minsk: Harvest; Moscow: AST.
24. Kaku, M. (2018). The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality, and Our Destiny Beyond. Doubleday.
25. Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking Adult.
26. Lee, K.-F. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Harper Business.
27. Meillassoux, Q. (2015). Posle konechnosti: jesse o neobhodimosti kontingentnosti. Ekaterinburg; Moscow: Kabinetnyj uchenyj.
28. Negnevitsky, M. (2005). Artificial Intelligence: A Guide to Intelligent Systems. Pearson Education.
29. Rivera, J.-P., Mukobi, G., Reuel, A., Lamparth, M., Smith, Ch., & Schneider, J. Escalation Risks from Language Models in Military and Diplomatic Decision-Making. Retrieved from https://arxiv.org/abs/2401.03408
30. Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
31. Russell, S., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Pearson.
32. Sartre, J.-P. (2000). Bytie i nichto: Opyt fenomenologicheskoj ontologii. Moscow: Respublika.
33. Silver, D., Hubert, T., & Schrittwieser, J. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362, 1140-1144.
34. Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf Publishing Group.
35. Yampolskiy, R. V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Chapman & Hall.
36. Yudkowsky, E. AGI Ruin: A List of Lethalities. Retrieved from https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
37. Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In N. Bostrom (Ed.), Global Catastrophic Risks, 308-345. Oxford University Press.
38. Yudkowsky, E. Will superintelligent AI end the world? Retrieved from https://www.ted.com/talks/eliezer_yudkowsky_will_superintelligent_ai_end_the_world
39. Zizek, S., Ruda, F., & Hamza, A. (2019). Chitat' Marksa. Moscow: Publishing House of the Higher School of Economics.