Philosophy and Culture

General Systems Theory and Creative Artificial Intelligence

Gribkov Andrei Armovich

ORCID: 0000-0002-9734-105X

Doctor of Technical Sciences

Senior Researcher, Scientific and Production Complex "Technological Center"

124498, Russia, Moscow, Zelenograd, Shokin Square, 1, building 7

andarmo@yandex.ru
Zelenskii Aleksandr Aleksandrovich

ORCID: 0000-0002-3464-538X

PhD in Technical Sciences

Leading researcher, Scientific and Production Complex "Technological Center"

124498, Russia, Moscow, Zelenograd, Shokin Square, 1, building 7

zelenskyaa@gmail.com

DOI:

10.7256/2454-0757.2023.11.68986

EDN:

EQVTJY

Received:

09-11-2023


Published:

16-11-2023


Abstract: The article analyzes the possibilities and limitations of artificial intelligence. It examines the subjectivity of artificial intelligence and determines whether subjectivity is necessary for solving intellectual tasks, depending on whether the real world can be represented as a deterministic system. Methodological limitations of artificial intelligence based on big data technologies are identified; these limitations make it impossible to form a holistic representation of the objects of cognition and of the world as a whole. As a tool for the deterministic description of the universe, the authors propose the empirical-metaphysical general theory of systems, which extends existing general theories of systems by providing an ontological justification for the phenomenon of isomorphism and by defining a limited set of laws, rules, patterns, and primitives of the forms and relations of objects in the universe. The distinction between natural (human) and artificial intelligence is considered, including the multisystem integration of intelligence into physical, biological, social, and spiritual systems. A philosophically grounded approach to ensuring the evolutionary properties of artificial intelligence is formulated, based on the inclusion of non-equilibrium mechanisms through which stability is realized.


Keywords:

intelligence, subjectivity, intellectual tasks, creativity, unity, general theory of systems, multisystems, nonequilibrium stability, evolution, deterministic description

This article is automatically translated.

Introduction

In recent years, the arsenal of means for cognizing the world and forming new knowledge and objects has been replenished with a tool whose very possibility was, until recently, difficult to imagine. This new tool is artificial intelligence systems.

For a further objective analysis of the capabilities and limitations of artificial intelligence, several questions must be answered: What is artificial intelligence? Is subjectivity necessary for it to fulfill its function of solving intellectual tasks? How is artificial intelligence related to big data? What are the limitations of modern big data technologies, and how do they affect the capabilities of artificial intelligence systems built on them? What is the difference between natural and artificial intelligence?

The answers to these questions should become the basis for expanding the capabilities of artificial intelligence, and will also suggest approaches to distinguishing artificial from natural intelligence.

  

Subjectivity of artificial intelligence

Depending on the level of functionality, three types of artificial intelligence are distinguished: weak or narrow (Artificial Narrow Intelligence, ANI), strong or general (Artificial General Intelligence, AGI), and superintelligence (Artificial Super Intelligence, ASI). Narrow artificial intelligence is designed for a narrow range of intellectual tasks, which it solves no worse than, or even better than, a human. All currently existing artificial intelligence systems belong to narrow artificial intelligence. General artificial intelligence would have capabilities for solving intellectual tasks close to human capabilities in diversity, and comparable or superior in efficiency (speed, reliability of results, etc.). Superintelligence would surpass a human in solving any intellectual task.

The presence or absence of independence, self-awareness, and emotions in artificial intelligence is sometimes also considered as a parameter determining its level. A narrow artificial intelligence may have independence (autonomy) in solving intellectual tasks, but this independence is limited by a predefined structure (given by the developer) in the form of methods, algorithms, etc. A narrow artificial intelligence cannot set an intellectual task for itself or adjust (revise) it in the course of solving it. Nor can it possess self-awareness or emotions (although it can imitate them if it is programmed to do so). General artificial intelligence and superintelligence, in order to solve intellectual tasks in a way comparable or superior to humans, must necessarily have independence. Whether self-awareness and emotions are necessary for general artificial intelligence and superintelligence requires additional consideration.

The presence or absence of self-awareness in artificial intelligence (along with emotions, etc.) is one of the main factors determining its subjectivity [1]. Subjectivity in philosophy is interpreted as a systemic quality of the subject, the presence of which makes the subject what it is [2]. This definition, in which subjectivity is equivalent to the self, has an ontological character. The self in this case is understood as the subject's ability to escape the power of external causes and to create a reproducible order of life determined from within [3]. More adequate in the context of our research is the epistemological definition of subjectivity as the ability to be a subject of activity, including in relation to objects of cognition.

The key indicator determining the functionality of artificial intelligence is its ability to solve intellectual tasks. This raises the question of which tasks should be classified as intellectual. Usually, an intellectual task is understood as a weakly formalized one, i.e., a task "for which there is no universally recognized (classical) solution, so that in order to solve it, a solution method must be invented" [4]. Moreover, in the process of solving such a task, the task itself may be revised (corrected).

The above description of an intellectual task corresponds to its qualification as a creative task. A creative task (in scientific and technical creativity) is one "for whose solution there are as yet no generally accepted rules and regulations in science that determine an exact program for solving it" [5, pp. 39-40].

Is subjectivity necessary to solve intellectual (creative) tasks? The answer to this question depends on how we imagine the real world.

If we imagine the real world as a non-deterministic system, in which probabilistic parameters are ontological and creativity is an irrational process, then we cannot obtain a reliable answer to the question of whether subjectivity is needed to solve intellectual tasks. We know only that a human can solve intellectual tasks and possesses subjectivity. The nature of the connection between these properties is unknown to us.

However, we have sufficient grounds to believe that the world is in reality deterministic [6] and rests on mechanisms that have a rational explanation. A rational explanation can also be given for intuition, which is based on the unconscious use of analogies and of the patterns of forms and relationships existing in the universe. The existence of analogies and patterns is a consequence of isomorphism, through which the unity of being is manifested [7].

If the world is deterministic, so that everything in it has causes and can (in principle) be explained, then there is no qualitative difference between a computational or logical task (with which a computer lacking artificial intelligence copes perfectly well) and an intellectual task. Of course, the methods and algorithms used in solving an intellectual task will be incomparably more complex and diverse than those used in solving computational or logical tasks. Nevertheless, all the algorithms and the sequence of actions of artificial intelligence can be defined and explained. Such artificial intelligence is called explainable [8]. In this case, subjectivity is not required for artificial intelligence to perform its function of solving intellectual tasks.

 

Methodological limitations of big data technologies

To understand the nature of the information processing methods underlying artificial intelligence, consider deep machine learning systems, which represent one of the earliest forms of narrow artificial intelligence.

Deep machine learning, as its name implies, is based on learning: supervised, semi-supervised, unsupervised, reinforcement learning, etc. The object of training is artificial neural networks, currently implemented on various high-performance computing systems (graphics processors, field-programmable gate arrays, application-specific integrated circuits, etc.), as well as on neural processors. In the future, quantum processors may also become a basis for neural networks.

A deep neural network is an artificial neural network with a large number of layers, i.e., of mathematical transformations between input and output. The function of a deep neural network is to select, in the learning process, a sequence of mathematical transformations (from layer to layer) that, for different input values, yields the correct result at the output. The "correctness" of the result can be verified either by an external user (a "teacher") or by additional algorithms of the deep neural network itself.
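The layer-by-layer picture described above can be sketched in a few lines of Python with NumPy. This is a minimal illustration of the idea only (an input passed through a chain of affine maps and nonlinearities), not a description of any particular system; the layer sizes, the ReLU nonlinearity, and the random weights are arbitrary assumptions for the sketch.

```python
import numpy as np

def forward(x, weights, biases):
    """Pass input x through a chain of layers.

    Each (W, b) pair is one layer: an affine transformation
    followed by a ReLU nonlinearity; the last layer is affine only.
    """
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)       # hidden layer transformation
    return x @ weights[-1] + biases[-1]      # output layer

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]  # input, two hidden layers, output (arbitrary choices)
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

y = forward(rng.normal(size=(3, 4)), weights, biases)  # batch of 3 inputs
print(y.shape)  # (3, 2)
```

Training, in the sense the paragraph describes, then amounts to adjusting `weights` and `biases` (typically by gradient descent) until the outputs are "correct" according to the external teacher or the verification algorithm.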

Artificial intelligence systems are inextricably linked with big data technologies [9]. In practice, artificial intelligence implements not only deep machine learning methods but also most of the methods used in big data technologies, including Data Mining methods for detecting patterns in raw data by means of intelligent analysis.
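The kind of pattern detection attributed here to Data Mining can be illustrated with a toy example: scanning raw numeric series for strong pairwise correlations without any model of why the series are related. The synthetic data, the seed, and the 0.5 threshold are arbitrary assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
a = rng.normal(size=n)
b = 0.8 * a + 0.2 * rng.normal(size=n)  # depends on a (unknown to the method)
c = rng.normal(size=n)                  # independent noise
data = np.stack([a, b, c])

corr = np.corrcoef(data)  # 3x3 matrix of pairwise correlations (rows = variables)
# Flag strongly correlated pairs (|r| > 0.5), ignoring the diagonal.
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(corr[i, j]) > 0.5]
print(pairs)
```

The method flags the dependent pair of series but says nothing about the mechanism behind the dependence, which is exactly the limitation of purely empirical data models discussed in this section.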

The general idea of big data technologies and the methods involved in them reduces to forming an information model of the object of cognition in the form of a large data array ordered on the basis of identified empirical relationships, correlations of various fragments, and trends that are non-deterministic (in cause and mechanism). These data arrays are used instead of deterministic representations of the object of cognition: descriptions of its elements, structure, internal and external relationships, the laws of nature realized in the object, etc. The methodological approaches that currently underlie big data technologies (and are carried over into the artificial intelligence systems created on their basis) are the result of the spread of positivist ideas in the scientific community, ideas incompatible with the formation of a holistic view of the universe.

Failures in forming a holistic picture of the physical world and of nature as a whole gave rise to a tendency toward empiricism in philosophy; doubts arose about the possibility of building a holistic picture of the world, and then about whether such a construction is necessary at all. The concept of positivism became an expression of these emerging tendencies to abandon the construction of a holistic picture of the world and to limit cognition to the results of experience, their systematization, and their generalization [10]. Positivism thus acquired its current dominant position not because of objective philosophical merits, but as a result of the capitulation of philosophers before the complexity and inconsistency of phenomena and objects in nature.

Since creativity (including the solution of intellectual tasks) is based on the conscious or unconscious use of the isomorphism of the world, which is an expression of its integrity, existing approaches to creating artificial intelligence (based on big data technologies and all the methods used in them) are unpromising. They allow us to solve only some primitive intellectual tasks, for which it is enough to identify empirical connections and trends. On this basis we can implement narrow artificial intelligence (with significant limitations), but obviously not artificial intelligence of a higher level.

Big data technologies are steadily expanding their presence in our lives. They have become the basis of machine learning, and modern artificial intelligence systems are being built on them. The prevailing opinion is that this is how cognitive systems should be formed, and that this is how artificial intelligence will be able to approach natural (human) intelligence. As we have found, this opinion is deeply erroneous.

To build general artificial intelligence or superintelligence, it will be necessary to go beyond the methodological limitations formed within the framework of big data technologies. Here we have only one reference point: human thinking, which probably does not have such limitations.

The experience of cognition shows that a healthy human intellect (as demonstrated by the entire history of the development of the human knowledge system) does not look for trends and correlations of data in the surrounding world. Moreover, a fixation of thinking on the search for connections (often non-existent ones) is frequently evidence of a mental disorder (depression, schizophrenia, delusions, etc.). A healthy human intellect looks not for trends, but for meanings. The basis of creative thinking is a conscious or unconscious understanding of the integrity of the world and of the isomorphisms existing in it, i.e., the similarity of forms and laws in various subject areas and at different levels of the universe. This makes it possible to use analogies and largely determines the productivity of traditional thinking.

Natural (human) intelligence does not use the methodology of big data technologies. Paradoxically, the human mind, realized in a real world whose canvas in the knowledge system is woven from empirical experience, tends, while remaining deductive, toward a metaphysical representation of the world. And this is inevitable, because the world is one (possesses integrity), and a person, as part of the world, must accept this (even if unconsciously). After all, a person's success, as is well known, is largely determined by his integration into the world, i.e., by his embedding as an element in the integral system of the world.

 

Possibilities of the general theory of systems

A characteristic feature of modern scientific and philosophical ideas is the practical impossibility of combining them into an integral system of knowledge, within which contradictions and inconsistencies arising in the process of development can find their solution according to the logic of the integrity of the universe.

The danger of the fragmentation of knowledge lies in the uncertainty of the prospects for scientific, technological, and social development. This uncertainty is generated by the confinement of ideas within individual narrow subject areas, as a result of which a broad understanding of the context of current events, as well as long-term trends of both development and degradation, remains outside the field of view of scientists, philosophers, and public figures.

The general theory of systems has the potential to take on the function of promoting the concept of the integrity of the world. In its present form, it is strongly susceptible to empiricism, but this flaw can be eliminated with proper development of the theory, including by building its ontological foundation, which allows determining the genesis of the phenomenon of isomorphism, through which the integrity of the universe is empirically manifested. Currently, work is underway on the creation of a general theory of systems (empirical-metaphysical general theory of systems), free from the identified shortcomings and supplemented with the necessary ontological part [11].

The empirical-metaphysical general theory of systems uses two complementary approaches to cognition, the first of which can be conditionally called "ontological" and the second "epistemological". According to the "ontological" approach, the forms and laws of the universe can be logically deduced from the primary properties of being (which can be qualified as a priori knowledge). The "epistemological" approach is based on the use of sets of patterns and primitives to describe objects, as well as of laws and properties revealed in the practice of cognition (secondary laws and properties), without determining their genesis and internal mechanisms. The concepts of patterns and primitives can be defined as follows: patterns are a representative but numerically limited set of recurring forms and relationships of elements within systems, widely distributed across various subject areas; primitives are the set of elements underlying the formation of all systems.

The empirical-metaphysical general theory of systems includes internal mechanisms for developing, detailing, and increasing the reliability of the results of cognition. The formulated primary and secondary laws of being and the identified groups of patterns and primitives are not fixed unambiguously and definitively: they can be refined and detailed, and their admissible interpretation can be expanded. At the same time, the logic of constructing the empirical-metaphysical general theory of systems must be preserved.

The methodology of the general theory of systems is insufficient for the final solution of problems. The general theory of systems, being part of the theory of cognition, serves only to choose the direction and area of the search for solutions, to determine possible solutions, the admissibility of particular solutions, etc.; that is, it is only one of the necessary tools in the arsenal of means for solving intellectual problems. However, when solving intellectual tasks (especially complex ones, corresponding in the modern classification to general artificial intelligence or superintelligence), this tool is indispensable, since the knowledge it forms cannot be obtained from the collected empirical data about the object of cognition, but follows from ideas about the integrity of the world, formalized in the form of primary properties, basic and primary laws of being, known patterns and primitives, as well as secondary laws and properties of being. Until now, the human intellect has acted as the instrument of creative understanding based on the integrity of the world. The development and formalization of the methodology of the general theory of systems will make it possible, at first partially and eventually completely, to eliminate dependence on humans in solving intellectual problems. There is no threat to the interests of humanity in this, just as there was no threat in the appearance of computing machines for solving computational and logical problems.

In connection with assessing the capabilities of the general theory of systems, it makes sense to consider the question of superintelligence. We confidently speak of this third kind of artificial intelligence as something that will definitely be created sooner or later. In fact, the appearance of superintelligence is not inevitable. And the reason is not technological difficulties, but the limited qualitative complexity of the world.

Modern philosophy and science proceed from the ideas of unlimited (in the context of positivism – indefinable) complexity of the universe. If this were the case, then, indeed, an increasingly extensive artificial intelligence would be required to know the world, but at the same time its capabilities would never be enough to know the world.

In our opinion, the task of cognizing the world is both more complex and simpler than this. The difficulty lies in the fact that a fully reliable representation of the real world in the knowledge system is impossible due to the untranslatability from the language of being into the language of the theory of knowledge and of science [11]. The theory of knowledge and science are based on generalized concepts and on the probability of events, but in reality these do not exist: everything is concrete, existing in a single instance at a given moment. The system of knowledge about the world is a model of reality that corresponds to it only in a limited set of features accepted (reasonably or not, it is not always clear) as essential. On the other hand, although we cannot represent the world in the knowledge system with full accuracy, a qualitative description of it is possible through the limited set of methods and tools available in the general theory of systems. Even at the current level of its development we are able to describe the world adequately, and in the future the detail of the description will increase. The paradox is that the world is not very complex qualitatively, and superintelligence is not required for its qualitative description. At the same time, an accurate quantitative description is almost impossible, and where we strive for it (with greater or lesser success), the difficulties are associated with a large volume of calculations (using strictly defined algorithms), not with their intellectualization.

It therefore seems logical to exclude superintelligence from the variants of artificial intelligence. As we have found, computing power is not the main criterion defining artificial intelligence. Could it be that the division of artificial intelligence into narrow and general is also not optimal?

The fundamental difference between narrow and general artificial intelligence lies in the "breadth" of the intellectual tasks they solve. At the same time, as our reflections have shown, general artificial intelligence should be based (alongside existing data analysis tools) on the general theory of systems. This gives general artificial intelligence the ability to verify, within the logic of the integrity of the world, both the intellectual task assigned to it and the solutions obtained (whether the question is posed correctly, whether analogues exist, whether the solution is consistent with known patterns, primitives, and laws, etc.). It also gives general artificial intelligence the ability to adjust the formulation of an intellectual task and to choose an acceptable solution from several available ones. Its defining characteristic, then, is not the ability to solve a wide range of intellectual tasks, but creativity, which consists in going beyond not only the given algorithms for solving a task but also the limits of the task itself.

In our opinion, it is reasonable from the point of view of determining the specifics of artificial intelligence to divide it into two types: deep machine learning systems and creative artificial intelligence.

 

Multi-system integration

To define artificial intelligence systems, as their name implies, a systems approach is used. The explication of intelligence in the general case (i.e., both artificial and human), in the authors' view, can also be built on the basis of a systems approach. Since human intelligence probably has a significantly more complex organization than artificial intelligence, we will focus on human intelligence when determining the structure and components of intelligence.

At first glance, identifying intelligence with the cognitive system looks reasonable. A cognitive system (from the Latin cognitio: cognition, recognition, familiarization), from the point of view of philosophy, is a multilevel system that performs the functions of recognizing and remembering information, making decisions, and storing, explaining, understanding, and producing new knowledge [12]. A cognitive system is an information system (implementing information processing) supplemented by certain extensions, i.e., elements that provide input data. In humans, examples of such additional elements are the sense organs; in artificial intelligence, sensors and other measuring devices integrated with the cognitive system.

The cognitive system is the most important component of intelligence, however, at least in the case of human intelligence, it is not identical to it. Human intelligence cannot exist solely as an information system. It is integrated into a large number of systems: into the physical world, into the biological system of the thinking individual and the system of the biological species to which the individual belongs, into the personality (of the social individual) and the social system. In addition, intelligence as an information system is integrated into the human knowledge system, including its paradigmatic (ethical, religious, various scientific, etc.) and local (country, national, class, etc.) subsystems.

The multi-system integration of human intelligence into many different systems determines the formation of needs: biological, spiritual (intellectual), and social. Needs, in turn, are a necessary condition for the formation of emotions. Emotions are the psychophysical reaction of a subject (a person) to objects of sensory perception in the context of the satisfaction or non-satisfaction of his needs [13, volume 4, pp. 436-437].

The integration of human intelligence into many interconnected systems is not passive. It involves feedback mechanisms (both negative and positive), mutual influence between the intellect and other actors of the interconnected systems, etc. As a result, the human intellect reproduces the integrity of the world in the course of its existence. This is why a person can substantively understand the integrity of the world even without its formalization in the form of a general theory of systems, and why a person has intuition and is capable of creativity.

The multi-system integration of artificial intelligence will obviously never be the same as that of natural intelligence. It will not be spontaneous, and it will not involve the mechanisms that determine the formation of needs and emotions. Nor does artificial intelligence need them to realize its function of solving intellectual tasks. Instead of the empirical, incomplete model of the integrity of the world inherent in human intelligence in the form of multi-system integration, artificial intelligence should be endowed with direct systematized knowledge about the integrity of the world in the form of a general theory of systems (in its completed final form).

Will artificial intelligence, limited in integration into some existing systems, be able to evolve? Human intelligence, as a living system, ensures its stability and development through the mechanism of stable disequilibrium, the inevitable consequence of which is evolution [14, p. 32]. Artificial intelligence is not alive, but evolution is possible not only for "living" (biological, social, economic) systems, but also for many dynamic inanimate systems (for example, chemical). The mechanisms of disequilibrium through which stability is realized should be incorporated into the artificial intelligence system during its formation. In this case, it will evolve, but it will remain inanimate – without needs, without emotions and will never pose a threat to humanity.

A necessary condition for the practical implementation of creative artificial intelligence, comparable or even superior to humans in solving intellectual problems, is the presence of an adequate general theory of systems, a possible variant of which is the above-mentioned empirical-metaphysical general theory of systems [11]. Only in this case will the defragmentation of the knowledge system and the deterministic solution of intellectual tasks by both human and artificial intelligence become possible.

Today, only the human mind, by virtue of a person's biological, social, and intellectual integration into the world, has the capacity for its holistic perception and for the intuitive search for, and discovery of, patterns and connections in it that cannot be detected logically. If the forms and laws defining the integrity of the world are formalized, man will lose this advantage over artificial intelligence. The latter, of course, will not become identical with human intelligence, which is inextricably linked with biological, social, and spiritual needs that a machine will never have, but the intellectual monopoly of man will be destroyed.

The future of humanity is difficult to predict, but probably the development of knowledge has no alternative. Artificial intelligence, if its improvement is based on the defragmentation of knowledge through the general theory of systems, will not be a threat to humanity. On the contrary, such creative artificial intelligence will become a promising tool for expanding the knowledge system and a significant factor in the development of human civilization.

The human intellect is the carrier of subjectivity and the human self. The purpose of existence of any living being is to live. In relation to human intelligence, this goal is to realize all the diversity of their needs and solve the task of satisfying them. The fact that artificial intelligence can solve some intellectual tasks better than a person does not have fundamental significance for an individual. This is important for science, technology, and economics, and through them, of course, it has an impact on a person. This expands the possibilities for a person to satisfy his biological, social and intellectual needs, but does not form them.

The question of replacing a person with artificial intelligence is meaningless. Suppose that the achievements of science made it possible to clone a person and record the original's memory into the clone. For other people, the clone could be a replacement for the original, but not for the one who was cloned. It does not matter to him that there is another like him, and he will not agree to be replaced by the clone. Man and human intelligence are valuable in themselves, and it does not matter whether they can be replaced by a better or merely an alternative version.

The development of artificial intelligence systems, even those devoid of subjectivity, does not prevent them from simulating human behavior. Obviously, the demand for such artificial intelligence will grow. Like any self-deception, this is a harmful phenomenon, but it cannot pose a significant threat to people.

The creation of artificial intelligence endowed with subjectivity is an elusive goal and, at the same time, a useless one from the point of view of solving intellectual problems. Nevertheless, even if work in this area is prohibited by law, such systems are likely to be created. It remains to be hoped that the relatively low prevalence of such systems (limited to the scale of laboratories) will make it possible to avoid the potentially dangerous consequences that arise when intellectually weaker subjects (people) attempt to dominate supposedly stronger ones (artificial intelligence endowed with subjectivity).

 

Conclusions

The research presented in the article allows us to draw the following main conclusions:

1. One of the key issues arising in determining the opportunities and dangers associated with the development of artificial intelligence is the question of its subjectivity. The development of the methodology of cognition through the formation of an adequate general theory of systems, which assumes that the qualitative complexity of the universe is limited, will make subjectivity optional for artificial intelligence systems in performing their function of solving intellectual tasks.

2. Depending on the level of functionality, three types of artificial intelligence are currently distinguished: weak (narrow), strong (general), and superintelligence. Given the limited qualitative complexity of the universe, it seems logical to exclude superintelligence from the possible variants of artificial intelligence. For determining the specifics of artificial intelligence, a more reasonable division is into two types: deep machine learning systems and creative artificial intelligence.

3. The existence of natural intelligence presupposes its multi-system integration into physical, biological, social, and spiritual systems. Through this integration, natural intelligence builds a localized model of the integrity of the world and, consciously or unconsciously, constructs a system of analogies and patterns on the basis of that model. As a consequence, the human intellect acquires the ability to create, including the ability to solve intellectual problems.

4. The multi-system integration of artificial intelligence will be significantly limited. In particular, it should not generate mechanisms that give rise to needs and emotions. At the same time, artificial intelligence should be endowed with direct, systematized knowledge of the integrity of the world in the form of a general theory of systems (in its completed, final form).

5. It is possible to endow artificial intelligence, even though its integration into certain existing systems is limited, with evolutionary properties if mechanisms of disequilibrium, through which stability is realized, are built into it at the stage of its formation. In that case, artificial intelligence will evolve, yet it will remain inanimate: without needs, without emotions, and it will never pose a threat to humanity.

References
1. Alekseeva, I.Yu. (2020). Subjectivity of artificial intelligence: old questions in new contexts. Information Society, 6, 2-6.
2. Radchenko, E.V., & Rang K.A. (2012). Understanding of subjectivity in philosophy and linguistics. Bulletin of SUSU, 2, 74-78.
3. Girenok, F.I. (2010). Self. Lomonosov Knowledge Foundation Encyclopedia. URL: http://www.lomonosov-fund.ru/enc/ru/encyclopedia:0127733
4. Prutskov, A.V. (2020). Modern problems of artificial intelligence systems. Cloud of Science, 7(4), 897-904.
5. Rapacevich, E.S. (1995). Dictionary-reference book on scientific and technical creativity. Minsk: Entonym LTD.
6. Gribkov, A.A. (2023). The Simplest Material Structures – the Consequences of the Primary Properties of Being. Society: Philosophy, History, Culture, 6, 23-29.
7. Gribkov, A.A. (2023). Patterns and primitives of the empirical-metaphysical general theory of systems. Society: Philosophy, History, Culture, 5, 15-22.
8. Raikov, A.N. (2022). Subjectivity of Explainable Artificial Intelligence. Philosophical Sciences, 65(1), 72-90.
9. Aseeva, I.A. (2022). Artificial intelligence and big data: ethical problems of practical use. (Analytical review). Social and Humanities. Domestic and foreign literature. Ser. 8: Naukovedenie, 2, 89-98.
10. Sultanov, K.V., Korolkov, A.A., Puyu, Yu.V., Rabosh, V.A., & Strelchenko, V.I. (2017). Positivism philosophy of science: evolution of the subject and concepts. Society. Environment. Development (Terra Humana), 1(42), 25-32.
11. Gribkov, A.A. (2023). Empirical-metaphysical approach to the construction of a general theory of systems. Society: Philosophy, History, Culture, 4, 14-21.
12. Philosophy: Encyclopedic Dictionary. (2004). Edited by A.A. Ivin. Moscow: Gardariki.
13. Stepin, V.S., Huseynov, A.A., Semigin, G.Y., Ogurtsov, A.P. et al. (2010). New philosophical encyclopedia: In 4 volumes. Moscow: Mysl.
14. Bauer, E.S. (1935). Theoretical Biology. Moscow-Leningrad: Izd. VIE.

Peer Review


The reviewed article reflects the author's deep reflections on the possibilities of designing various artificial intelligence systems (the author adheres to the distinction between "narrow intelligence", "general intelligence", and "superintelligence"); on the relationship between artificial intelligence attaining the status of a subject and its ability to solve creative tasks; on the importance of "big data" technologies for understanding the nature and limitations of existing artificial intelligence systems; and on the connection between the problem of artificial intelligence solving creative tasks and the structure of reality (whether it is exhausted by rational connections, and whether it can be "decomposed" into a sequence of cause-and-effect relationships). The philosophical components of the "problem of artificial intelligence" are mainly related to understanding the possibilities of its evolution in the direction of "natural intelligence". Today's instances of artificial intelligence belong to the class of "narrow intelligence" developments: within their framework, both the task and the methodology for solving it are received by artificial intelligence from outside, from a person. The simplest way to develop artificial intelligence technologies, it would seem, is to attempt to construct an "artificial subjectivity", endowing artificial intelligence with emotions, will and, eventually, self-awareness. The author believes, however, that such a formulation of the question is meaningless in theory and useless in practice. From our point of view, the possibilities for a constructive discussion of this problem remain extremely limited, since we cannot achieve proper clarity in understanding the very conceptual content of the expressions "emotions", "will", or "self-awareness" in relation to artificial intelligence.
Next, the author focuses on a more specific question, which, from his point of view, can already receive a fairly definite answer under present conditions: must artificial intelligence possess "subjectivity" in order to solve creative tasks? The answer, the author believes, depends on how we conceive the real world. If we assume that the world as a whole is, as the author puts it, "deterministic", or, more precisely, that it rests solely on rational relationships, then the solution of creative tasks becomes possible for artificial intelligence regardless of whether the goal of constructing its "subjectivity" is ever achieved. In this case, the "creative potential" of artificial intelligence does not have to be infinite (as it is in man himself), which means it is possible to imagine a situation in which the "creative principle" is amenable to programming within certain limits. The author also touches on the interesting question of the importance of "big data" technologies for the development of artificial intelligence, and (in our opinion, justifiably) assesses the role of these technologies critically: "Since creativity ... is based on the conscious or unconscious use of the isomorphism of the world, which is an expression of its integrity, existing approaches to creating artificial intelligence (based on big data technologies and all the methods used in them) are unpromising. ... We can implement narrow artificial intelligence based on this approach (with significant limitations), but obviously not higher-level artificial intelligence"; "To build a general artificial intelligence or superintelligence, it will be necessary to go beyond the methodological limitations formed within the framework of big data technologies. At the same time, we have only one reference point – human thinking, which probably does not have such limitations."
The reviewed article is an original study and may be of interest to a wide range of readers. Isolated stylistic errors ("... connections (often non-existent) are often ...", etc.) can be corrected in the course of routine editing. I recommend accepting the article for publication in a scientific journal.