
Philosophy and Culture

Synergetics of the Nonequilibrium Stability of Artificial Cognitive Systems

Gribkov Andrei Armovich

ORCID: 0000-0002-9734-105X

Doctor of Technical Sciences

Leading researcher; Scientific and production complex 'Technological Center'

124498, Russia, Moscow, Zelenograd, Shokina Square, 1, building 7

andarmo@yandex.ru
Zelenskii Aleksandr Aleksandrovich

ORCID: 0000-0002-3464-538X

PhD in Technical Sciences

Leading researcher; Scientific and production complex 'Technological Center'

124498, Russia, Moscow, Zelenograd, Shokina square, 1, building 7

zelenskyaa@gmail.com

DOI:

10.7256/2454-0757.2024.6.70887

EDN:

MJXODY

Received:

29-05-2024


Published:

10-06-2024


Abstract: The article explores the set of issues that determines the synergetics of artificial cognitive systems: the conditions under which nonequilibrium stability of systems is realized, the options for synthesizing an artificial cognitive system, and the mechanisms of self-organization of the consciousness formed on its basis. Artificial cognitive systems are proposed to include not only artificial intelligence systems imitating human thinking, but any multilevel systems that perform the functions of recognizing and remembering information, making decisions, and storing, explaining, understanding, and producing new knowledge. The defining property of a cognitive system is the ability to make decisions. It is shown that the content of a cognitive system is consciousness, interpreted within the information concept as an information environment in which an extended model of reality is realized. Consciousness can be characterized as an open dynamic system whose state is determined by the processes occurring in it (in the case of consciousness, thought processes). Such systems, called "living," realize mechanisms of stable disequilibrium. Technically, an artificial cognitive system is implemented on the basis of an artificial neural network; organizationally, according to the actor or reactor model. The self-organization processes that determine the synergetics of consciousness are a special case of the extreme principle, which in turn follows from the law of excessive reaction of supersystems, regulating the existence of dissipative systems at the expense of supersystem resources. Self-organization of consciousness is initiated by the process of thinking, which, owing to the nonequilibrium stability of consciousness, does not stop. Thinking can thus be interpreted as a process of varying system parameters in search of those suitable for extended modeling of reality.


Keywords:

synergetics, self-organization, consciousness, cognitive system, stable disequilibrium, extreme principle, system, supersystem, dissipation, reality model

This article is automatically translated.

Introduction

The development of mankind in recent decades has enriched culture with a wide range of new tools for understanding the world and creating material goods. One of the areas in which humanity has achieved significant success, and which may have the greatest impact on civilization, is information technology, an important component of which is artificial cognitive systems.

What is currently understood by artificial cognitive systems? The philosophical dictionary defines them as multilevel systems that perform the functions of recognizing and remembering information, making decisions, and storing, explaining, understanding, and producing new knowledge [1]. In current scientific publications, artificial cognitive systems are defined somewhat more narrowly, as artificial intelligence systems that mimic human thinking ("explanatory artificial intelligence" [2]) and can be used as auxiliary tools of cognition. According to the authors, artificial intelligence systems are a special case of artificial cognitive systems, which can differ significantly in their ability to solve intellectual problems. The defining property of artificial cognitive systems is their ability to make decisions: not to relay the decision of a human operator, but to make decisions autonomously, according to changing external conditions and their internal properties, including embedded algorithms. In the context of the proposed account, an artificial cognitive system is both a creative artificial intelligence [3], comparable to a human in solving intellectual tasks, and an airplane autopilot or an automatic car-parking system.

Artificial cognitive systems are an important element of modern technical civilization. In the short term, their role will grow; in the medium term, they will become the main means of achieving the goal of civilization, which is to meet human needs: biological, social and intellectual (spiritual).

The content of a cognitive system (whether natural or artificial) is consciousness. The authors' research has shown [4] that within the information concept, consciousness can be interpreted as an information environment in which an extended model of reality is implemented. Consciousness is an information system whose elements are information objects that reflect reality in a complex way. Consequently, consciousness is not something material, although it is formed as a result of processes in reality and requires a physical carrier (the central nervous system or an artificial cognitive system). Moreover, in the general case the existence of consciousness does not necessarily imply the presence of self-awareness or subjectivity.

Approaches to the definition of natural and artificial cognitive systems are fundamentally different. This is due to the difference in the tasks being solved: analysis of the existing system – in the case of natural cognitive systems; synthesis of a new system – in the case of artificial cognitive systems. The analysis of the existing system is a solution to a direct problem, the synthesis of a new system with specified parameters is an inverse problem [5].

For a natural cognitive system in the form of human intelligence, the solution of the direct problem looks like this: a set of neurons connected into a neural network (in the central nervous system), under certain conditions (the human sensory system, external influences, etc.), organizes itself and forms an integral result in the form of an element of consciousness (an information object). This sequence of formation of information objects cannot be determined in detail; however, the currently available (incomplete) knowledge in neurophysiology allows us to assess the described mechanism as qualitatively corresponding to reality [6].

For an artificial cognitive system, the solution of the inverse problem looks like this: for a certain information object in consciousness, it is necessary to find an appropriate technical implementation that generates it; for complex artificial cognitive systems, this technical implementation includes determining the parameters of a neural network, data processing algorithms, and much more. In most cases such an inverse problem cannot be solved directly. The only possibility is to "select" a solution by varying parameters, i.e., by repeatedly solving the direct problem until the result corresponds to the required one. To do this, it is necessary to represent the problem in the form of a virtual model, in which information objects (not necessarily correlated with reality) are used instead of real objects.
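The "selection" procedure described here can be sketched as a simple random search, in which a direct problem (a hypothetical toy model below, not the authors' system) is solved repeatedly with varied parameters until the result matches the required one:

```python
import random

def direct_problem(params):
    # Hypothetical direct model: maps system parameters to an observable result.
    a, b = params
    return 2.0 * a + b

def solve_inverse(target, tolerance=1e-3, max_trials=100_000):
    """'Select' parameters by repeatedly solving the direct problem
    until the result is close enough to the required one."""
    best_params, best_error = None, float("inf")
    for _ in range(max_trials):
        candidate = (random.uniform(-10, 10), random.uniform(-10, 10))
        error = abs(direct_problem(candidate) - target)
        if error < best_error:
            best_params, best_error = candidate, error
        if best_error < tolerance:
            break
    return best_params, best_error

params, err = solve_inverse(target=7.0)
```

For a real artificial cognitive system the direct model is far more complex (a full neural-network evaluation), but the logic of the inverse problem remains the same: vary, evaluate, keep the best.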

The mechanism of determining the state of an artificial cognitive system by varying the parameters of a virtual model gives its formation an evolutionary character. As a result, it is governed by the principles of synergetics – the self-organization of an open system. At the same time, the preservation of the system is ensured on the basis of stable disequilibrium.

The complex of issues determining the sustainable existence of artificial cognitive systems includes the conditions for realizing nonequilibrium stability, the options for synthesizing an artificial cognitive system, and the synergetics of the consciousness formed on its basis. These three issues are considered in this article. This will allow us to form a philosophical basis for further research on artificial cognitive systems whose synergetic capabilities approach those of natural ones.

 

Nonequilibrium stability of systems

Most systems that are the subject of analysis in the various specific sciences (physics, chemistry, engineering, and others) ensure their stability by balancing opposite processes (expansion and compression, destruction and unification, heating and cooling, etc.). Studies of "living" and open dynamic systems have shown that a fundamentally different mechanism of ensuring stability is possible. For a "living" system, a state of equilibrium (rest, absence of change) means death – the end of existence and the loss of stability: a dead living being can no longer be preserved, the processes maintaining stability stop working, and decomposition begins. To construct "living" systems, approaches based on the principle of stable disequilibrium and the concept of dynamic kinetic stability are used.

The principle of stable disequilibrium was first formulated by Erwin Bauer [7, p. 32]: "All and only living systems are never in equilibrium and constantly perform work, at the expense of their free energy, against the equilibrium required by the laws of physics and chemistry under the existing external conditions." Further studies of systems with stable disequilibrium have shown that such systems are not necessarily biological (i.e., actually alive): they may also be chemical (their study is the subject of the nonequilibrium thermodynamics of P. Glansdorff and I. Prigogine [8]), economic [9], and other open systems [10].

One of the practical mechanisms ensuring the stable disequilibrium of complex "living" systems is dynamic kinetic stability (DKS): "...this type of stability, applicable exclusively to constantly replicating systems, whether chemical or biological, follows directly from the powerful kinetic nature and inherent instability of the replication process..." [11]; "...the concept of DKS is completely different from the usual type of stability in nature – thermodynamic stability..." [ibid.]; "...to observe the specific behavior of replicating systems, it is necessary to constantly maintain conditions far from equilibrium..." [12].

To understand, within the framework of philosophy, the nature of stable disequilibrium, let us ask the question: in what case is the stability of a system based on balance and equilibrium impossible? There can probably be only one answer: when the content of the system (as an object of cognition) is determined not by its elements but by processes, which in turn require disequilibrium to be maintained (since every process is a change initiated by a difference in the properties of the system).

Why are processes, rather than elements and structures, the main content of "living" systems? Why are elements and structures possible in inanimate nature, while nature, in "breathing life" into a system, simultaneously makes the processes taking place in it the main factor in the system's preservation? It is impossible to give an exhaustive answer to these questions, but a partial answer lies in the contradiction between the evolution of "living" systems and the steady increase of entropy in the world. Evolution corresponds to a local decrease in entropy and therefore, for the most complex dynamical systems (for example, "living" ones), cannot be fixed in the form of stable equilibrium forms. Such "living" systems, subject to the steady increase of entropy, will inevitably collapse if the processes that generate them do not continue. While the processes generated by the disequilibrium in the "living" system go on, it continues to exist. The supersystem of which the "living" system is a part spends its resources on it: the entropy of the supersystem increases, while in the "living" system entropy locally decreases. The process of further local decrease of entropy corresponds to evolutionary development.
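The entropy argument above can be made precise with Prigogine's standard decomposition of the entropy change of an open system into internal production and exchange with the supersystem:

```latex
\frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt}, \qquad \frac{d_i S}{dt} \ge 0 .
```

A local decrease of entropy in the "living" system ($dS/dt < 0$) is therefore possible only when the exchange term $d_e S/dt$ is negative and outweighs the internal production; the exported entropy is taken up by the supersystem, whose own entropy grows.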

In the context of the system–supersystem relationship, it should be noted that the division of systems into "living" and inanimate, like the division into open and closed (isolated) systems, is exclusively epistemological. In reality, all systems are open and dissipative (in the terminology of P. Glansdorff and I. Prigogine) and are determined by processes. The primacy of processes may not be revealed at the level at which the system is studied, in which case the system is correctly described as a set of interrelated elements. However, the primacy of processes in the definition of such a system will necessarily manifest itself at the level of its supersystem, or of the supersystem of that supersystem, and so on. At the same time, every system (except the Universe) exists at the expense of the resources of its supersystem [13]. This means that of the three known variants of self-organization (dissipative, described by the synergetic approach; conservative, corresponding to ordering under equilibrium conditions and used in supramolecular chemistry; and continuous, occurring in a system due to internal work against equilibrium), only dissipative self-organization is genuine. The appearance of the other variants arises from the limited depth of the analysis, which excludes the essential processes determining the changes undergone by the system under study. Taking these processes into account would show that in all cases the systems are open and dissipative.

Consciousness, regardless of the implementation of its carrier (central nervous system, artificial neural network, etc.), is a "living" system determined by the (mental) processes taking place in it. Moreover, not only consciousness but also its physical carrier ensures its existence on the basis of stable disequilibrium. This is true for the natural cognitive system: the central nervous system, formed in humans by the brain and spinal cord, is undoubtedly a "living" system. It is equally true for an artificial cognitive system: a computer, too, is a system in stable disequilibrium rather than a static one – to fulfill its functional purpose, electromagnetic processes must take place in it, fed by an external energy source.

 

Artificial cognitive system

The synthesis of an artificial cognitive system is a complex task whose initial parameters depend on the choice of architecture and the variant of its implementation.

The choice of the architecture of an artificial cognitive system is determined by its preferred implementation on the basis of a neural network. Learning ability, versatility, autonomy, speed, and the other properties necessary for a cognitive system are provided by decentralized control, which is best implemented using an artificial neural network (although this solution is not without alternatives).

The practical implementation of a decentralized system based on a neural network involves the use of an actor or a reactor (relational actor) model [14]. The actor model is based on a software and mathematical representation of the system as a set of actors: autonomous objects with various properties that exchange messages with other actors for the joint control of the system [15, 16]. Actors can represent virtual entities (for example, objects in the information environment, i.e., in consciousness) or have a physical implementation in the form of a processor or another device. In the relational actor model [17, 18], control is exercised not by exchanging messages, as in the actor model, but by reacting to events.
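A minimal sketch of the actor model (all names below are hypothetical, not from the cited works) can be given in a few lines: each actor owns a mailbox and a behaviour, and control is exercised only by exchanging messages:

```python
import queue

class Actor:
    """A minimal actor: a mailbox plus a behaviour applied to each incoming message."""
    def __init__(self, name, behaviour):
        self.name = name
        self.mailbox = queue.Queue()
        self.behaviour = behaviour

    def send(self, message):
        self.mailbox.put(message)

    def process_all(self):
        # Sequential stand-in for the concurrent message loop of a real actor runtime.
        while not self.mailbox.empty():
            self.behaviour(self, self.mailbox.get())

# Hypothetical control chain: a sensor actor forwards a scaled reading
# to a controller actor, which records a control action.
actions = []
controller = Actor("controller", lambda self, msg: actions.append(("act", msg)))
sensor = Actor("sensor", lambda self, msg: controller.send(2 * msg))

sensor.send(21)
sensor.process_all()
controller.process_all()
```

In a reactor (relational actor) model, the same objects would instead subscribe to events rather than address each other with messages.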

As we have already indicated, both a physical (instrumental) and a virtual (simulation) implementation of actors is possible. In the case of artificial cognitive systems of moderate complexity, for example those controlling technological equipment, it is promising to use real physical elements as actors: sensors, converters, computing modules, integrators, etc. In this case, maximum control performance is achieved. As a cognitive system becomes more complex, it becomes less deterministic, which in the context of the synthesis problem means an inevitable transition to using information objects – virtual entities in consciousness – as actors. This, in turn, opens wide opportunities both for varying the parameters of the cognitive system that determine consciousness and for the spontaneous development and complication of consciousness, i.e., for synergetics.

Distinct manifestations of synergetic mechanisms in artificial cognitive systems, similar to those realized in human consciousness, are revealed when training even the simplest artificial intelligence systems – machine learning systems: "neural networks that learn using error backpropagation mechanisms sometimes show surprisingly 'human-like' results, even when it comes to incorrect or erroneous behavior" [19].

The basis for a theoretical definition of the synergetics of artificial cognitive systems can be found in the synergetic interpretation of cognitive activity proposed by H. Haken [20, pp. 243-314], the synergetic approach to the study of complex nonequilibrium systems [21, pp. 36-37], and the hierarchy of instabilities in self-organizing systems and devices [22, pp. 36-38]. Also relevant is the concept of information synergetics proposed by Prof. V.Y. Tsvetkov [23]. The key concept of information synergetics is the information field, in which real objects are reflected as information models and real processes as information relations. Information processes and relations in the information field interact, generating new objects and relations. Information processes, interactions, and relations determine the dynamics and self-organization of an information system, i.e., its synergetics.

The specificity of the existence of consciousness as an information environment (an information field, in the terminology of information synergetics), in which the formation of information objects is not deterministic but occurs by trial and error (as in natural systems), makes a theoretical justification of the synergetics of artificial cognitive systems optional for their practical implementation (which does not call into question the academic significance of such a justification). In particular, an artificial cognitive system in the form of a deep machine learning system implemented on an artificial neural network will, in the learning process, independently identify and generalize complex dependencies between input and output data in the form of the corresponding connection coefficients between neurons.
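This can be illustrated with the smallest possible case: a single linear neuron (a pure-Python sketch, not the authors' system), which, trained only on examples, ends up encoding the hidden dependency y = 3x + 1 in its connection coefficients:

```python
# Training examples generated by a hidden dependency y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]

w, b, rate = 0.0, 0.0, 0.01  # connection coefficient, bias, learning rate

for _ in range(2000):             # gradient descent on squared error
    for x, y in data:
        error = (w * x + b) - y   # prediction error for this example
        w -= rate * error * x     # adjust the connection coefficient
        b -= rate * error         # adjust the bias
# After training, w is close to 3 and b is close to 1: the dependency
# between input and output has been generalized into the coefficients.
```

A deep network does the same thing at scale, with millions of such coefficients and backpropagation distributing the error across layers.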

The synergetic capacity of an artificial cognitive system is determined not by the data processing algorithms embedded in it (such algorithms are necessary, but may themselves be elementary) but by its resources, for example the number of neurons used. Something similar takes place in nature with natural cognitive systems: the capacity for intellectual activity requires a sufficiently large brain with a large number of neurons, auxiliary cells, and other cells. Humans have such a brain; animals have significantly smaller brains and, accordingly, a lower capacity for thought.

 

Synergetics of consciousness

Within the framework of synergetics, a system is called self-organizing "if it acquires some kind of spatial, temporal or functional structure without specific external influence" [21, p. 34]. This is due to the "specific coordination of individual parts of the system" [ibid., p. 52]. What is the reason for this coordination?

The phenomenon of coordination in open systems is quite common. It represents a realization of the extreme principle, according to which a system behaves in such a way that some quantity characterizing its activity takes an extreme (minimum or maximum) possible value. Special cases of the extreme principle include the principle of least action, the Le Chatelier–Braun principle, etc. Studies show the connection of the extreme principle with the law of excessive reaction of supersystems, according to which "the stability of an open system is ensured by the excessive reaction of the supersystem to the activity of this system" [13]. To consciousness, which is an open system with nonequilibrium stability that maintains its stability at the expense of the resources of its supersystem, the law of excessive reaction of the supersystem is fully applicable.

The practical realization of this coordination in the case of consciousness is a step-by-step self-organization of the information environment. The process of formation and development of consciousness (as an information environment with nonequilibrium stability) in a cognitive system can be represented as a two-level phenomenon. At the level of the physical carrier of consciousness, significant data are recorded (in the form of changes in its electromagnetic or other properties), fixing intermediate, differing states of consciousness. Thinking, in turn, is implemented at the virtual level (as an information process): it is the process of varying both the data from the carrier of consciousness and the data born in the process of thinking itself (similar to operations with data in the RAM of a computer). If significant data are formed in the process of thinking, they are fixed in the carrier of consciousness, transferring it to a new state.
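The two-level scheme described above can be sketched as follows (all names and the quality measure are hypothetical illustrations): a persistent "carrier" state changes only when the volatile variation process of "thinking" produces a significantly better result:

```python
import random

random.seed(0)

def model_quality(state):
    # Hypothetical measure of how well a state models 'reality' (ideal value 5.0).
    return -abs(state - 5.0)

carrier_state = 0.0                      # level 1: data fixed in the physical carrier
for _ in range(1000):
    # level 2: 'thinking' varies the data without touching the carrier
    trial = carrier_state + random.gauss(0.0, 0.5)
    if model_quality(trial) > model_quality(carrier_state):
        carrier_state = trial            # a significant result is fixed in the carrier
```

Only improvements are committed to the carrier, so its state moves stepwise toward a better model of reality, while the variation itself lives entirely at the volatile level.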

The initiation of self-organization of the information environment is carried out by the process of thinking, which, owing to the nonequilibrium stability of consciousness, does not stop. The process of thinking can be interpreted as the process of varying the parameters of the system in search of those suitable for extended modeling of reality. Thinking is activated by deviations of the system from equilibrium due to permanent fluctuations of its state. In some cases these fluctuations lead to changes in the system that become fixed, which corresponds to the transition of the system to a new state. I. Prigogine associated such transitions with disturbances of stability caused by fluctuations near bifurcation points [24, p. 115].

 

Conclusions

Let us summarize the research conducted in this article:

1. The concept of artificial cognitive systems should be significantly expanded to include any multilevel systems that perform the functions of recognizing and remembering information, making decisions, and storing, explaining, understanding, and producing new knowledge. The defining property of artificial cognitive systems is their ability to make decisions. An important special case of an artificial cognitive system is creative artificial intelligence.

2. For a complex artificial cognitive system, the only way to realize consciousness is to "select" a solution by varying parameters, i.e., by repeatedly solving the direct problem until the result corresponds to the required one.

3. For "living" systems, i.e., open dynamic systems whose state is determined by the processes occurring in them, mechanisms of stable disequilibrium are realized. Consciousness is a "living" system determined by the mental processes taking place in it.

4. The preferred implementation of an artificial cognitive system to ensure decentralization and speed is a system based on the use of artificial neural networks, organized according to an actor or reactor model.

5. Currently, there is significant theoretical groundwork in the field of synergetic interpretation of the self-organization of complex nonlinear nonequilibrium systems, including information and cognitive ones. The further development of theoretical knowledge in this field is of great academic interest. At the same time, the effectiveness of the practical implementation of artificial cognitive systems is determined not by the complexity of the algorithms used in them, but by the available resources (artificial neurons and auxiliary elements).

6. The phenomenon of coordination of the individual parts of a system (including consciousness) is a special case of the realization of the extreme principle, which in turn follows from the law of excessive reaction of supersystems, regulating the existence of dissipative systems at the expense of the resources of the supersystem.

References
1. Philosophy: Encyclopedic Dictionary. (2004). Edited by A.A. Ivin. Moscow: Gardariki.
2. Nirenburg, S. (2015). Cognitive Systems as Explanatory Artificial Intelligence. In: Gala, N., Rapp, R., Bel-Enguix, G. (Eds.). Language Production, Cognition, and the Lexicon. Text, Speech and Language Technology, 48, 37-49.
3. Gribkov, A.A., & Zelenskij, A.A. (2023). General theory of systems and creative artificial intelligence. Philosophy and Culture, 11, 32-44.
4. Gribkov, A.A., & Zelenskij, A.A. (2023). Definition of consciousness, self-consciousness and subjectness within the information concept. Philosophy and Culture, 12, 1-14.
5. Makarov, A.D., & Shajdarov, O.V. (2021). Algorithm of the direct and inverse problem solution in the context of the comparative analysis and synthesis methods. Regional aspects of management, economy and law of the North-West Federal District of Russia, 52, 61-75.
6. Sergin, V.Y. (2010). Consciousness and thinking: nature and neural mechanisms. Open Education, 6, 119-132.
7. Bauer, E.S. (1935). Theoretical Biology. Moscow–Leningrad: VIEM.
8. Glansdorff, P., & Prigogine, I. (1973). Thermodynamic Theory of Structure, Stability and Fluctuations. Moscow: Mir.
9. Chuprov, S.V. (2006). Unstable equilibrium and stable disequilibrium of the economic system. From N.D. Kondratiev's views to the modern paradigm. ENSR, 3(34), 112-120.
10. Klimontovich, Y.L. (1996). Criteria of the relative degree of ordering of open systems. Uspekhi Fizicheskikh Nauk (UFN), 11, 1231-1243.
11. Pross, A., & Pascal, R. (2013). The origin of life: what we know, what we can know and what we will never know. Open Biology, 3, 120190.
12. Pascal, R., Pross, A., & Sutherland, J.D. (2013). Towards an evolutionary theory of the origin of life based on kinetics and thermodynamics. Open Biology, 11, 30156.
13. Gribkov, A.A. (2023). The Law of Overreaction of Supersystems and Extreme Principle. Society: Philosophy, History, Culture, 10, 25-30.
14. Zelenskij, A.A., & Gribkov, A.A. (2024). Actor modeling of the real-time cognitive systems: ontological substantiation and program-mathematical realization. Philosophical Thought, 1, 1-12.
15. Burgin, M. (2017). Systems, Actors and Agents: Operation in a multicomponent environment. Retrieved from arXiv:1711.08319.
16. Rinaldi, L., Torquati, M., Mencagli, G., Danelutto, M., & Menga, T. (2019). Accelerating Actor-based Applications with Parallel Patterns. 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, 140-147.
17. Shah, V., & Vaz Salles, M.A. (2018). Reactors: A case for predictable, virtualized actor database systems. International Conference on Management of Data, 259-274.
18. Lohstroh, M., Menard S., Bateni S., & Lee E. (2021). Toward a Lingua Franca for Deterministic Concurrent Systems. ACM Transactions on Embedded Computing Systems, 4, 1-27.
19. Ushakov, D.V., & Valueva, E.A. (2022). Challenges of artificial intelligence for psychology / Human being and artificial intelligence systems. Edited by V.A. Lektorsky. SPb.: Publishing house "Legal Center".
20. Haken, H. (2001). Principles of Brain Functioning: A Synergetic Approach to Brain Activity, Behavior, and Cognition. Moscow: PER SE, pp. 243-314.

21. Haken, H. (2014). Information and Self-Organization: A Macroscopic Approach to Complex Systems. Moscow: URSS: LENAND, pp. 36-37.

22. Haken, H. (1985). Synergetics: Hierarchies of Instabilities in Self-Organizing Systems and Devices. Moscow: Mir, pp. 36-38.
23. Tsvetkov, V.Y. (2021). Information Synergetics. Educational Resources and Technologies, 2(35), 72-78.
24. Prigogine, I. (1985). From Being to Becoming: Time and Complexity in the Physical Sciences. Moscow: Nauka.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The reviewed article is devoted to the analysis of one of the components of the problem of creating artificial cognitive systems. According to the results of the study, the author proposes to expand this concept to include "any multi-level systems that perform the functions of recognizing and remembering information, making decisions, storing, explaining, understanding and producing new knowledge." At the same time, the author also gives a very broad understanding of "consciousness", presenting it as "an information environment in which an expanded model of reality is implemented" (it seems, however, that this working definition does not reflect "subjectivity", so important for understanding the activity of "living consciousness"). In his opinion, an artificial cognitive system is able to implement actions similar to "living consciousness" based on the "selection" of a solution carried out under conditions of "varying parameters", "multiple solutions to a direct task until the result corresponds to the required one." The reviewer considers it possible to present these provisions to the reader, realizing at the same time that they are debatable, and it is hardly possible to say with confidence that the reviewed article provides sufficient justification for their adoption. Nevertheless, the content of the article may well be considered as a basis for conducting a discussion on the stated topic. It seems somewhat strange that the relatively small "body of the article" serves as the basis for the author to formulate very numerous and "strong" (meaningfully loaded) conclusions. Upon deeper acquaintance with the text, it seems that each of these two parts of it "lives its own life", in any case, it is difficult to agree that all the proposed conclusions are sufficiently justified by the previous presentation. 
It seems that in the time remaining before publication, the author could either expand and concretize the main part of the text, or shorten or combine conclusions similar in meaning in order to avoid the impression of dissonance between the two parts of the narrative. In any case, the first provision of the conclusions should certainly be removed, this is a banal statement that does not need scientific justification. The third provision repeats the definition to which the author refers in the main text, so that it can also be removed without prejudice to the content of the article. Finally, the ninth position seems to be very "speculative" in nature, such statements can hardly be proved at all within the boundaries of scientific discourse. There are also some "technical" errors in the text, mainly of a stylistic nature: "... which is capable of ... is the field of information technology, an important component of which ...", "as a result, consciousness is not something real, although it is formed as a result ...", "... is defined quite simplistically" ("sufficiency" cannot be attributed to negative characteristics, and "simplification" is one of such characteristics), etc. Despite the comments made, which the author can take into account in a working order, I consider it possible to recommend publishing an article in a scientific journal.