
Philosophical Thought

Formulation of the problem and definition of approaches to building semantic knowledge models for artificial intelligence.

Gribkov Andrei Armovich

ORCID: 0000-0002-9734-105X

Doctor of Technical Sciences

Leading researcher; Scientific and production complex 'Technological Center'

124498, Russia, Moscow, Shokin square, 1, building 7

andarmo@yandex.ru
Zelenskii Aleksandr Aleksandrovich

ORCID: 0000-0002-3464-538X

PhD in Technical Sciences

Leading researcher; Scientific and production complex 'Technological Center'

124498, Russia, Moscow, Shokin square, 1, building 7

zelenskyaa@gmail.com

DOI: 10.25136/2409-8728.2025.5.74407

EDN: GHJTVU

Received: 08-05-2025

Published: 15-05-2025


Abstract: The article examines the issues related to the creation of semantic models of knowledge that can be used to endow artificial intelligence systems with the ability to understand the meaning of text in natural or any other language. Possible means for constructing semantic models of knowledge include the mechanism of multi-system integration of knowledge developed by the authors earlier, formal ontologies, and techniques of understanding meaning that have emerged within the framework of philological hermeneutics. Significant components of the presented study include an examination of the currently used language models of artificial intelligence, a new approach to the conceptualization of knowledge through its generalization in the form of open models, an assessment of the genesis and prospects of teleological and axiological interpretations of meaning for natural and artificial cognitive systems. The methodological basis of the presented study consists of the authors’ developments in the field of systems analysis, well-known analytical methods adopted within hermeneutics, structuralism, classical epistemology, formal ontology theory, and linguistic and language modeling. The scientific novelty of this research lies in the determination of the necessary tools for creating semantic models that generalize knowledge. The mentioned tools include: multi-system integration of knowledge based on the integration of the subject of cognition into multiple systems with subsequent generalization of the patterns identified in these systems and their translation for solving tasks of understanding and creativity; formal ontologies that implement the description of knowledge from a specific domain in the form of conceptual schemes, taking into account existing rules and relationships between elements, allowing automatic extraction of knowledge; and a wide variety of hermeneutic techniques for understanding meanings. 
Objective limitations are noted in the use of value prioritization in the understanding of meanings by artificial cognitive systems, which lack subjectivity. Hermeneutic techniques for understanding the meaning of a text also show certain limitations when applied to artificial cognitive systems. This is related to the impossibility of full reflection without the feelings, emotions, and desires generated by needs, which also initiate subjectivity.


Keywords:

semantic model, knowledge, artificial intelligence, cognitive system, multi-system integration of knowledge, patterns, formal ontologies, value prioritization, needs, subjectivity


Introduction

Over the last decade, artificial intelligence systems have firmly entered our daily lives. The capabilities of modern publicly available implementations of artificial intelligence expand every year and already include recognition of text and human speech, generation of text on any given topic, composition of music, etc.

A more detailed and rigorous assessment of existing versions of artificial intelligence paints a less idyllic picture. Despite significant achievements, artificial intelligence remains extremely limited and not qualitatively different from the machine learning systems that form its basis.

The information base of implemented artificial intelligence systems consists of various language models. The best known of these are NLP (natural language processing) models, LLMs (large language models), and LCMs (large concept models), with LLM and LCM models using NLP models as one of their basic structural elements. NLP models, in turn, include linguistic models [1], which represent a practical implementation of the ideas of structuralism [2, pp. 72-83].

NLP models [3] serve the purpose of teaching machines to read, understand, interpret, and respond to human language. The main NLP tools are syntactic analysis of sentences, semantic analysis of text, and sentiment analysis algorithms that evaluate the emotions and opinions expressed in a text.
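The simplest of these tools, sentiment analysis, can in its most basic form be reduced to scoring a text against a weighted lexicon. The following sketch is purely illustrative and is not taken from any of the systems discussed in the article; the lexicon, words, and weights are invented for the example, and production systems use trained models rather than hand-written dictionaries.

```python
# Illustrative lexicon-based sentiment scoring: each known word carries
# a hand-assigned weight, and the text is scored by the average weight
# of the known words it contains.
POSITIVE = {"good": 1.0, "excellent": 2.0, "useful": 1.0}
NEGATIVE = {"bad": -1.0, "limited": -0.5, "error": -1.5}
LEXICON = {**POSITIVE, **NEGATIVE}

def sentiment(text: str) -> float:
    """Average the sentiment weights of the known words in the text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment("The results are good and useful"))    # positive score
print(sentiment("The model remains bad and limited"))  # negative score
```

A text with no lexicon words scores exactly zero, which is the crudest possible stand-in for a "neutral" judgment; this is precisely the kind of limitation that motivates the learned models discussed below.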

LLM models [4] serve to understand and generate text. The main tools of LLM models are the breakdown of text into tokens (words and parts of words), the representation of tokens in a form that describes their semantic content and defines their relationships with other tokens, the analysis of the relevance and significance of words in relation to one another, etc. The formation of an LLM model involves mandatory pre-training of the artificial intelligence (AI), in the form of teaching it grammar and methods of processing facts, as well as subsequent fine-tuning with the involvement of human experts, who supplement the AI's knowledge in segments where there are gaps in knowledge, errors of interpretation, or "hallucinations" [5].
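Two of these building blocks, segmentation of text into subword tokens and scoring how strongly tokens relate to one another, can be sketched in miniature. The vocabulary and the two-dimensional vectors below are invented for illustration only; real LLMs learn subword vocabularies of tens of thousands of units and high-dimensional embeddings from data.

```python
# Toy subword vocabulary; real models derive one automatically
# (e.g., by byte-pair encoding over a large corpus).
VOCAB = {"under", "stand", "ing", "text"}

def tokenize(word: str) -> list:
    """Greedy longest-match segmentation into known subword units."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

# Toy 2-d token vectors; in a real model these are learned embeddings.
EMB = {"under": [1.0, 0.2], "stand": [0.9, 0.4], "ing": [0.1, 1.0]}

def relevance(t1: str, t2: str) -> float:
    """Dot-product score of how strongly two tokens relate to each other."""
    a, b = EMB[t1], EMB[t2]
    return sum(x * y for x, y in zip(a, b))

print(tokenize("understanding"))  # ['under', 'stand', 'ing']
```

In this toy setup "under" scores as more related to "stand" than to "ing", only because the invented vectors say so; in an actual LLM such relevance scores are computed by attention over learned representations.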

LCM [6] is a type of language model that processes language at the conceptual level rather than analyzing individual words. An LCM model interprets semantic representations that correspond to whole sentences or coherent ideas, which makes it possible to take into account the broader meaning of language rather than just the lexical construction of sentences.
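The contrast with word-level processing can be shown schematically: a concept-level model operates on one vector per sentence and compares whole ideas. In this deliberately simplified sketch the word vectors are invented and a sentence vector is obtained by averaging them; actual concept models learn sentence-level representations directly rather than averaging word vectors.

```python
import math

# Invented 2-d word vectors for the illustration only.
WORD_VEC = {
    "cats": [1.0, 0.1], "dogs": [0.9, 0.2], "sleep": [0.2, 1.0],
    "stocks": [-1.0, 0.3], "fell": [-0.8, 0.4],
}

def sentence_vector(sentence: str) -> list:
    """Average the vectors of known words into one sentence-level vector."""
    vecs = [WORD_VEC[w] for w in sentence.lower().split() if w in WORD_VEC]
    n = len(vecs) or 1
    return [sum(v[i] for v in vecs) / n for i in range(2)]

def cosine(a, b) -> float:
    """Cosine similarity between two 2-d vectors."""
    na, nb = math.hypot(*a), math.hypot(*b)
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

s1 = sentence_vector("cats sleep")
s2 = sentence_vector("dogs sleep")
s3 = sentence_vector("stocks fell")
# Sentences expressing the same idea end up closer than unrelated ones:
print(cosine(s1, s2) > cosine(s1, s3))  # True
```

The point of the sketch is the unit of comparison: the model never asks whether "cats" and "dogs" are the same word, only whether the two sentence-level representations express similar ideas.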

A common disadvantage of all language models is the confinement of the resulting knowledge to linguistic discourse: an artificial intelligence (machine learning) system built on the basis of NLP (MonkeyLearn, MindMeld, Amazon Comprehend, GPT-3, GPT-4, etc.), LLM (ChatGPT, Gemini, etc.), or LCM (Meta[1]) models forms its system of knowledge solely through the interpretation of texts and has no tools for relating that knowledge to the real world. The next step in the development of artificial intelligence systems, according to most experts in the field of AI, will be the formation of semantic models, which will inevitably rely on language models but will formalize the relationship between the linguistic description of objects and processes and their meaning.

In the context of defining approaches to the construction of semantic models, several key tasks are formulated, to which this article is devoted: disclosing the concept of "meaning" and defining formal means of its identification and description; studying the formation of meanings using the example of human consciousness and describing the identified mechanisms within the framework of a systems approach; considering the commonality of the mechanisms of comprehension and creativity and incorporating the mechanism of multi-system knowledge integration, previously described by the authors, into a systematic semantic description of objects and processes; and assessing the possibility of using ontologies and hermeneutic techniques of understanding meaning for semantic modeling.

Understanding knowledge based on open models

The answer to the question "what is meaning?" is not as trivial as it may seem at first glance. According to the authors, it is adequate to define meaning as the essential content of a particular expression of a language (sign, word, sentence, text). That is, the concept of meaning is tied to language. Therefore, it is logical to focus on achieving an understanding of meaning based on the use of language models. On the other hand, the concept of "meaning" is related to cognition – in the process of cognition, the understanding of the meaning of knowledge is carried out.

What is required in order to understand the meaning of knowledge? In order to answer this question, it is necessary to understand: what is knowledge? Knowledge is not the truth, not a reliable representation of reality, but a set of models, each of which is local (limited in scope of description) and limited in accuracy of correspondence to the object of knowledge.

Is all knowledge meaningful? Absolutely not. It is likely that for the main part of the objects and processes in the knowledge system, the internal content is unknown or only partially known. The boundary between meaningful and non-meaningful knowledge runs through the division of models representing this knowledge into open and closed ones. Previously, these models were defined as follows [7, pp. 18-20]: "By 'closed' we will mean models formed on the basis of empirical knowledge in a limited area of knowledge (for example, a certain range of changes in the parameter under study), and inconsistent with reality outside this area. By 'open' – models that turn out to be applicable outside the field of cognition on the basis of which the model was created."

In order for a model to be open, it must not only formally (in particular, quantitatively) correspond to the object or process, but must also be reliable. Previous studies have made it possible to formulate five rules for the formation of reliable knowledge [7, pp. 195-201]: consistency, ontology, coherence, isomorphism, and completeness. Of these rules, understanding knowledge is most facilitated by identifying its compliance with the rules of ontology and isomorphism. According to the rule of ontology, "the formation of a reliable element of the model of the universe requires ensuring its compliance with a priori knowledge, or determining the evolutionary relationships of this element with less complex elements for which this correspondence is ensured," and according to the rule of isomorphism, "the defined element or set of elements of the model of the universe must correspond to known patterns." Let us recall that patterns are recurring schemes of forms and relationships of objects, repeated at various levels of the organization of the universe and in various subject areas.

The rule of ontology determines the correspondence of the models formed within the framework of knowledge systems to reality. If such compliance is ensured, then the model will obviously be open due to its reliability. In practice, ensuring the ontology of knowledge is in most cases unattainable: the logical chains that need to be built from a priori knowledge to specific knowledge related to complex objects and processes turn out to be too long to track and justify.

The isomorphism rule is more accessible to use. There is no need to prove or argue anything: the existence of isomorphism of forms and laws in the universe is an indisputable empirical fact, a practical realization of the integrity of the universe. The identification of patterns of forms and laws in the generated model is a confirmation of the correctness of understanding of the object of cognition. Through these revealed patterns, the meaning of knowledge is disclosed, which consists in integrating the cognized object (in the form of a model) into the world system, with the definition of its place and role. Patterns determine the essence (internal content) of a model that generalizes knowledge, and the essence, in turn, indicates the place that the model occupies in the world system.

So, in order to comprehend knowledge, it must be presented in the form of an open model based on patterns of forms and laws, as well as (to the extent possible) verified in relation to existing a priori knowledge. The latter requirement is largely satisfied when using high-level universal patterns for describing objects [7, pp. 211-217], determined based on a priori knowledge (basic laws of being).

Multi-system integration of knowledge for understanding and creativity

Previous research has made it possible to interpret creativity as the implementation of the idea of the integrity of the world: on the one hand, the means for realizing creativity are borrowed from the forms and laws of the surrounding world; on the other hand, creativity is used as a tool for creating a holistic view of the world. As a result, the representation of the integrity of the world is an imperative (a requirement for form and content) of creativity [8].

Understanding knowledge is a creative process that results in the integration of knowledge into the idea of the integrity of the world. As we have already found out, one of the key forms of this integration is to identify patterns of forms and laws involved in models that summarize knowledge about objects and processes in the real world.

Comprehension of existing (previously created) knowledge and creativity (creation of new knowledge) are based on a common knowledge integration mechanism that ensures the integrity of the representation of the world, called "multisystem knowledge integration" [9].

The mechanism of multi-system integration of knowledge, inherent in humans by nature, is necessary for intellectual activity. The functioning of this mechanism is based on the integration of human consciousness into the many systems to which a person belongs, with which he is connected, or with which he interacts. Such systems include the physical world, biological and ecological systems, society, including the system of economic relations, the intellectual spheres related to human culture and spiritual life, etc. In each of the systems into which human consciousness is integrated, knowledge is collected, systematized, and generalized in the form of patterns of forms and laws. These generalized patterns (both high-level patterns, logically correlated with a priori knowledge, and secondary ones, whose genesis is not deterministic) are used by a person in the course of intellectual activity for creativity (by translating patterns from one system to another) and for understanding knowledge (by correlating the patterns identified in its generalizing models with the patterns of the systems into which human consciousness is integrated).

The mechanism of multi-system integration of knowledge is universal and applicable to any cognitive system endowed with (or being endowed with) intelligence. The defining vector of the development of artificial intelligence systems is the implementation of a multi-system knowledge integration mechanism in them. This task has not yet been solved. One of the main components of its solution is the development of a formal theory of multi-system knowledge integration, on which the authors are currently working. This theory should lay the necessary foundation for formalizing the representation of patterns and their identification, systematization, and comparison. It is also necessary to define a set of necessary and sufficient tools for identifying the objects and processes that need to be understood or compared with analogues (using the same patterns).

Teleological and axiological interpretations of meaning

Is "meaning" an absolute category that answers the question of the essence of the knowledge generalization model? Or maybe comprehension of knowledge presupposes awareness of its purpose and/or value? In order to answer these questions, we first state the relationship between the two designated interpretations of comprehension: teleological [10, pp. 45-52], which assumes a leading role in determining the meaning of knowledge and its purpose, and axiological [10, pp. 22-32], which assumes a leading role in determining the meaning of knowledge and its value. Within the framework of epistemological analysis, the teleological interpretation of meaning inevitably approaches the axiological one: understanding the purpose of a model integrating knowledge means its value qualification. As Aristotle wrote: "the good is the goal of all creation and movement" [11, p. 70]. The value qualification of knowledge is based on the definition of its usefulness ("good" in Aristotle's terminology), which in turn depends on the ability of this knowledge to satisfy needs. In the context of human civilization, we are talking about the needs of a person or society as a whole.

The perception of reality within human consciousness is refracted through the prism of the axiological interpretation of meanings. On the one hand, this is a negative phenomenon, since the world in the subject's mind does not correspond to reality; on the other hand, perception remains generally adequate while becoming more functional [12]. Focusing on the goal (maximizing value and, ultimately, the best satisfaction of human needs) stimulates the construction of generalized knowledge models that are increasingly functional and, at the same time, meaningful as part of a holistic perception of the world. This is the mechanism of meaning formation initiated by the axiological interpretation: from a disparate knowledge environment, gradient-driven models of generalized knowledge are formed, the comprehension of which requires not the disclosure of their essence (internal content) but the qualification of their value (usefulness) for satisfying people's needs. At the same time, as V. Frankl, the founder of logotherapy (from Ancient Greek λόγος: meaning, reason), wrote: "Meaning must be found, but cannot be created ... Meaning not only must but can be found, and in the search for meaning a person is guided by his conscience. In short, conscience is an organ of meaning. It can be defined as the ability to discover the unique and singular meaning that lies in any situation" [13, pp. 37-38].

Any object or process, becoming an object of cognition, can be represented through a wide variety of models that summarize knowledge about it and integrate this knowledge into the general system of knowledge about the universe. The parameters and evaluation criteria underlying the model cannot be arbitrary (they are limited by the reliability of the generated model), but they can vary significantly. In particular, prioritizing the criterion of value (utility) does not contradict the requirement to ensure the reliability of the model.

The use of a value-based approach in the multi-system integration of knowledge, which entails the inevitable redefinition of generalized knowledge models, is natural for human consciousness, which is unable to fully free itself from subjectivity in its perception of the world, of its place in it, and of the forms and means of interaction with it. Whether such a distortion is necessary for artificial cognitive systems is a question that should probably be answered in the negative. This is due to an irremediable difference between artificial and natural cognitive systems: the presence in the latter of subjectivity (human or animal consciousness), initiated by needs. According to research [14], subjectivity is not necessary to endow artificial cognitive systems with the ability to perform intellectual activity. At the same time, without subjectivity and needs, an artificial cognitive system cannot enhance the functionality of its understanding of generalized knowledge models by prioritizing a value-based approach built on an understanding of the values inherent in humans and other natural cognitive systems.

Semantic models and formal ontologies

The effectiveness of semantic modeling for the intellectualization of artificial cognitive systems largely depends on the chosen format for representing semantic models. One possible format, actively developed in recent years, is ontologies [15], which implement a formal description of knowledge from a given subject area in the form of conceptual schemes, taking into account the existing rules and relationships between elements and allowing automatic knowledge extraction. A promising field of application of ontologies is the extraction of meaning from text in a natural language [16].

Ontologies are constructed using [17]: instances (individuals), the low-level components subject to classification; concepts (classes), which generalize instances or other concepts (classes); attributes, which characterize instances or concepts (classes); and relations between instances, determined by their attributes. Within the structure of ontologies, taxonomies are built: hierarchies of categorized terms. Currently, there are many formal languages used to encode ontologies: CASL (Common Algebraic Specification Language), CL (Common Logic), DOGMA (Developing Ontology-Grounded Methods and Applications), SADL (Semantic Application Design Language), OWL (Web Ontology Language), KIF (Knowledge Interchange Format), ACL (Agent Communications Language), etc.
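The listed components can be made concrete with a deliberately small sketch: concepts arranged in a taxonomy, instances with attributes, relations between instances, and a query that extracts knowledge automatically by walking the class hierarchy. The domain content here is invented and far simpler than anything expressed in the formal ontology languages just listed.

```python
# A schematic illustration of ontology components:
SUBCLASS_OF = {"dog": "mammal", "mammal": "animal", "bird": "animal"}  # taxonomy
INSTANCE_OF = {"rex": "dog", "tweety": "bird"}                         # instances (individuals)
ATTRIBUTES = {"rex": {"legs": 4}, "tweety": {"legs": 2}}               # attributes of instances
RELATIONS = [("rex", "chases", "tweety")]                              # relations between instances

def is_a(cls: str, ancestor: str) -> bool:
    """Walk the taxonomy upward to test whether cls falls under ancestor."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def instances_of(ancestor: str) -> list:
    """Automatically extract all individuals falling under a concept."""
    return [i for i, c in INSTANCE_OF.items() if is_a(c, ancestor)]

print(instances_of("mammal"))  # ['rex']
print(instances_of("animal"))  # ['rex', 'tweety']
```

The query answers a question never stated explicitly anywhere in the data ("which individuals are animals?"), which is the sense in which a conceptual scheme with rules and relationships permits automatic knowledge extraction.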

The representation of a subject area through ontologies involves a description of all its aspects, including its characteristic objects and subjects of research, the scientific methods applied, the projects carried out, and the results obtained [18]. A necessary stage in the formation of ontologies is the construction of their terminological basis [19]. After appropriate systematization and generalization, a thesaurus of terms for the subject area is formed, which becomes the linguistic basis of the emerging paradigm. The resulting representation of the subject area consists of objects and concepts ordered by classifications and taxonomies, properties (attributes), and relations (links), which are described according to the paradigm established for the subject area.

How are these two approaches related to each other: on the one hand, the approach to understanding knowledge based on multi-system knowledge integration, which rests on identifying patterns of forms and relationships in models of generalized knowledge in limited subject areas; and, on the other hand, the approach based on representing knowledge through ontologies, which formalize the properties and relationships of knowledge elements within the framework of an established paradigm?

Comprehension through formal ontologies serves to identify knowledge models, to "decipher" knowledge presented in natural language or in another form that does not have the necessary formalization for direct knowledge extraction. Understanding knowledge based on multisystem integration of knowledge serves to determine the place and role of the knowledge model in the system of knowledge about the universe, as well as to identify universal means (patterns of forms and laws) that define the model of knowledge about the subject area. These two approaches to understanding knowledge are not alternative, but complementary. In addition, mutual penetration is inevitable between these approaches – the identification of models that generalize knowledge is one of the tasks solved in the multisystem integration of knowledge, and the construction of a terminological base of ontologies requires the use of lexico-semantic patterns, which can later be included in collections of secondary (applied) patterns used to comprehend knowledge.

Understanding in philological hermeneutics

The activity of hermeneutics is realized in the same field of subjective cognition as the teleological or axiological interpretation of comprehension. According to G.I. Bogin, "hermeneutics is an activity, not a science, but scientific developments are possible and even necessary in hermeneutics" [20]. The subjectivity of hermeneutics is an inevitable consequence of the choice of understanding, as opposed to the analysis of objective structures of knowledge.

Understanding in hermeneutics is carried out through the reflection of the subject. Even if, like M. Heidegger [21, pp. 264-370], we define understanding through the existence of Dasein, then in this case the participation of the subject remains – through it the reflection of being is transmitted, which is understanding. At the same time, as H.-G. Gadamer asserts: "Being, which can be understood, is a language ... we are talking not only about the language of art, but also about the language of nature, and in general about a certain language in which things are spoken" [22, pp. 548-549].

A similar definition can be given for philological hermeneutics. G.I. Bogin writes [20]: "The subject of philological hermeneutics is understanding – the discernment and mastering of the ideal presented in textual forms. The texts can be in natural languages or in the 'languages' of other arts."

Philological hermeneutics offers a wide variety of tools and techniques for understanding a text. G.I. Bogin identifies the following main groups, combining 105 techniques [20]: techniques of discernment and construction of meanings (creating a focus of reflection, stretching or categorizing meanings, understanding a scheme of action through semantic (narrative) threads, building up and categorizing predications, regulating expectations of meanings, completing reflection, actualization for connecting new knowledge with what is already understood, etc.); techniques using a reflective bridge (metaphorization; phonetic, intonational, grammatical and other actualizations; binary juxtaposition of text-forming means; references and intertextuality; irony; symmetry, etc.); techniques for joining mixed constructs (sense and meaning, sense and concept, content and sense, association and reflection, etc.); interpretive techniques (restoration of sense by meaning, discernment and definition of an alternative sense, self-determination in the world of perceived senses or in an alternative semantic world, etc.); techniques of transition and substitution (from meaning to sense, from sense to meaning, from sense to concept, from concept to sense, etc.); and exit techniques (to the understanding of sense, to the discernment and awareness of beauty or artistry, to experience or harmony, to the determination of truth, to the formulation of an idea, etc.). The degree of formalization of these techniques varies, but all of them are of undoubted interest for solving the problem of understanding knowledge presented in textual form (or, in hermeneutical terms, for interpreting and understanding a text).

The potential of hermeneutics for solving the problem of understanding in artificial intelligence systems is still difficult to assess. To a large extent, this is due to the absence of subjectivity in artificial cognitive systems, which makes full reflection impossible: reflection includes, along with attention to the content and functions of one's own consciousness, the analysis of feelings, emotions, and desires. Meanwhile, reflection is a key element of most of the techniques of text understanding formulated within the framework of philological hermeneutics. Works devoted to applying the approaches of hermeneutic philosophy to the understanding of textual knowledge in artificial intelligence systems are few and do not provide a complete picture of the existing prospects in this field [23, 24].

Relatively recently, a new term has entered scientific usage: digital hermeneutics [25], the interpretation of texts, digital objects, and technologies using a computer or even artificial intelligence [26]. In this case, however, the problem being solved is the reverse of the one that arises when a cognitive system is endowed with the ability to understand the meaning of models that generalize knowledge (for example, in the form of text). Digital hermeneutics is a case of expanding the possibilities of applying hermeneutics through the use of modern technologies, rather than of using hermeneutics to enhance artificial intelligence (its ability to understand).

Conclusions

Let's summarize the research conducted in the article:

1. In recent years, artificial intelligence systems based on various language models have been actively developed. Despite significant achievements, there are obstacles to further development related to the limitations of language models. It is necessary to move to semantic models that operate not with words or sentences but with meanings.

2. A possible approach to defining the concept of "meaning" is to represent knowledge in the form of generalizing models and draw a boundary between models based on their openness or closeness. In this case, meaningful models will only be open ones that are applicable outside the field of knowledge on the basis of which they were created.

3. The reliability of open models that generalize knowledge is ensured by their isomorphism, i.e., by the identification, in the forms and laws that define them, of widespread patterns indicating the integration of these models into the system of representations of the universe.

4. A key tool for understanding existing knowledge and creating new knowledge (through creativity) is multi-system integration of knowledge, which, by combining knowledge from different subject areas, makes it possible to identify patterns of forms and laws that can subsequently be translated between subject areas, enabling both creation and the solution of intellectual problems.

5. The meaning of knowledge as understood by a person (or another natural cognitive system possessing subjectivity) has pronounced features of subjectivity, which make the teleological and axiological interpretation of meanings inevitable. The resulting prioritization of value (usefulness) can enhance the functionality of perception. For artificial cognitive systems, which do not possess subjectivity and are consequently not capable of qualifying interpretations of meanings according to the criterion of value (usefulness), teleological or axiological interpretation of meanings is impossible, and their semantic models are therefore devoid of subjectivity.

6. Ontologies are a promising tool for semantic modeling: they implement a formal description of knowledge from a given subject area in the form of conceptual schemes, taking into account the existing rules and relationships between elements and allowing automatic knowledge extraction. Formal ontologies and multi-system integration of knowledge are complementary, mutually penetrating approaches to understanding knowledge, and are defined most accurately and fully together.

7. Hermeneutics, an activity aimed at understanding texts in natural and any other languages, has accumulated a significant arsenal of applied tools – techniques for understanding texts. These techniques are obviously effective when used by humans. The possibility of their use for artificial cognitive systems is a matter that requires additional research. It can be assumed that due to the lack of subjectivity, the approaches of hermeneutic philosophy, in which reflection occupies a central place, cannot be fully applied.

[1] Banned in the Russian Federation

References
1. Tulupova, T. A., & Pavlenko, S. A. (2021). Linguistic models-formal methods in linguistics. Modern Innovations, (2), 44-46. EDN: CSXQNP.
2. Ricoeur, P. (2008). Conflict of Interpretations: Essays in Hermeneutics (I. S. Vdovina, Trans.). Academic Project.
3. Khurana, D., Koli, A., Khatter, K., & Singh, S. (2022). Natural language processing: State of the art, current trends and challenges. Multimedia Tools and Applications, 82(3), 3713-3744. https://doi.org/10.1007/s11042-022-13428-4. EDN: OMUYAR.
4. Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., & Gao, J. (2025). Large language models: A survey. arXiv. https://doi.org/10.48550/arXiv.2402.06196.
5. Huang, L., Yu, W., Ma, W., Zhong, W., Feng, Z., Wang, H., Chen, Q., Peng, W., Feng, X., Qin, B., & Liu, T. (2024). A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems, 43(2), Article No. 42, 1-55. https://doi.org/10.1145/3703155. EDN: FHGSXF.
6. LCM team: Barrault, L., Duquenne, P.-A., Elbayad, M., et al. (2024). Large concept models: Language modeling in a sentence representation space. arXiv. https://doi.org/10.48550/arXiv.2412.08821.
7. Gribkov, A. A. (2024). Empirical-metaphysical general theory of systems: Monograph. Publisher not provided.
8. Gribkov, A. A. (2024). Creativity as an implementation of the idea of the integrity of the world. Philosophical Thought, (3), 44-53. https://doi.org/10.25136/2409-8728.2024.3.70034.
9. Gribkov, A. A., & Zelenskii, A. A. (2025). Intelligent cognitive system with multi-system knowledge integration: Feasibility and approaches to formation. Philosophical Thought, (2), 1-11. https://doi.org/10.25136/2409-8728.2025.2.73395.
10. Pivoyev, V. M. (2004). Philosophy of Meaning, or Teleology. PetrSU.
11. Aristotle. (1976). Works in Four Volumes (V. F. Asmus, Ed.). Thought.
12. Dorofeev, Y. V. (2019). On functional foundations of perception and understanding of text. Pedagogical Image, 13(3), 321-332. https://doi.org/10.32343/2409-5052-2019-13-3-321-332.
13. Frankl, V. (1990). Man's Search for Meaning. Progress.
14. Gribkov, A. A., & Zelenskii, A. A. (2023). General systems theory and creative artificial intelligence. Philosophy and Culture, (11), 32-44. https://doi.org/10.7256/2454-0757.2023.11.68986.
15. Smirnov, S. V. (2013). Ontologies as meaningful models. Ontology of Design: Scientific Journal, (2), 12-19. EDN: QICWND.
16. Boguslavsky, I. M., Dikonov, V. G., & Timoshenko, S. P. (2013). Ontology for supporting tasks of extracting meaning from natural language text. In Information Technologies and Systems: Proceedings of the 35th Conference of Young Scientists and Specialists (pp. 152-160).
17. Smith, B. (1998). Basic concepts of formal ontology. In N. Guarino (Ed.), Formal Ontology in Information Systems (pp. 19-28). IOS Press.
18. Zagorulko, Y. A., Sidorova, E. A., Zagorulko, G. B., Akhmadeeva, I. R., & Sery, A. S. (2021). Automation of ontology development for scientific subject areas based on ontology design patterns. Ontology of Design, 11(4), 500-520. https://doi.org/10.18287/2223-9537-2021-11-4-500-520. EDN: EEHSIA.
19. Kononenko, I. S., & Sidorova, E. A. (2022). Methodology for developing lexical-semantic patterns for extracting terminology from scientific subject areas. System Informatics, (20), 25-46.
20. Bogin, G. I. (2001). Gaining the Ability to Understand: Introduction to Philological Hermeneutics. Publisher not provided.
21. Heidegger, M. (2001). Basic Problems of Phenomenology (A. G. Chernyakov, Trans.). Higher Religious-Philosophical School.
22. Gadamer, H.-G. (1988). Truth and Method: Foundations of Philosophical Hermeneutics (B. V. Bessonov, Ed. & Intro.). Progress.
23. Nesterov, A. Y. (2008). The problem of understanding and artificial intelligence. Open Education, (1), 58-63. EDN: KUUNLZ.
24. Liu, T., & Mitcham, C. (2024). Toward practical hermeneutics of fourth paradigm AI for science. Technology and Language, 5(1), 89-105. https://doi.org/10.48417/technolang.2024.01.07. EDN: KBKBRS.
25. Buralkin, M. Y., & Chernenskaya, S. V. (2019). Digital hermeneutics. In Communicative Strategies of the Information Society: Proceedings of the XI International Scientific and Theoretical Conference (pp. 43-45). St. Petersburg Polytechnic University of Peter the Great. EDN: TJVUMH.
26. Chemezova, E. R. (2024). Modern technologies and hermeneutical analysis of poetic text. Pedagogy and Education, (1), 57-66. https://doi.org/10.7256/2454-0676.2024.1.39927.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

In the peer-reviewed article "Problem statement and definition of approaches to the construction of semantic knowledge models for artificial intelligence," the subject of research is the problem of understanding knowledge in artificial intelligence (AI) systems. The main focus is on overcoming the limitations of existing language models, namely natural language processing (NLP), large language models (LLM), and large concept models (LCM), which have a number of significant shortcomings when working with information. The paper uses the following methods: 1) analysis of modern language models (NLP, LLM, LCM), which revealed their limitations; 2) classification of knowledge models into open and closed; 3) integration of philosophical concepts (teleology, axiology) and hermeneutics. The study includes a review of scientific publications by Russian and foreign authors in the fields of AI, philosophy of cognition, and linguistics. The topic of the article is highly relevant in the context of the rapid growth of AI capabilities and the need to overcome the limitations of language models, where the key problem remains the inability of such systems to operate with meanings rather than merely with textual constructions. The article touches upon fundamental issues essential for creating intelligent systems of a new generation. The creation of semantic models, according to the author(s), will significantly expand the functionality of AI, providing a link between language and the real environment. In general, the scientific novelty of the work is determined by the introduction of new concepts of understanding knowledge into scientific circulation. In particular, it should be noted that knowledge models are classified into open (universal, pattern-based) and closed (limited by empirical data).
Special attention should be paid to the claim that the mechanism of understanding existing knowledge and forming new knowledge through the creative process is the multi-system integration of knowledge, which, by combining knowledge from different subject areas, makes it possible to identify patterns of forms and laws that can later be translated between subject areas, enabling the posing and solving of intellectual tasks. An analysis of the possibilities of hermeneutics and ontologies for formalizing semantic models led to the conclusion that both techniques have certain advantages and disadvantages: hermeneutics works effectively when people understand and interpret texts, while ontologies provide the formalization of knowledge and the automation of its processing. Somewhat controversial is the conclusion that, for artificial cognitive systems that do not possess subjectivity and are therefore incapable of categorizing interpretations of meanings by the criterion of value (usefulness), teleological or axiological interpretation of meanings is impossible, and that their semantic models are consequently devoid of subjectivity. Structurally, the work consists of an introduction, a main section devoted to the theoretical aspects of comprehending knowledge, and conclusions. The material is presented consistently and logically, although in places it may seem overloaded with technical details from the standpoint of a philosophical understanding of the problem. The author's style is characterized by clarity of presentation and meets academic standards. Terms with concise formulations are used (for example, multi-system knowledge integration), facilitating the perception of complex concepts. Nevertheless, some passages require careful reading for deep understanding.
The work includes a bibliography of 26 sources covering various aspects of the topic: language models (NLP, LLM, LCM), epistemology, hermeneutics, teleology, axiology, phenomenology, and the philosophy of language. The list is presented in the order of citation, with complete bibliographic descriptions of the sources. The author(s) actively refer to existing theories, which strengthens their argument. The material will be in demand among researchers in the fields of philosophy of cognition, artificial intelligence, and cognitive technologies. The text may be too specialized for a wide audience, but for the target readership it is a valuable source of ideas in the field of AI development. Thus, the article "Problem statement and definition of approaches to the construction of semantic knowledge models for artificial intelligence" has scientific and theoretical significance. The work can be published.