
Historical Informatics

Historian in the World of Neural Networks: The Second Wave of Applying Artificial Intelligence Technologies

Leonid Borodkin

Doctor of History

Corresponding Member of the Russian Academy of Sciences, Professor, Head of the Department for Historical Information Science at Lomonosov Moscow State University (MSU)

119991, Russia, Moscow, Lomonosovsky Prospekt, 27-4

borodkin-izh@mail.ru
DOI: 10.7256/2585-7797.2025.1.74100

EDN: QXYMHF

Received: 08-04-2025

Published: 15-04-2025


Abstract: Over the last decade, artificial intelligence (AI) technologies have become one of the most sought-after areas of scientific and technological development. This process has also impacted historical science, where the first research in this area began in the 1980s (the so-called first wave) – both in our country and abroad. Then came the "AI winter," and at the beginning of the 2010s, the "second wave" of AI emerged. The subject of this article is the new opportunities for applying AI in history and the new problems arising in this process today, when the main focus of AI has shifted to artificial neural networks, machine learning (including deep learning), generative neural networks, large language models, etc. Based on the experience of historians applying AI, the article proposes the following seven directions for such research: recognition of handwritten and old printed texts, their transcription; attribution and dating of texts using AI; typological classification and clustering of data from statistical sources (particularly using fuzzy logic); source criticism tasks, data completion and enrichment, and reconstruction using AI; intelligent search for relevant information, utilizing generative neural networks for this purpose; using generative networks for text processing and analysis; and the use of AI in archives, museums, and other institutions that store cultural heritage. An analysis of the discussion of similar issues organized by the leading American historical journal AHR has been conducted. These are conceptual questions regarding the interaction between humans and machines ("historian in the world of artificial neural networks"), the possibilities for historians to use machine learning technologies (particularly deep learning), various AI tools in historical research, as well as the evolution of AI in the 21st century. Practical aspects were also touched upon, such as the experience of recognizing newspaper texts from past centuries using AI. In conclusion, the article addresses the problems related to the use of generative neural networks by historians.


Keywords:

Artificial intelligence, artificial neural networks, machine learning, deep learning, generative neural networks, image recognition, text attribution, algorithms, data, historical source


One of the topical directions in historians' methodological discussions today concerns the prospects and problems that have emerged as the discipline makes sense of its (still modest) experience of applying artificial intelligence methods and technologies. Modest as that experience is, by the beginning of 2025 at least 50 articles by Russian authors had been published presenting the results of testing these technologies in historical research, and in foreign journals there are noticeably more such publications. Also worth mentioning are the publications whose authors discuss the new methodological and ethical problems arising with the spread of generative artificial neural networks.

It should be noted that the first publications on the application of artificial intelligence (AI) in historical research date back to the late 1980s and early 1990s. In those years historians mainly used expert systems, cognitive methods of historical text analysis, clustering algorithms with elements of learning (including fuzzy logic), and other AI methods. As early as 1987, a section on "Artificial Intelligence and Expert Systems" ran at the II International Conference of the Association "History and Computing" (AHC) in London [1]; the program of the V International AHC Conference (Montpellier, 1990) included a section on "Expert Systems" [2], and a similar section appeared in the program of the International Conference on the Application of Computers in the Humanities and Social Sciences held in Cologne in 1988 [3]. By the end of the 1980s, we had begun to "monitor" publications by foreign historians applying AI [4, pp. 4-8].

In our country's historical science, the first wave of using artificial intelligence methods also dates to the 1980s, when researchers tested cognitive analysis of texts by political figures of the past (for example, Bismarck [5, pp. 149-172]), developed expert systems [6, pp. 8-16], applied multidimensional historical-typological clustering based on fuzzy set theory [7, pp. 391-408], experimented with OCR of old printed texts [8, pp. 139-146], and more. The works mentioned here were presented during the 1980s at the All-Union Seminar on the Application of Quantitative Methods in Historical Research, held regularly at the Historical Faculty of Moscow State University.

Then came the period of the "AI winter," and with the beginning of the second decade of the 21st century the second wave arrived, driven largely by breakthroughs in computing technology, big data, parallel computing, and so on. Within this second wave, the main directions of AI became artificial neural networks, machine learning (including deep learning), generative neural networks, large language models (LLMs), and more. This process has also affected historical research and historical education. Thus, for several years now, all master's students at the Historical Faculty of Moscow State University have taken a semester-long course on "Data Science and Artificial Intelligence," combining lectures and practical seminars. The program of the regular conferences of the Association of Researchers in Historical Informatics has for several years included an "Artificial Intelligence" section, and the list of topics of our journal "Historical Informatics" now includes the section "Artificial Intelligence and Data Science."

This note offers a brief characterization of the "second wave" of AI application in historical research, discussing both the opportunities and the risks of this process. It serves as a kind of preamble to the series of articles in this issue of "Historical Informatics," whose core is the current stage of integrating AI methods and technologies into the theory and practice of historical research.

* * *

Reflecting on the experience of historians using new AI technologies accumulated over the past decade, we can conditionally identify the following seven directions of such research:

1.      Recognition of handwritten and old printed texts, their transcription.

2.      Attribution and dating of texts using AI.

3.      Typological classification and clustering of data from statistical sources (particularly using fuzzy logic; a minimal code sketch follows below).

4.      Source criticism tasks; data completion, enrichment, and reconstruction using AI.

5.      Intelligent search for relevant information, using generative neural networks for this purpose.

6.      Using generative networks for processing and analyzing texts and visual material.

7.      Using AI in archives, museums, and other cultural heritage conservation institutions.

Of course, other classifications can be proposed. By rough estimates, the first direction leads in the number of publications in Russia. The fifth and sixth directions provoke the most discussion (both in Russia and abroad). They are perhaps also developing faster than the others; one could even speak of an "arms race" among the companies building generative neural networks (for example, ChatGPT, DeepSeek, Grok, Gemini, YandexGPT, GigaChat, etc.). This aspect of the development of AI technologies is discussed further below in relation to historical research and education.
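
To make direction 3 concrete, below is a minimal sketch of fuzzy c-means clustering, the kind of soft typological classification referred to above, written in plain numpy. The data matrix, the number of clusters, and the fuzzifier m = 2 are invented for illustration and are not drawn from any of the studies cited in this article.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means. X has shape (n_samples, n_features).
    Returns cluster centers and a membership matrix U of shape
    (n_samples, c), where U[i, k] is the degree (0..1) to which
    sample i belongs to cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance of every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Illustrative data: rows are districts, columns are normalized indicators
# (say, share of hired labor, land rent, machinery per farm).
X = np.array([[0.10, 0.20, 0.15], [0.12, 0.22, 0.10],
              [0.80, 0.75, 0.90], [0.78, 0.80, 0.85],
              [0.45, 0.50, 0.40]])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(U, 2))   # soft memberships instead of a hard partition
```

Unlike hard clustering, the membership matrix records the degree to which each object belongs to every cluster, which is exactly what makes fuzzy typologies attractive for heterogeneous historical data: the fifth "district" above receives mixed membership rather than being forced into one type.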

The "AI Era" has attracted the attention of several authoritative foreign historical journals in recent years. For instance, the leading American historical journal The American Historical Review (AHR) launched a new section in September 2023: AHR History Lab, which discusses new methods and research practices. The material in this section in the September issue of the journal is entirely dedicated to discussing the methodological and epistemological aspects of applying AI in the study of history. The publication of eight articles on the topic of AI in a single issue of this journal deserves our attention. What questions are the authors of these articles discussing at the forum "Artificial Intelligence and the Practice of History," organized by the editorial board of AHR?

The forum opens with an article by R. D. Meadows (the US National Archives and Records Administration, NARA) and J. Sternfeld (National Endowment for the Humanities, NEH) titled "Artificial Intelligence and Historical Practice: A Forum" [9, pp. 1345-1349]. The authors note that in a world increasingly driven by algorithms, historians must adapt to the growing flow of digitized and born-digital materials. Artificial intelligence has already proven its ability to detect patterns and identify themes in large visual and textual data sets. Current research into the capabilities of AI, including work by several participants in this forum, has demonstrated how artificial intelligence reproduces and amplifies racial and gender biases as well as other hidden forms of prejudice.

According to the authors, as more history-oriented resources become available in digital formats, machine learning algorithms are becoming an increasingly sought-after tool in historical research. AI also influences the process of shaping historical consciousness. To understand AI's impact on the field of historical research, the authors suggest that it is often necessary to question fundamental concepts such as truth, evidence, and authenticity.

B. Schmidt, the author of the article "Representation Learning" [10, pp. 1350-1353], believes that AI is already transforming historical research. As machine curation becomes more widespread and complex, historians need a better understanding of how we search for and organize historical and cultural information, in order to reduce the risk of letting algorithms shape our research in opaque and unpredictable ways. To assess this changing landscape, one must specify what is actually new in this field compared to ten years ago. The terms "data science," "neural networks," "artificial intelligence," and "machine learning" are often used interchangeably, but all of them, particularly AI, tend to obscure what algorithms are really doing today. The growing hype around artificial intelligence in the last decade has mainly concerned a specific area that Schmidt calls "representation learning." Representation learning (RL) is a general strategy for transforming any type of digital object into a vector of numbers, i.e., a vector representation of objects such as digitized texts, images, videos, or graphs (essentially, deep learning can be understood as trainable vectorization of complex objects. – LB). Training such models often requires vast numbers of examples, but once trained they can quickly place any new digital object into the same "vector space" to predict some discrete outcome (e.g., what the next word in a sentence is likely to be). However, a vector (a "representation" in RL) is more than just a prediction made by AI. It is the representation of an object in a mathematical space whose structure is shaped in the process of machine learning.
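
Schmidt's "representation learning" can be illustrated in a few lines: a pretrained model maps texts into vectors, and proximity in that vector space stands in for semantic similarity. Below is a minimal sketch assuming the sentence-transformers Python package and its publicly available all-MiniLM-L6-v2 model; the sample "documents" are invented.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# A pretrained encoder: maps any text to a fixed-length numeric vector.
model = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "Decree on the redemption payments of former serfs.",
    "Statute on peasant redemption operations.",
    "Timetable of the Nikolaev railway, summer 1898.",
]
vectors = model.encode(texts)          # shape: (3, embedding_dim)

def cos(a, b):
    """Cosine similarity: close to 1 for 'about the same thing'."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(vectors[0], vectors[1]))     # high: both concern redemption
print(cos(vectors[0], vectors[2]))     # lower: unrelated subject
```

The historian never sees the vectors themselves; what matters, as Schmidt stresses, is that the structure of this space silently shapes which documents a search or recommendation system treats as "similar."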

L. Tilton's article "Relating to Historical Sources" focuses on the role of historical sources as AI is integrated into the expanding practice of historical research [11, pp. 1354-1359]. If the term AI is treated as shorthand for a multitude of methods, concepts, and theories, including machine learning (ML), natural language processing, and computer vision, it must be acknowledged that the boundaries of AI are so unclear and "porous" that they are difficult to distinguish or even define. Historians also approach the term cautiously, given its connection to computational (social) sciences, the substantial criticisms of AI, and the heated debates over computation in the humanities. Reading "with the grain" and against it, identifying patterns and outliers in data, and switching between close and distant reading are becoming analytical techniques facilitated by AI, helping to rethink our understanding of the past (it seems that a complex methodological challenge is too easily entrusted to artificial intelligence. – LB). Ambitious projects like the British Living with Machines also model how interdisciplinary teams can develop new approaches to AI while simultaneously "generating new stories." There may be a temptation to think that a person neither uses artificial intelligence nor is influenced by it if they do not apply tools like Mallet (a Java-based package used for text analysis, document classification, topic modeling, etc. – LB) or do not program in Python. However, according to Tilton, perhaps the most transformative and comprehensive change for the field is the role of AI in shaping how we organize the search for historical sources.

M. L. Jones's article "AI in History" examines the dramatic change of concepts in the development of artificial intelligence in the early 21st century [12, pp. 1360-1367]. The author sees the prerequisites for this transition in an (overlooked) event of 1982, when D. Michie, a professor from Edinburgh, identified the fundamental mistake that had plagued earlier attempts to create AI: "Inductive learning of concepts, rules, strategies, etc. from examples is what gives a problem-solving human their strength and versatility, not (as previously assumed) the power of computation." The minority view that saw promise in example-based machine learning in 1982 had become dominant in AI by the beginning of the new millennium. In a key 2009 manifesto extolling the "unreasonable effectiveness of data," three prominent Google researchers argued that sciences involving human beings, rather than elementary particles, have proven more resistant to elegant mathematics. (One could say that a transition took place from deductive, theory-driven approaches to inductive, empirical ones. – LB)

Computer scientist John McCarthy coined the term "artificial intelligence" (in 1955) originally in search of funding; by the mid-2010s, this term was radically redefined for the development of large-scale algorithmic decision-making systems and predictive machine learning trained on vast datasets. Throughout most of the Cold War, notes M. Jones, and afterward, AI researchers focused on "symbolic AI," largely ignoring data collected during everyday and military activities. Such empiricism of the mundane lost prestige compared to formal logic and numerical methods, and more empirically oriented approaches, such as neural networks and pattern recognition, were actively criticized. Learning from data seemed to be the wrong approach to creating artificial intelligence or intelligent behavior. (It is worth noting in parentheses that an important reason for the situation was that effective machine learning requires advanced computers, which emerged in the first decade of the 21st century; this factor remains in the shadow of M. Jones's argument - LB).

Alongside the dominance of the deductive, symbolic approach to AI in the USA, a much less prestigious empirical approach developed in the USSR and several other countries during the "first wave" of AI, involving a set of methods to work with large-scale military, intelligence, and commercial data. Concluding his review, M. Jones writes that our contemporary world of AI, with its algorithmic decision-making system, owes much more to this empirical line of research than to symbolic artificial intelligence, which had a higher status and was more studied.

A notable place among the forum materials belongs to J. Sternfeld's article "AI-as-Historian" [13, pp. 1372-1377]. The author notes that the art of "immersing" artificial intelligence in a historical context is no easy task, and there may be a temptation to attribute "magical" properties to AI behavior. Artificial intelligence, however, operates not in the abstract but as systems comprising machine learning algorithms, software, material infrastructure, data storage, networking equipment, and human infrastructure, including those who develop a system and those who interact with it. Thus, the "historicization" of artificial intelligence systems, as the participants of this forum acknowledge, requires a comprehensive examination of a system's infrastructure and of the network of human-machine relations. Amid this multitude of contexts and factors, historians must understand how AI systems learn.

Today deep learning, credited with most of the recent achievements in artificial intelligence, is associated with the ability to "test/experiment with datasets." But what does it mean to "learn" in relation to artificial intelligence (i.e., artificial neural networks in this context)? J. Sternfeld reduces the definition of machine learning to a performance optimization formula (i.e., the minimization of recognition errors by a multilayer neural network. – LB). However, such learning requires evaluating past events (or experiences) and making evaluative judgments based on available data. As Sternfeld notes, the more we explore the AI learning process, the more it begins to resemble the work of a historian (a claim we can only partially agree with. – LB). By learning, artificial intelligence systems do much more than improve their performance on a given task. Like historians, they gather historical data and classify, analyze, interpret, and preserve it for future use. According to the author, this complex series of actions (iterations) cannot occur "without historical awareness, which relies on memory, critical analysis, contextualization of data, and causal relationships." Sternfeld calls this phenomenon "AI-as-Historian."
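
In standard machine-learning notation (our gloss, not Sternfeld's own formula), this "performance optimization" is empirical risk minimization:

\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N} L\bigl(f_{\theta}(x_i),\, y_i\bigr),
\]

where \(f_{\theta}\) is the neural network with weights \(\theta\), the pairs \((x_i, y_i)\) are training examples (say, the image of a handwritten word and its correct transcription), and the loss \(L\) penalizes recognition errors. "Learning" is the iterative adjustment of \(\theta\) that drives this average error down.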

K. Crawford's article "Archeologies of Datasets" [14, pp. 1368-1371] is not, as one might think, about the use of AI in archaeology. It partly develops issues raised in the preceding forum article. What does it mean to historicize the materiality of artificial intelligence? What should such a history include? These questions are more complex than they appear at first glance. The term "artificial intelligence" is vague and polysemous: it can denote a technology, a method, an infrastructure, a set of social practices, a way of seeing. Behind every system stand physical infrastructures designed by people, and the corresponding economic factors. The material history of AI is a vast territory with many kinds of primary materials to study, extending far beyond the archives of individual inventors and organizations. The author recalls the labor histories of the women who were among the first programmers (as noted by Jennifer Light and Marie Hicks), the role of the low-paid workers who moderated content and systematized data (studied by Sarah Roberts, Mary Gray, and Siddharth Suri), the histories of the transformation of data processing practices in 20th-century industrial laboratories (the works of Xiaochang Li and Mara Mills), the economic and political aspects of this history (e.g., Paul Edwards, Eden Medina), as well as its infrastructural aspects (e.g., the works of Nicole Starosielski and Thomas Parker Hughes). From here one can draw a thread to the early stages of practical AI development.

M. Broussard's article raises the question of the challenges of AI preservation [15, pp. 1378-1381]: "They told us the Internet is eternal. That was a lie."

The author, who has been writing "for the internet and on the internet" for over 20 years, notes that all her early works have disappeared from the web. The only place they still exist is in printouts, "lovingly stored in archival envelopes in a box" in the corner of her office. The software she once wrote has vanished: updated, deleted, or lost when the respective companies went bankrupt, were acquired, or simply moved on to something else. Several software projects died when the author decided they were no longer worth the hosting fees. In principle, Broussard writes, such digital losses are normal and natural, and they affect her little in daily life. She is "completely happy" that her reviews of long-closed restaurants have disappeared, and confident that the world does not need to see the programs she wrote at age eleven. But this situation, in her view, is terrible for future historians, because it is part of a collective history, a collective digital history, which our memory institutions do not fully preserve. The author leads the reader toward the hope that artificial intelligence could help advance a solution to this problem.

The concluding article of the forum addresses practical issues of using AI in a project that applies machine learning to historical newspaper collections [16, pp. 1382-1389]. The authors (Leen-Kiat Soh, Liz Lorang, Chulwoo Pack, and Yi Liu) have experience using artificial neural networks in OCR tasks for newspapers of the 17th and 18th centuries held by the Library of Congress and the British Library. The article describes the problems that arise in such projects. For example, "noise effects" (interference) are especially prevalent when digital images are created from earlier microphotographic copies, as is often the case with historical newspaper collections. Noise interferes with the primary signals of the pages, affecting both human and computer vision and all subsequent processing. Common types of noise include uneven brightness distribution; characters visible from the other side of the page (bleed-through); tilted document scans (skewed orientation); and markings on the newspaper (stains) that obscure text. Each of these effects varies widely in intensity, and image quality ranges from very clean to very "noisy." What machine learning offers here, first of all, is the ability to remove this interference.
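
The noise-removal step described above can be approximated with classical computer-vision tools even before any neural model is involved. Below is a minimal sketch using the opencv-python package; the file name is a placeholder, and the specific pipeline (non-local-means denoising, adaptive thresholding, skew estimation) is a generic illustration, not the method of the project discussed here.

```python
import cv2

# Load a scanned newspaper page as a grayscale image (placeholder path).
page = cv2.imread("newspaper_scan.png", cv2.IMREAD_GRAYSCALE)

# Suppress film grain typical of microfilm copies while keeping letter edges.
denoised = cv2.fastNlMeansDenoising(page, h=15)

# Adaptive thresholding copes with the uneven brightness and bleed-through
# mentioned above: the threshold is computed locally, per neighborhood.
binary = cv2.adaptiveThreshold(
    denoised, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,
    35, 11,   # neighborhood size and offset, tuned per collection
)

# Estimate skew from the bounding rectangle of the (dark) text pixels.
coords = cv2.findNonZero(255 - binary)
angle = cv2.minAreaRect(coords)[-1]
print("estimated skew angle:", angle)

cv2.imwrite("newspaper_clean.png", binary)
```

In practice, such hand-tuned steps are exactly what machine learning replaces: a trained model learns the cleaning transformation from paired examples instead of per-collection parameter fiddling.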

* * *

Evaluating the AHR forum as a whole, it can be noted that the issues discussed by its participants cover most of the seven areas of AI use in historical research proposed by us in the first part of this article. These include conceptual questions of human-machine interaction ("the historian in the world of artificial neural networks"), the possibilities for historians to use machine learning technologies (especially deep learning), various AI tools in historical research, and the evolution of AI in the 21st century. Practical aspects were also touched upon, such as experiences with AI recognition of texts from past centuries. Interestingly, in several cases, forum participants felt it was necessary to devote attention to explaining AI algorithms, considering the large audience of the leading historical journal in the USA. Less attention was given to the possibilities and challenges faced by historians working with generative neural networks. In the year and a half since the materials of the AHR forum we reviewed were published, this area of AI has developed rapidly, garnering growing interest in the historical community.

Simplifying somewhat, one could say that the focus of the current stage of the discussion is the question: is generative AI a virtual assistant for historians or a generator of quasi-knowledge? This question deserves a separate round table, which we plan to organize this year. In the meantime, let us look more closely at the term itself, Artificial Intelligence, introduced in 1955 by John McCarthy. In English, "intelligence" here means above all the mind's ability to reason rationally, which differs noticeably from the everyday sense that the borrowed word "intellect" carries for the Russian reader.

In this context, a relatively new trend in the development of generative neural networks deserves attention: a new generation of models implementing "reasoning," a more elaborate algorithm for responding to user prompts. Upon receiving a query, such a model "reasons": it conducts a step-by-step search for relevant information, comments on intermediate results, and even formulates some evaluative judgments. Anyone who has used a generative AI such as Grok can appreciate how much more "intelligent" and flexible the search for solutions to a given task has become. The protocol of these AI "reasonings" can run to several pages. One could say that the model of a highly informed and "considerate" virtual assistant for researchers is now taking shape.
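
For readers who have not yet tried such models, here is a minimal sketch of how a historian's query to a reasoning-capable model might look, using the openai Python package; the model name is a placeholder to be replaced with whatever reasoning model one actually has access to.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

response = client.chat.completions.create(
    model="reasoning-model-placeholder",  # hypothetical model name
    messages=[
        {"role": "system",
         "content": "You are a research assistant for a historian. "
                    "Reason step by step, cite the evidence behind each "
                    "claim, and explicitly flag uncertainty."},
        {"role": "user",
         "content": "Summarize the main interpretations of the 1861 "
                    "emancipation reform in Russia and note where "
                    "historians disagree."},
    ],
)
print(response.choices[0].message.content)
```

The "protocol of reasonings" mentioned above corresponds to the intermediate steps such models expose before the final answer; how much of it is visible depends on the particular service.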

The question of evaluating AI as a generator of quasi-knowledge (pseudo-knowledge, unreliable knowledge) is more complex. The rapid advance of generative AI carries definite risks. Articles reflecting on these risks are appearing, for instance, in Russian social sciences and humanities journals. An analysis of this phenomenon deserves a separate study; here we simply mention some arguments from such publications.

√ The use of generative AI in the educational process at this stage "inevitably discredits accumulated humanitarian knowledge. AI creates an alternative reality that destroys the boundaries between scientific knowledge and cultural heritage on one hand, and the pseudo-cultural and pseudo-historical simulacrum created by the neural network on the other" [17, p. 215-228].

√ The experience of using generative AI shows (for instance, with ChatGPT 3.5) that evaluations of any historical event "suffer from multiplicity and contradiction." Such an "objectivist" nature of the neural network's responses "is not an advantage, as it blurs the clarity of history and the integrity of the past's picture <…>, indicating that there are many opposing interpretations of history in historical science, each considered true by a certain circle of historians; this situation is reflected in the work of the AI” [18, p. 20-26].

The arguments presented above deserve thorough discussion. Here, we will limit ourselves to some "technical" comments. Firstly, the rapid development of generative neural networks continuously improves their performance, and today, the results of their testing in the field of historical research lead to more positive assessments. Secondly, the knowledge base on which AI formulates its solutions to the tasks (queries) largely determines the nature of the responses received by the user.
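
One practical way to control the knowledge base behind a model's answers is to supply the historian's own verified sources directly in the prompt, the pattern usually called retrieval-augmented generation. A minimal sketch, again assuming the openai package; the source passages and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Verified passages selected by the historian (placeholders), e.g. found
# with the vector-similarity search sketched earlier in this article.
sources = [
    "[1] Zemstvo statistics, Moscow guberniya, 1898: ...",
    "[2] Ministry of Finance annual report, 1899: ...",
]

response = client.chat.completions.create(
    model="model-placeholder",
    messages=[
        {"role": "system",
         "content": "Answer ONLY from the numbered sources provided. "
                    "Cite them as [1], [2]. If they are insufficient, "
                    "say so instead of guessing."},
        {"role": "user",
         "content": "Sources:\n" + "\n".join(sources) +
                    "\n\nQuestion: What do these sources say about "
                    "peasant land rents?"},
    ],
)
print(response.choices[0].message.content)
```

Grounding the model in a curated corpus does not remove the need for source criticism, but it shifts the historian from consumer of the network's opaque "knowledge" to curator of the evidence it is allowed to use.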

Joint activity between historians and IT specialists is required here. One should not assume that a trained machine will do everything on its own and that the neural network will hand the historian the "correct" result. Anticipating such problems, Norbert Wiener, the "father" of cybernetics, wrote at the dawn of the computer age that the main advantage of the human brain as an organ of thought over the machines of his time is the "ability of the brain to operate with vaguely defined concepts <…>. Give to man what is human, and to the computing machine what is machine. This, apparently, should be the rational line of behavior when organizing the joint activities of people and machines" [19, pp. 82-83].

The quoted passage is taken from Norbert Wiener's book God and Golem, Inc., published in 1964, shortly before his death. In 1966 this small book (104 pages on cybernetics) was published in Russian by the Progress publishing house under the title "The Creator and the Robot" and was in high demand among us, students of a faculty whose name included the word "Cybernetics." Today, nearly 60 years later, Norbert Wiener's thoughts remain relevant in a new phase of human-machine interaction, amid the rapid development of artificial intelligence technologies.

* * *

The core of this journal issue consists of six articles reflecting the experience of historians using artificial intelligence. Below, the titles of these articles are related to the AI directions proposed in this work.

Direction 7 (Using AI in archives, museums, and other institutions preserving cultural heritage).

- Yumasheva Yu.Yu. On the use of artificial intelligence in historical research.

- Mashenko N.E., Gaidar E.V. Artificial intelligence technologies in forming the archival environment: problems and prospects.

Direction 2. (Attribution and dating of texts using AI).

- Latonov V.V., Latonova A.V. Determining the authorship of "Notes of the Decembrist I.I. Gorbachevsky" using machine learning methods.

Direction 6. (Using generative networks for processing and analyzing texts and visual material).

- Voronkova D.S. Computerized content analysis of articles from the journal "Herald of Finance, Industry and Trade" for 1917: testing the capabilities of the artificial intelligence module in the MAXQDA program.

Direction 4. (Source criticism tasks; data completion, enrichment, and reconstruction using AI).

- Mekhovsky V.A., Kizhner I.A. The world through the eyes of an educated person from the city of Minusinsk at the end of the 19th - beginning of the 20th century: distribution of the frequency of geographical names in the books of the Minusinsk Public Library.

Directions 5, 6. (Intelligent search for relevant information, use of generative neural networks for this purpose; use of generative networks for processing and analyzing texts and visual material).

- Yumasheva Yu.Yu. On the use of artificial intelligence in historical research.

- The present article also pertains to these directions, serving as a preface to the main theme of the issue.

References
1. Denley, P., Fogelvik, S., & Harvey, Ch. (Eds.). (1989). History and Computing II. Manchester University Press.
2. Best, H., Mochmann, E., & Thaller, M. (Eds.). (1991). Computers in the Humanities and the Social Sciences. (Achievements of the 1980s. Prospect for the 1990s). Proceedings of the Cologne Computer Conference 1988. K. G. Saur.
3. Smets, J. (Ed.). (1992). Histoire et Informatique. Ve Congres "History and Computing". Actes du Congres "Montpellier Computer Conference 1990", 4-7 Septembre 1990 à Montpellier. University of Montpellier.
4. Borodkin, L. I. (1992). Methods of artificial intelligence: New horizons of historical knowledge. Information Bulletin of the Commission on the Application of Mathematical Methods and Computers in Historical Research of the Russian Academy of Sciences. EDN: IYBCLC.
5. Lukov, V. B., & Sergeev, V. M. (1983). Experience in modeling the thinking of historical figures: Otto von Bismarck, 1866–1876. In Questions of Cybernetics: Logic of Reasoning and Its Modeling.
6. Khamrov, Yu. E. (1992). Hydronymikon: An expert system on the hydronymy of the East European Plain. Information Bulletin of the Commission on the Application of Mathematical Methods and Computers in Historical Research, 5.
7. Kovalchenko, I. D., & Borodkin, L. I. (1988). Two paths of bourgeois agrarian evolution in European Russia: An essay in multivariate analysis. The Russian Review, 47(4).
8. Borodkin, L., Lazarev, V., & Zlobin, E. (1993). Applications of OCR in Russian historical sources: A comparison of various programs. In Optical Character Recognition in the Historical Discipline. Scripta Mercaturae Verlag.
9. Meadows, R. D., & Sternfeld, J. (2023). Artificial intelligence and the practice of history: A forum. The American Historical Review, 128(3).
10. Schmidt, B. (2023). Representation learning. The American Historical Review, 128(3). https://doi.org/10.1093/ahr/rhad363
11. Tilton, L. (2023). Relating to historical sources. The American Historical Review, 128(3). https://doi.org/10.1093/ahr/rhad365
12. Jones, M. L. (2023). AI in history. The American Historical Review, 128(3). https://doi.org/10.1093/ahr/rhad361
13. Sternfeld, J. (2023). AI-as-historian. The American Historical Review, 128(3).
14. Crawford, K. (2023). Archeologies of datasets. The American Historical Review, 128(3). https://doi.org/10.1093/ahr/rhad364
15. Broussard, M. (2023). The challenges of AI preservation. The American Historical Review, 128(3). https://doi.org/10.1093/ahr/rhad366
16. Soh, L.-K., Lorang, L., Pack, C., & Liu, Y. (2023). Applying image analysis and machine learning to historical newspaper collections. The American Historical Review, 128(3).
17. Ippolitov, S. S. (2024). Artificial intelligence as a destructive factor in humanitarian education, historical science, and creative industries: Towards problem setting. New Historical Herald, 3. https://doi.org/10.54770/20729286_2024_3_215
18. Gerasimov, G. I. (2024). What history is written by artificial intelligence? History and Modern Worldview, 6(1). https://doi.org/10.33693/2658-4654-2024-6-1-20-26
19. Wiener, N. (1966). The Creator and the Robot [Tvorets i robot]. Moscow: Progress.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

Review of the article "Historian in the world of neural networks: the second wave of artificial intelligence technologies"

Artificial intelligence (AI) technologies play a significant role in modern scientific research and education. This fully applies to the field of history. The reviewed article aims to provide a brief characterization of the two waves of the use of AI concepts and methods in the practice of historical research. The first dates back to the 1980s and 1990s; the second wave began in the 2010s. The author analyzes the structure of the second wave and for the first time identifies seven main areas of AI implementation in historians' projects: 1) recognition and transcription of handwritten and old printed texts; 2) attribution and dating of texts using AI; 3) typological classification and clustering of statistical source data (in particular, using fuzzy logic); 4) source criticism tasks, completion and enrichment of data, their reconstruction using AI; 5) intelligent search for relevant information, with the use of generative neural networks for this purpose; 6) the use of generative networks for processing and analyzing texts and visual material; 7) the use of AI in archives, museums, and other cultural heritage preservation institutions. The author also makes an important observation: the first direction is the most in demand at this stage, while the fifth and sixth are the most controversial. These discussions are likewise reflected in the pages of the most authoritative American historical journal, The American Historical Review, which recently published eight articles on this topic in a single issue. Of particular interest is the aspect of these articles the author highlights concerning the transition from the first wave of AI to the second: if the deductive, mathematized approach dominated the first wave, the second wave centers on an inductive, empirical approach based on machine learning with large training samples. The author offers his own complement to this explanation, linking the change of waves to the radical increase in the computing power of new-generation computers. The main issues in the current discussions concern the possibilities and limitations of generative neural networks. A dilemma is considered: will generative AI be a virtual research assistant or a generator of quasi-knowledge? As an advantage of generative AI, the author highlights its ability to "reason," conducting a step-by-step search for information. At the same time, the article notes definite risks in using such tools in the practice of historical research, with examples drawn from recent publications in Russian journals criticizing these approaches. These concern the use of generative AI in the educational process, where AI creates an alternative reality, and in research, where critics point out that AI-produced evaluations of any historical event "suffer from multiplicity and contradiction." According to the author, this situation is largely determined by the imperfection of existing versions of generative AI and their current implementations (although a number of the problems discussed reflect the present state of the historiographical process).

As the author notes, the article anticipates a series of publications that form the core theme of this journal issue. In our view, it successfully fulfills its task, drawing for the first time on an extensive historiography, systematized by the author, of the use of AI by historians in scientific research and the educational process. The article is written in a good academic style; its novelty and relevance are beyond doubt. The analysis presented will undoubtedly find an interested readership. The article can certainly be recommended for publication in the journal Historical Informatics.