
National Security
Reference:

Battlefield informatization: some possibilities and possible problems

Tikhanychev Oleg Vasilyevich

ORCID: 0000-0003-4759-2931

PhD in Technical Sciences

Oleg V. Tikhanychev, Deputy Head of the Department of Advanced Development Management, "Technoserv Group"

13 Yunosti str., Moscow, 111395, Russia

to.technoserv@gmail.com
DOI:

10.7256/2454-0668.2024.2.36142

EDN:

BSVOXP

Received:

21-07-2021


Published:

04-05-2024


Abstract: The subject of this study is informatization in military affairs; its object is the possible ways of controlling an adversary through the intentional, targeted distortion of information about one's own troops, goals, and conditions of action. The author examines in detail such aspects of modern warfare as its multi-domain character, with a significant shift of emphasis toward information warfare. As the experience of the local wars and armed conflicts of the last decade shows, the forms in which they are fought differ substantially from the "classic" wars waged until the end of the twentieth century. The informatization of the battlefield has made it possible to move to network-centric operations as a form of employing troops (forces). But expanded capabilities and improved technologies of warfare, in accordance with the laws of dialectics, generate new vulnerabilities. These vulnerabilities appear in the same area as the strengths: the informatization of military operations. Accordingly, special attention is paid to assessing the influence of informatization on the adequacy of decisions, primarily with respect to possible vulnerabilities of the command-and-control process. Using the general scientific methods of analysis and synthesis, the author, drawing on the main trends of modern armed confrontation, synthesizes a description of possible vulnerabilities in the use of automated control systems and formulates the task of developing a mathematical apparatus for shaping the information supplied to the opposing side's systems and evaluating its reliability. The point is not physically altering information inside the enemy's automated control systems: that task is technically difficult, though relatively simple to algorithmize. Nor is it the familiar misleading of a decision-maker, as has been done many times before.
The article addresses the control of the adversary's decision-making through the directed supply of information, exploiting the vulnerabilities of its automated processing in automated control systems.


Keywords:

informatization of the battlefield, automation of management, intentional misrepresentation of information, targeted information impact, information confrontation, command and control of troops, decision making, modern military concepts, management through information, misleading


Introduction

In accordance with the laws of dialectics, the principles of warfare change under the influence of newly emerging factors as science and technology develop. Today such factors include the informatization of military operations; the continuous growth in the protection and individual fire capabilities of the individual soldier alongside increases in the accuracy and power of long-range weapons; a radical increase in the share of high-precision weapons in combat [1] and the emergence of a new type of them, which can be called "selective" or "precision" weapons, striking not merely an object but a specific element of it, and able to do so along a given trajectory; the robotization of the battlefield; and a number of others. These factors significantly expand the capabilities of the warring parties, increase the dynamism of warfare, raise the effectiveness of the employment of troops (forces) and weapons, and change the requirements for logistics and command and control [2,3,4]. All of this is especially evident in the influence of the informatization factor, which underlies most of the innovations listed above. At the same time, as historical analysis shows, innovations, as a rule, also serve as a source of new problems and threats, and the greater their scale, the larger those problems become [5,6]. This makes it urgent to identify emerging threats in order to work out possible ways of localizing them.

 

1. Some features of the influence of informatization on the management of military operations

One of the most obvious features of modern warfare is thus the informatization of the battlefield, which affects almost every aspect of combat and has even formed a separate sphere of warfare: war in cyberspace. The Multi-Domain Battle (MDB) concept, which emerged as the scope of military action expanded, clearly shows the essence and content of the possible advantages, and the potential problems, of conducting combat operations under modern conditions. As analysis shows, a significant part of these problems is determined by the processes of obtaining and processing information.

In its most obvious form, the influence of informatization manifests itself in the following processes:

- automation of data collection and information processing by technical means of intelligence and management;

- generalization and aggregation of data based on information obtained from intelligence tools, as well as general and dual-use systems [7] using big data technologies;

- automation of the development of management decisions and forecasting the consequences of their implementation.

In practice, this influence is determined by the following factors.

Firstly, in modern warfare information about the enemy and external conditions is largely obtained by technical means, and the data are increasingly processed programmatically, with minimal human participation. Whereas previously almost all information, even that received without human intervention, was processed by operators, with the explosive growth in the volume and dynamism of incoming data most information is now processed and aggregated by software and hardware complexes that give the operator only the results of the analysis, in most cases without explaining the algorithms by which preferences were selected and formed.

Secondly, the "virtual model" of combat areas presented to the decision-maker is itself built by the relevant staff groups (in NATO armies, for example, at a Visual Display Unit, VDU) using software and hardware complexes that provide automated collection, software analysis, and the creation, visualization, and up-to-date maintenance of models of the controlled systems and their operating environment across a wide variety of interfaces. The visualization algorithms used in this process remain hidden from the user.

Thirdly, ever more technical reconnaissance and strike assets, most often unmanned, are equipped with their own vision systems that allow them to be used in both remotely controlled and autonomous modes. These systems are built on their own rather complex, and consequently potentially vulnerable, algorithms.

All these systems and tools, as noted above, process incoming information according to predefined algorithms and either present it to operators or use it as input for the calculations and modeling that support decision-making. The dynamism of modern warfare, the volume of incoming information, and the difficulty of controlling its sources give rise to a potential problem: information can be distorted, accidentally or intentionally. Accidental distortions, and the errors resulting from them, are practically the norm in technical systems, an unavoidable nuisance that can be countered successfully with a wide range of mathematical methods. But intentional, targeted distortion of data is a problem that grows more serious the more information is obtained from software and hardware. Intentional distortions may not be obvious, which makes counteracting them difficult. Moreover, this problem potentially creates an opportunity to control the behavior of the opposing side through the purposeful distortion of information.
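The contrast between the two kinds of distortion can be illustrated with a minimal sketch (all numbers are hypothetical): random errors in redundant reports yield to simple robust statistics such as the median, which is exactly why accidental distortion is a nuisance rather than a threat.

```python
import statistics

def fuse_reports(reports):
    """Fuse redundant range reports (km) from independent sensors.

    The median is robust to a minority of randomly corrupted values,
    a typical countermeasure against accidental distortion."""
    return statistics.median(reports)

# Three honest sensors plus one report hit by a random transmission error.
reports = [12.1, 11.9, 12.0, 87.3]
fused = fuse_reports(reports)
assert abs(fused - 12.0) < 0.2   # the gross error barely moves the estimate
```

Intentional distortion, by contrast, need not look like an outlier at all, which is the article's central concern.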

The problem under consideration manifests itself in two components of the command-and-control process, distinguished by the degree of human participation:

- control of technical systems, both autonomous and operated by a human, carried out, as a rule, on the battlefield and in the areas where they are employed;

- management of large man-machine systems, heterogeneous military formations and groupings of troops.

Neither of these directions is trivial, and neither has been sufficiently investigated to date.

But while active research is under way on the first of them as part of the development of robotics, primarily autonomous robotics, there is far less information in the open press on the second, which indicates how little it has been studied. Yet this component of the command-and-control process is also extremely relevant, given the active introduction of artificial intelligence (AI) components into the management of human-machine (ergatic) systems.

The article therefore considers precisely the second aspect of ensuring reliable command and control under intentional distortion of information, namely the management of human-machine systems implemented with automated control systems (ACS).

 

2. About the potential of information management

Once again, we recall that the article does not consider the problem of distorting information by means of software and hardware attacks on special-purpose automated control systems that destroy or modify the information inside them. That is a separate, very important and relevant topic, but it lies outside the scope of this article [8]. Instead, the article considers an equally complex, but no less effective, option: the purposeful distortion of the information arriving at the input of the control system, collected in order to build the picture needed for assessment and decision-making, without any direct intervention in the components of the opposing side's automated control system.

To begin with, it is worth noting that most of the approaches to obtaining and processing information listed in the first section have been used before. For example, General Shtemenko, in his book "The General Staff during the War," mentions the use of additional information from third-party sources for decision-making: in June 1941, lacking reliable information about the Wehrmacht's advance, officers of the operational directorate of the General Staff telephoned the directors of collective farms and determined, at least in general terms, the configuration of the line of contact from their answers, or from the absence of a connection.

At the same time, even before the introduction of computers into the management of troops, mathematical methods of aggregating intelligence and forecasting the development of the situation were also used.

The distortion of information and measures to mislead the enemy have likewise been used in the planning of military operations for a very long time, from the very birth of the art of war. Back in the 5th century BC, Sun Tzu observed that "all warfare is based on deception."

The difference is that at the initial stage of the development of military art this meant tactical deception of the enemy, the use of various kinds of false actions to mislead him. Later, as the scale of hostilities grew and the commander could no longer personally survey the battlefield, the ways of misleading the enemy became ever more diverse:

- formation of false objects;

- masking of objects and their actions in various spheres and ranges;

- imitation actions, etc.

Thus, at every stage in the development of the art of war, the methods of misleading the enemy were determined above all by the state of reconnaissance and camouflage assets and depended on the methods of command and control in use. While the battlefield could be surveyed visually, they were simpler in content and unambiguous in their object of impact. As the scale of hostilities grew, the commander and staff had to build a virtual model of the controlled grouping, the enemy grouping, and the environment of the confrontation from collected situation data. Accordingly, new methods of counteraction and deception appeared, developing in content and object from battlefield camouflage to operational-level deception.

But, what is important within the framework of the problem under consideration, all these methods were developed exclusively by man and against man.

The implementation of these methods is described quite clearly in the US Armed Forces' guiding documents on Military Deception Planning Methodology, which is built on the structure of a typical command cycle (the Boyd cycle in American terminology) [9]. Within this concept the OODA cycle (the Observe-Orient-Decide-Act control loop) is modified into a two-circuit, interconnected "loop" (Fig. 1):

- the circuit for controlling the enemy: assess his reaction (See), form a hypothesis (Think), implement the deception plan (Do);

- the circuit of the enemy's actions based on the information supplied to him: registration (See), interpretation (Think), action (Do).

 

 

Fig.1. A two-circuit "loop of control" of actions through disinformation
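The two-circuit cycle of Fig. 1 can be reduced to a toy model (all rules and numbers here are hypothetical): the target's loop maps what it sees to an action, while the controlling loop observes that reaction and adjusts the displayed picture until the desired action is obtained.

```python
def target_loop(observed_strength, threshold=50):
    """The enemy's See-Think-Do loop as a trivial decision rule:
    attack a sector that looks weak, otherwise hold."""
    return "ATTACK" if observed_strength < threshold else "HOLD"

def deceiver_loop(desired_action="ATTACK", real_strength=100):
    """The controlling loop: show a picture (Do), watch the reaction (See),
    adjust the displayed strength (Think) until the target complies."""
    shown = real_strength
    for _ in range(20):
        if target_loop(shown) == desired_action:
            return shown                 # deception goal achieved
        shown -= 10                      # weaken the picture presented
    return None

shown = deceiver_loop()
assert shown is not None
assert target_loop(shown) == "ATTACK"    # the target does what was planned
```

The point of the sketch is only the closed feedback structure: the deceiver's Do feeds the target's See, and the target's Do feeds the deceiver's See.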

 

Despite its proven practical effectiveness, the algorithm shown in Figure 1 does not fully exploit the factors arising from the total informatization of the battlefield.

Firstly, at all its stages (disinformation, prediction, and assessment of how closely the enemy follows the deception plan) and for all the methods used (false maneuvers, demonstrative actions, attention-drawing and ostentatious actions, influence through the media and social networks), the impact on people, on the managers of the opposing side, is treated exclusively as their collecting and analyzing information and making decisions on its basis.

Secondly, decision-making within this algorithm relies on subjective logical-analytical methods applied by humans, not on software decision-support tools.

NATO documents implementing this methodology in command practice [8,9] define information warfare as one of the main components of the confrontation, part of which is cognitive influence on humans.

At the same time, the increased dynamism of military operations enabled by informatization and the coordinated employment of distributed forces within network-centric actions impose stricter requirements on the timeliness of every stage of the control cycle (the OODA loop, in the adversary's terminology): data collection, processing, planning, issuing commands, and monitoring their execution [10,11,12]. This, in turn, drives greater automation of command and control and the development of new, more efficient algorithms for transforming information with ever less human participation, the human being the slowest link in the data-processing chain.

An example is the military use of specialized neural networks in the armed forces of the militarily leading states, for instance in the Israeli army. Open sources report that the IDF currently uses an AI-based situation-recognition and target-allocation system known as "The Gospel" (Hebrew "Habsora"), as well as the Fire Factory air-strike planning program. These systems not only identify and designate targets but also calculate ammunition requirements and build the strike schedule, which increases their effectiveness. This, in theory, widens the scope for accidental and intentional distortion of the information these systems take as input; in the latter case, for exploiting it to control the opposing side's actions in one's own interests.

Thus, with the informatization of combat operations and the automation of their command and control, and with the development of technical means of intelligence, data collection, and processing, new opportunities for misleading appear, whose object is no longer only the human being but also the technical and software decision-support tools and the elements of their information and algorithmic support.

Based on this, a new methodological apparatus for planning information impact is needed, one that takes into account the modern features of the informatization of military operations.

In developing such an apparatus, given the spread of automated control systems and AI in military affairs, it would be illogical not to provide methods of influencing AI decision-making components, which are potentially no less vulnerable than human thinking.

The situation is determined by the fact that, as the use of artificial intelligence components in automated control systems expands, the range of possible methods of misleading the enemy also potentially expands, supplemented by methods of deceiving intelligent algorithms. The expansion of AI components in automated control systems introduces another vulnerability factor: as a rule, the developer does not create an AI core from scratch but uses a ready-made foundation, and the list of such foundations is not long. For work with neural networks, for example, these are network services such as ChatGPT in its various versions, Gemini, Baidu ERNIE, YOU, Midjourney, YaLM 100B, Open-Assistant, GitHub Copilot, Aimyvoice, ruDALL-E, Brax, Imagen and others, each designed for its own class of tasks: information retrieval or the generation of text, sound, images, or video. As a result, a potential problem arises, already mentioned above: any neural network operates on statistical data, processing and interpreting them and forming new results from them. But the statistics are known not only to the planning side but also to the opposing side. That is, this "game" can be played both ways: knowledge of how such programs function potentially allows one, given some sample, to understand the principles of the AI "engines" likely implemented in the enemy's automated control system software. This does not make the task trivial; on the contrary, it is relevant, though difficult, above all because of uncertainties in the source data.

But despite its complexity, this task is solvable. Some practical work on means of misleading artificial intelligence components is already known. One example is the development, based on an analysis of the vulnerabilities of neural-network image-recognition algorithms, of special patterns whose application makes an object easily distinguishable by a person unrecognizable to hardware-software surveillance complexes [13].
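The general principle behind such patterns can be sketched on a toy linear "recognizer" (the weights and data below are synthetic; real systems are deep networks, and [13] concerns physical patterns, but the gradient-sign perturbation idea is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)               # weights of a toy linear recognizer

def classify(x):
    """Score > 0 means 'vehicle detected', otherwise 'background'."""
    return 1 if w @ x > 0 else 0

x = rng.normal(size=64)               # a synthetic 'image'
if classify(x) == 0:
    x = -x                            # make sure we start from a detection

# FGSM-style perturbation: a small step against the score gradient.
# For a linear model the gradient with respect to the input is just w,
# so subtracting eps * sign(w) lowers the score by eps * sum(|w|).
eps = 1.2 * (w @ x) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w)

assert classify(x) == 1               # the original object is recognized
assert classify(x_adv) == 0           # the perturbed one no longer is
```

The perturbation per input component is small, yet it flips the decision, which is why a pattern invisible to the eye can blind a recognition algorithm.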

Moreover, one can already find offers on the Internet containing ready-made prompts, for ChatGPT for example, that automatically switch the neural network into other modes of operation. A simpler, more "direct" form of impact on artificial intelligence was demonstrated by protesters from the Safe Street Rebel organization in the United States: they forced unmanned vehicles to stop by placing traffic cones on their hoods, which the vehicles' software perceived as an obstruction, shutting off the engine.

These are illustrative examples of a kind of "neuro-linguistic programming" for neural networks, and direct confirmation that remote control over the formation of AI conclusions, without interference in the program code, is technically possible.

All of the above are special cases, but quite telling ones. Similar measures may well be implementable on a larger scale, and this imposes additional requirements both on the protection of algorithms and on the development of such tools.

Given the current situation, now and in the foreseeable future, as the informatization and robotization of the battlefield grow, new opportunities for such impacts are opening up, and they can take on an entirely different scale and content:

- on the one hand, the probability of obtaining intentionally distorted information by technical means and accepting it as true is growing;

- on the other hand, software tools and Big Data processing methods generate new opportunities for creating such distortions.

Once again, the article is not about "classic" misleading but about new technologies that potentially allow, by managing the supply of information used by software and hardware complexes for data collection and processing, the formation of intentional distortions of the situation: distortions precisely calculated to ensure that the enemy makes pre-planned decisions that are ineffective for him.

 

3. The general formulation of the problem

Thus, under the total informatization of military operations, a two-pronged scientific and practical task arises: managing the information arriving at the input of the AI systems used in automated control systems, and assessing and managing the information collected on the composition and actions of the opposing sides.

This task logically divides into a number of specific subtasks, whose solution should ensure:

1) within the framework of information confrontation, when planning and making decisions on its conduct:

- to predict the content of the information the enemy may possess about our troops (forces) and on the basis of which he makes decisions;

- to form proposals for the deliberate distortion of the information reaching the enemy through various channels, so as to motivate him toward predetermined actions;

- to predict the results of implementing the generated proposals;

2) as part of protecting one's own control system, in the interests of evaluating incoming information: to determine the correctness of incoming information about the enemy, one's own troops, and the conditions of combat operations, analyzing it promptly and continuously for unintentional and intentional distortions.

The specific content of each task depends on the level at which it is solved:

- on the battlefield, at the tactical level, the main objects of influence will be technical and human-machine control systems, intelligence systems, and other means of obtaining information; the methods of influence will be mainly visual, including the use of augmented reality (AR) tools;

- at the operational level, the main object of information impact will be the headquarters that process the incoming information and, on its basis, form the solutions presented to the decision-maker, that is, information management systems, taking into account the specifics of the technologies and algorithms used in them;

- at the strategic level, these methods are supplemented by the delivery of specially prepared information through the media, social networks, and other components of the Internet.

As regards direct influence on AI at the tactical level, there is, for example, the task of misleading the decision-maker by submitting false information with generated (substituted) voice or imagery, as well as the inverse task of protecting against such actions. Such situations already occur in everyday life and are known as "deepfakes"; software tools for producing them, such as the OpenVoice voice-imitation neural network, have also been created. Adapting such technologies to the military field is entirely logical and expected. Note that, for all their considerable technical complexity, from a methodological point of view these tasks are trivial; moreover, ready-made technologies for solving them are already available.

With this in mind, despite the importance of such tasks, they are not considered in this article.

Methodologically far more difficult are the tasks of controlling enemy actions through the supply of modified information at the operational and strategic levels. This non-trivial task naturally divides into subtasks: forming the information to be supplied, forecasting its impact on the enemy, and analyzing the results. These can be solved both by heuristic methods and by precise mathematical methods, including ones already known, though not yet applied to the problem as formulated here.

It can be noted once again that methods of increasing the reliability of information about enemy objects and actions have long been known and used, as have methods of misleading the enemy. However, the existing methods work with information coming from people and intended for people, which modern concepts define as "cognitive impact" [9]. Under total informatization and robotization this is no longer sufficient, and the deeper informatization, robotization, and artificial intelligence penetrate the sphere of armed confrontation, the less effective the existing methods of influencing information become. This makes the formulated scientific and practical task extremely relevant: it must be solved by any army claiming to be effective in the modern world.

How might such a methodology be used in the future? At least two directions of application seem likely:

- development of a "digital portrait" of the warring parties' systems and the environment of confrontation, in the interests of solving the problems of modifying information and managing its supply;

- formation of a system to counter similar actions of the enemy.

Both components of the formulated problem are very difficult to implement in practice.

The second component of the formulated problem, which at first glance seems simple and long since solved, is in fact not so trivial. Yes, it is already addressed by proven methods: confirming the reliability of information from several independent sources, and mathematical filtering, from the simplest clipping of anomalous results to computational methods using, for example, concordance coefficients. But against intentionally and subtly distorted information these methods may well fail. Solving the problem may then require more modern approaches, for example ones based on artificial intelligence. Similar approaches have already been applied and tested in other fields: examples include intellectualized image processing using adversarially robust models such as Clipped BagNet and ResNet, whose effectiveness has been repeatedly confirmed in theory and practice, approaches exploiting attacks with adversarial stickers of various formats [14], and interpretability-based regularization of incoming information [15]. There are many approaches to making artificial intelligence components resistant to attack, and it is likely that one of them, or a combination, can be used to counteract management-through-distortion effectively. What matters most for the topic of this article is that such approaches exist and are developing; the task is to select the most effective ones and adapt them to the conditions of application.
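The limitation of classical filtering can be shown in a minimal sketch (synthetic numbers): a z-score filter removes a lone gross outlier, but a coordinated shift applied to every source is internally consistent, passes the filter, and leaves the fused estimate confidently wrong.

```python
import statistics

def zscore_filter(xs, limit=1.5):
    """Drop observations more than `limit` standard deviations from the mean.
    (An aggressive limit is used here because the samples are tiny.)"""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [x for x in xs if abs(x - mu) <= limit * sd]

true_value = 10.0

# Accidental distortion: one source reports a gross outlier.
accidental = [10.1, 9.9, 10.0, 10.2, 40.0]
assert 40.0 not in zscore_filter(accidental)      # the filter removes it

# Intentional, coordinated distortion: every source shifted by about +3.
coordinated = [13.1, 12.9, 13.0, 13.2, 13.1]
assert zscore_filter(coordinated) == coordinated  # nothing looks anomalous
assert abs(statistics.mean(coordinated) - true_value) > 2.5
```

This is precisely the case where outlier clipping and source agreement both endorse a deliberately shifted picture.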

Implementing the first component of the formulated problem looks more difficult: it requires fundamentally new approaches to the work of command-and-control bodies and new (or modified) mathematical methods, that is, the development of methods for its application and the selection or development of specialized tools.

From the standpoint of problem formulation, applying such a technique requires solving an inverse problem: knowing or assuming the algorithm of the enemy's actions as a "black box," form a set of input data that yields the required set of outputs determining the enemy's behavior.

A fairly wide range of available mathematical tools can in fact be used here, from iterative methods to optimization. The main difficulty is formalizing the situation so that the appropriate mathematical apparatus can be chosen and applied.

To provide this formalization, one of the most obvious options is the so-called "Hoare triple," which describes the preconditions and postconditions governing the operation of any terminating algorithm:

{P} Q {S},

where P is the set of preconditions that must hold before the task implementing Q is started;

S is the set of postconditions that are true after the task implementing the algorithm completes, that is, the desired result.

In the context of controlling enemy actions through information, the set Q is the set of algorithms of the enemy's technical reconnaissance and control assets that form a "digital portrait" of our grouping of troops under its conditions of operation, while the set P is a data matrix consisting of an invariable part and an array of information that must be shaped to create the "digital model" inducing the required behavior of the enemy. The result of solving the formulated problem is the formation of the variable part of the set P.
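A minimal sketch of this inverse problem (the black box and its weights are entirely hypothetical): we control part of the input picture and search for values that drive an opaque assessment algorithm to a desired output.

```python
import random

def black_box(fixed, variable):
    """Stand-in for the enemy's assessment chain: known to us only by
    its observable input-output behavior, not in closed form."""
    score = 0.4 * fixed + 0.6 * variable
    return "COMMIT_RESERVE" if score > 7.0 else "NO_REACTION"

def shape_input(fixed, desired, trials=1000, seed=1):
    """Random search over the controllable part of the input (0..10):
    the simplest of the iterative/optimization methods mentioned above."""
    rng = random.Random(seed)
    for _ in range(trials):
        v = rng.uniform(0.0, 10.0)
        if black_box(fixed, v) == desired:
            return v
    return None

v = shape_input(fixed=5.0, desired="COMMIT_RESERVE")
assert v is not None
assert black_box(5.0, v) == "COMMIT_RESERVE"   # pre-planned reaction obtained
```

In the triple's terms, `fixed` is the invariable part of P, the returned `v` is its variable part, and the black box plays the role of Q.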

However, this is only one possible approach to formalizing the search for a solution; any other approach suitable in its parameters may equally be chosen.

Further, as noted earlier, in the presence of an algorithm, an important part of solving such a problem is the choice of tools for its implementation.

An analysis of the requirements for such tools shows that their main properties should be:

- promptness in obtaining solutions;

- the ability to work with incomplete information;

- prompt accounting of rapidly changing factors.

Such properties are possessed by software and hardware systems built on predictive analytics, which relies on collected statistics of previous states (scenario analysis) and on machine learning (ML) technologies. These can be implemented, for example, with neural networks using supervised or unsupervised learning. The choice of possible tools is quite wide; the specific choice will be determined by the user's requirements and the developer's preferences.
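As a minimal sketch of the scenario-analysis idea (all records are synthetic), a nearest-neighbor lookup over accumulated state statistics already gives a crude reaction predictor; supervised ML models refine exactly this scheme.

```python
# Accumulated statistics of previous states:
# (shown_strength, activity_level) -> observed enemy reaction.
history = [
    ((20, 0.9), "ATTACK"),
    ((80, 0.2), "HOLD"),
    ((30, 0.7), "ATTACK"),
    ((70, 0.4), "HOLD"),
]

def predict(state):
    """Predict the reaction from the nearest previously observed scenario."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda rec: dist(rec[0], state))[1]

assert predict((25, 0.8)) == "ATTACK"
assert predict((75, 0.3)) == "HOLD"
```

Such a predictor degrades gracefully with incomplete data, which is one of the required properties listed above.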

Thus, even at present the formulated task has candidate solutions, methods, and tools; what remains unsolved is the problem of evaluating and selecting them.

Having established the theoretical possibility of solving the problem of influencing AI within automated control systems, we recall again that misleading the enemy has always been part of tactics, operational art, and strategy.

The history of wars and military art offers examples of information management similar in character, though still intuitively planned: for instance, the information campaign about the coalition forces' unpreparedness for the attack on Iraq in 1991 [16]. There may be signs of such technologies in the "hybrid" conflicts of the last decade, but it is too early to judge specifically.

However, as already noted, these are examples of actions aimed at deceiving a person or a group of people, albeit carried out using information technology. The article sets a more "multi-layered" task: influencing the technical means of obtaining, processing and communicating information, which form the information picture for their users.

The task, as well as the proposed approaches to its solution, is formulated for the first time and described in the article in the most general form.

And if earlier deception of the enemy was an art devised by humans and aimed at humans, now it can become a science that shapes the data reaching the decision-maker through specialized techniques influencing the software and information components of control systems, primarily those using AI. The "classical" methods will, of course, continue to be used. Moreover, their effectiveness can potentially increase through a synergistic effect when they are combined with the approach proposed in the article.

The relevance of the formulated task is confirmed by changes in the information warfare process reflected in the NATO conceptual documents on the development of armed forces and combat operations: the NATO Warfighting Capstone Concept (NWCC), the NATO Artificial Intelligence Strategy, Joint Intelligence, Surveillance and Reconnaissance (JISR) and others. As part of this trend, for example, NATO electronic warfare units and formations are being reorganized for "cyber warfare," which implies broader functionality: a transition from influencing data exchange channels to hindering the use of information and influencing it directly. The next logical step in this process is the emergence of functionality for modifying information and managing its delivery. This step has not yet been taken, but readiness for the new situation should be formed now.

Based on this, the task formulated in the article is not only relevant, but also extremely timely.

 

Conclusion

The modern world is changing actively, and these changes naturally affect the conduct of armed confrontation. Under current conditions, focusing only on cognitive methods of information warfare aimed at human decision-making means condemning oneself to defeat in advance.

Based on this, we can conclude that the formulated task is relevant and rests on the integrated use of the following methods of information warfare:

- cognitive methods, not the "classical" direct ones, but those acting through intermediate software and hardware "intermediaries" that process information, managing the information handled by control automation and AI, which in turn prepare information for human decision-making;

- information methods, directed not at a person but at AI-based control systems;

- classical methods of misleading, applied taking into account the results of the two methods above.

What will the introduction of the proposed methods give rise to: a change in the content of operations, the emergence of a new type of combat support, or will these features simply modify some components of existing forms of armed struggle? Or perhaps a way will emerge to overcome the "positional crisis" in modern warfare, which many experts attribute to a radical increase in the information awareness of the warring parties? Practice will likely provide the final answer.

However, the examples discussed in the article do not exhaust the applications of management through information modification. Theoretically, such methods can be used in any field where there is confrontation: armed conflict, economic competition, information warfare and others.

In any case, the scientific and practical task formulated in the article, and the proposals for its solution, are important and relevant; solving this task will, among other things, provide answers to the modern challenges posed by the informatization of the battlefield and support the development of possible options for responding to them.

References
1. Litvinenko, V., & Dolmatov, V. (2023). High-precision artillery means of fire. Army collection, 7, 26-33. https://army.ric.mil.ru/upload/site175/DMJR9s055h.pdf
2. Matsulenko, V.A. (1975). Operational camouflage of troops (according to the experience of the Great Patriotic War). Moscow: Military publishing house.
3. Orlyansky, V.I. (2007). Operational camouflage or deception of the enemy. Military Thought, 7, 38-45.
4. Beketov, A.A., Belokon, A.P. & Chermashentsev, S.G. (1976). Masking the actions of ground troops. Moscow: Military Publishing House.
5. Grassegger, H., & Krogerus, M. (2016). Ich habe nur gezeigt, dass es die Bombe gibt. Das Magazin [DX Reader Version]. Retrieved from https://www.dasmagazin.ch/2016/12/03/ich-habe-nur-gezeigt-dass-es-die-bombe-gibt/?reduced=true
6. Brown Moses (2013). How I Accidentally Became An Expert On The Syrian Conflict. Sabotage Times, Jul 20, 2013. Retrieved from https://sabotagetimes.com/life/how-i-accidentally-became-an-expert-on-the-syrian-conflict
7. Tikhanychev, O.V. (2019). Internet intelligence as one of the threats of the information age. Security Issues, 2, 24-33. doi:10.25136/2409-7543.2019.2.26787
8. Williams, Brad D. (2021). NSA Renews Focus On Securing Military Weapons Systems Against ‘Capable’ Rivals. Breaking Defense, 10. Retrieved from https://breakingdefense.com/2021/10/nsa-ups-focus-on-securing-weapons-systems-amid-capable-multipolar-rvials
9. JP 3-13.4 (2017). Military Deception. Joint Chiefs of Staff.
10. Chekinov, S. G. & Bogdanov, S. A. (2017). Evolution of the essence and content of the concept of “war” in the 21st century, Military Thought, 1, 30-43.
11. Bartosh, A.A. (2019). Hybrid War Model. Military Thought, 5, 6-23.
12. Pershin, Yu.Yu. (2019). Hybrid Warfare: Much Ado About Nothing. Security Issues, 4, 78-109. doi:10.25136/2409-7543.2019.4.30374
13. Tikhanychev, O.V., & Tikhanycheva, E.O. (2023). Ensuring the secrecy of objects in the conduct of armed conflicts of varying intensity, as an important aspect of their protection: history, state, development of the process. Security Issues, 4, 126-151. doi:10.25136/2409-7543.2023.4.39371
14. Kurdenkova, E.O., Cherepnina, M.S., Chistyakova, A.S., & Arkhipenko, K.V. (2022). Effect of transformations on the success of adversarial attacks for Clipped BagNet and ResNet image classifiers. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS), 34(6), 101-116. doi:10.15514/ISPRAS-2022-34(6)-7
15. Chistyakova, A., Cherepnina, M., Arkhipenko, K., Kuznetsov, S.,  Oh, C.-S., & Park, S. (2021). Evaluation of interpretability methods for adversarial robustness on real-world datasets, Ivannikov Memorial Workshop (IVMEM) (pp. 6-10) Russian Federation: Nizhny Novgorod. doi:10.1109/IVMEM53963.2021.00007
16. Gorokhov, R. Yu. (2022). Development of the theory and practice of camouflage in the US military. Military Thought, 8, 147-156.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The article presents an in-depth analysis of the impact of information technologies on modern military operations, with an emphasis on their advantages and potential risks. The author uses a combination of historical analysis, modern examples and theoretical discussion to explore the evolution of military tactics influenced by technological advances, including the integration of artificial intelligence and big data into military strategy. The topic is relevant in the modern digital world, especially in the context of the intersection of information technology and military operations. The article stands out for its scientific novelty, offering an original look at the benefits of informatization and critically assessing potential vulnerabilities and challenges. The style is academic, the structure is well organized, and the content is suitable for a scientific audience. The extensive bibliography reflects the author's deep immersion in the topic. The author effectively examines possible counterarguments, especially regarding the reliability and safety of technology in war. The article provides an interesting analysis of the modification of the OODA cycle within the framework of modern military strategy. The author adapts the traditional concept of OODA (Observe - Orient - Decide - Act) to the conditions of information warfare, presenting it in the form of a two-circuit interconnected "loop". The first circuit describes the management of the enemy through the assessment of his reaction (See), the formation of assumptions (Think) and the embodiment of the plan (Do). The second circuit reflects the actions of the enemy based on the information they receive, including registration (See), reflection (Think) and execution of actions (Do). This modification of the OODA cycle highlights the complexity and interconnectedness of information warfare in the modern world. 
In addition, the author touches on the extremely relevant topic of the vulnerability of automated control systems (ACS) in the context of the use of artificial intelligence (AI) components. He analyzes in depth how the increased use of AI in automated control systems can lead to new methods of deception, both on the part of the enemy and in terms of defense. This coverage of potential vulnerabilities caused by a limited selection of basic AI components represents a significant contribution to understanding modern military strategies and information security. Such an analysis highlights the complexity and versatility of the problem, making the study particularly valuable and relevant. The conclusions emphasize the need for constant adaptation of military strategy to overcome the challenges associated with informatization, which makes the article interesting for readers interested in military strategy, information technology and security research. As a development of this work, the integration of additional real-world examples of the use of artificial intelligence for military purposes can be proposed, which would strengthen the argument and provide a deeper analysis. Highlighting specific AI technologies in military automated control systems would enhance understanding of their role in military strategy. An important aspect would also be the study of strategies to counter AI-related threats. Including the ethical and social aspects of the use of AI in the military sphere would enrich the work from a multidisciplinary point of view. In addition, describing potential future trends and scenarios in AI-based military technologies would help readers better understand the possible consequences of their use. In general, the article presents a thoughtful analysis of the intersection of technology and military operations, highlighting important aspects for modern warfare strategy.