Security Issues

Analysis of modern intelligent methods for protecting critical information infrastructure

Nikitin Petr Vladimirovich

ORCID: 0000-0001-8866-5610

PhD in Pedagogy

Associate Professor; Department of Artificial Intelligence; Financial University under the Government of the Russian Federation

49 Leningradsky Ave., Moscow, 125993, Russia

pvnikitin@fa.ru
Gorokhova Rimma Ivanovna

PhD in Pedagogy

Associate Professor; Department of Information Technology; Financial University under the Government of the Russian Federation

49 Leningradsky Ave., Moscow, 125167, Russia

rigorokhova@fa.ru

DOI: 10.25136/2409-7543.2024.3.69980

EDN: EXGKAV

Received: 27-02-2024

Published: 24-09-2024


Abstract: Critical information infrastructure (CII), including that of the financial sector, plays a key role in ensuring the sustainable functioning of economic systems and the financial stability of states. However, the growing digitalization of the financial industry and the introduction of innovative technologies open up new attack vectors for malicious actors. Modern cyber attacks are becoming more sophisticated, and traditional defenses prove ineffective against new, previously unknown threats, so more flexible and intelligent cybersecurity systems are urgently needed. The subject of the study is modern intelligent methods and technologies for protecting critical information infrastructure (CII) from cyber attacks. The object of the research is methods and means of protecting critical information infrastructure using artificial intelligence and machine learning technologies. The methodological basis of the study is a comprehensive analysis of the scientific literature on the use of intelligent methods and technologies to protect critical information infrastructure. A review and critical analysis of relevant scientific publications identified key problems and unresolved tasks that require further research and practical development in this subject area. This approach made it possible to form a holistic view of the current state and prospects of intelligent cybersecurity tools for critical financial systems and to identify priority areas for further research. The main elements of scientific novelty of this research are: 1. A detailed review of promising approaches based on artificial intelligence and machine learning technologies for effectively protecting CII organizations from modern, complex cyber attacks. 2. Identification and analysis of key scientific and technical problems that must be solved to increase the reliability and interpretability of, and trust in, intelligent cybersecurity systems, including robustness to attacks, active and online learning, and federated and differentially private data processing. 3. Identification of promising directions for further research and development in the application of specialized secure and trusted AI methods to the protection of critical financial infrastructure. The research thus makes a significant contribution to the scientific and methodological apparatus and to practical solutions for the use of intelligent methods in cybersecurity.


Keywords: Critical information infrastructure, information security, artificial intelligence, machine learning, deep learning, cybersecurity, cyber attacks, neural networks, intelligent methods, DDoS attacks


The article was prepared based on the results of research carried out at the expense of budgetary funds under the state assignment of the Financial University

Introduction

The critical information infrastructure (CII) of the financial sector plays a key role in ensuring the sustainable functioning of economic systems and the financial stability of states. Failures or disruptions of banking systems, payment services, exchanges and other financial organizations can have catastrophic consequences for the well-being of society. At the same time, the growing digitalization of the financial industry and the widespread introduction of innovative technologies are opening up new attack vectors for attackers.

Modern cyber attacks are becoming more sophisticated, using social engineering techniques, exploiting unknown zero-day vulnerabilities, distributed botnets and tools for automatic network scanning. Traditional defenses based on signature analysis and rules often prove ineffective against new, previously unknown threats. There is an urgent need for more flexible and intelligent cybersecurity systems.

The methods of artificial intelligence, machine learning and data mining open up fundamentally new opportunities for protecting the critical infrastructure of the financial sector. These technologies make it possible to automatically detect anomalies and signs of attacks in large volumes of heterogeneous data, identify complex patterns, build predictive models of cyber attacks and dynamically adapt to constantly changing threats.

Intelligent methods are capable of processing various types of cyber data: network traffic, event logs, hardware telemetry, SIEM system data, information from open sources and much more. Advanced deep learning technologies are used, including convolutional and recurrent neural networks, transformers, generative adversarial networks, clustering and visual analysis methods. This makes it possible to create comprehensive next-generation solutions for intrusion detection, intelligent monitoring and active protection of critical systems.

The relevance of research on the application of intelligent methods for protecting financial sector CII is due to several key factors:

1. The ever-increasing complexity and dynamism of the cybersecurity environment, which requires adaptive intelligent solutions.

2. The need for effective analysis of large volumes of heterogeneous data for timely detection of anomalies.

3. Ensuring the protection of critical systems in the face of an increasing number of potentially possible attack vectors.

4. The ability to predict and prevent cyber attacks at an early stage using big data analysis.

5. The need for interpretable and explainable solutions in the field of financial sector cybersecurity.

Thus, the research and development of intelligent methods for protecting the critical infrastructure of the financial sector is an important interdisciplinary task of strategic importance for ensuring national cybersecurity and the economic well-being of the state.

Research materials and methods

The methodological basis of this study is a comprehensive analysis of the scientific literature on the use of intelligent methods and technologies to protect critical information infrastructure. Let's look at how this topic is presented in various studies by both domestic and foreign authors.

The article by Gorbatov V.S. and co-authors [1] focuses on the need for multilevel protection of the CII network perimeter for effective cybersecurity. In it, the authors analyze the issues of cybersecurity of the network perimeter of critical information infrastructure (CII) facilities. The main threats and vulnerabilities, according to the authors, are related to malware, software vulnerabilities, DDoS attacks and social engineering. They offer a comprehensive approach that includes firewalls, network segmentation, encryption, software updates and staff training. The article also highlights the importance of stakeholder engagement and continuous improvement of protection methods to counter cyber threats.

In the work of Zuev V.N. [2], the application of deep learning methods for detecting anomalies in network traffic is investigated. The author notes that traditional approaches based on rules and signatures often prove ineffective against new and sophisticated attacks. Alternatively, it is proposed to use a convolutional neural network (CNN) to analyze network traffic in real time. Traffic data is presented in the form of two-dimensional images, which allows CNN to identify significant signs and patterns. Experiments conducted on public datasets have shown that the proposed method provides a higher accuracy of anomaly detection compared to traditional methods. Despite a number of limitations related to the need for large amounts of data for training and the possibility of bypassing the system, the author notes the prospects of using deep learning methods in cybersecurity tasks and the need for further research in this area.

In his article [3], A.M. Vulfin develops an integrated approach to assessing the security risks of critical information infrastructure (CII) facilities using data mining methods. The author notes the importance of timely and accurate risk assessment to ensure the continuity of the CII and prevent incidents. The paper presents a new approach based on the integration and analysis of data on security events, vulnerabilities and incidents accumulated during the operation of the CII. A set of models and methods is proposed, including the representation and integration of heterogeneous data, knowledge extraction using machine learning and deep learning, risk assessment taking into account the interrelationships of factors, as well as visualization and interpretation of results. An experimental assessment based on real data has shown that the developed approach provides a more accurate and complete risk assessment compared to traditional methods. The author notes the prospects for further research, including integration with risk management systems and the development of methods for securing the data used for risk assessment.

The research of Erokhin S. D. and Petukhov A. N. [4] develops a new approach to security management of critical information infrastructures (CII) based on the concept of asymptotic management. The authors note the complexity and dynamism of the environment in which CII operate, as well as the need to ensure their continuous operation. Existing approaches to security management are often not flexible enough and cannot adapt to rapidly changing conditions. The proposed approach assumes constant adaptation and improvement of the CII security system in accordance with changes in the environment and new threats. It is based on continuous monitoring and analysis of data on security events, vulnerabilities, incidents and other risk factors using data mining technologies. The authors present an architecture for asymptotic CII security management, which includes components for data collection and integration, intelligent analysis, risk assessment and decision-making, as well as visualization and interpretation of results. The advantages of the proposed approach are discussed, such as the flexibility and adaptability of the security system, as well as its limitations associated with high computational requirements and the need for high-quality source data.

The article [5] analyzes the problem of anomaly detection in cyber-physical systems, especially in the context of critical infrastructure protection. The author, Vegesna V. V., notes that traditional methods of anomaly detection based on rules and thresholds often prove ineffective for complex cyber-physical systems with large volumes of heterogeneous data. As an alternative, it is proposed to use machine learning approaches. The paper analyzes various methods: deep neural networks for time series analysis, isolation forest methods for detecting outliers, probabilistic models and hybrid approaches. For each method, the principle of operation, advantages and limitations are described in relation to the task of detecting anomalies in cyber-physical systems of critical infrastructure. A power grid management system is investigated as an example. An experimental comparison of various machine learning methods based on real data has been carried out. The results show that a hybrid approach combining several techniques is the most effective. The author concludes that the use of machine learning methods to protect critical infrastructure and detect anomalies in cyber-physical systems is promising. Directions for further research are noted, including the development of specialized methods for specific systems, the improvement of data processing methods and ensuring the security of the machine learning systems themselves.

The article by Selim G. E. I. et al., presented in the journal "Multimedia Tools and Applications" [6], is aimed at investigating the application of machine learning methods for detecting and classifying anomalies in the critical industrial infrastructure of the Internet of Things (IIoT). The authors note the growing importance of ensuring the security of IIoT systems that control vital processes. Traditional intrusion detection systems often turn out to be ineffective for IIoT due to the large volume and heterogeneity of data, complex network topology, and the need for real-time operation. As a solution, it is proposed to use machine learning algorithms to analyze IIoT data, identify abnormal events and classify them by type. The paper considers three main algorithms: a random forest, a support vector machine, and a multilayer perceptron. The mathematical foundations, advantages and disadvantages of each method are described. To assess the effectiveness, experiments were conducted on a data set containing real performance indicators of the IIoT control system of a gas processing plant. The results show that the random forest algorithm demonstrated the best accuracy of classification of abnormal events, reaching 96%, with a relatively short training time. The authors discuss possible ways to further improve performance and emphasize the importance of protecting critical IIoT infrastructure using machine learning methods. The need for further research using more extensive sets of real data and taking into account the specifics of various industrial sectors is noted.

The article by Pinto A. et al. [7] provides an overview of modern intrusion detection methods and systems based on machine learning technologies for protecting critical infrastructure. The review begins by identifying the main security threats to critical infrastructure systems and the disadvantages of traditional intrusion detection methods. The authors then describe the advantages and limitations of various types of intrusion detection systems (IDS): network-based, host-based and hybrid. The main part of the article is an overview of machine learning methods used in modern IDS: supervised, unsupervised, hybrid and deep learning methods. For each category, the principles of the algorithms and their strengths and weaknesses in intrusion detection tasks are described in detail. Special attention in the article [7] is paid to the preprocessing and selection of features for working with network traffic data and security event logs. The most promising methods and open problems are discussed, such as large data volumes, concept drift and the need to adapt to the specifics of various critical infrastructures. Pinto A. et al. conclude that deep learning methods, especially convolutional and recurrent neural networks, demonstrate the highest accuracy in detecting cyber attacks and anomalies, but that large sets of high-quality data are required for their successful application.

A system for detecting and analyzing cyber threats aimed at critical infrastructure using machine learning methods is presented in the study by Aragonés Lozano M., Pérez Llopis I. and Esteve Domingo M. [8]. The approach is based on the concept of "threat hunting", which involves an active search for signs of malicious activity. The authors [8] note that critical infrastructure is an attractive target for intruders, and traditional intrusion detection systems are often ineffective against modern cyber threats. The system proposed in the article consists of modules for data collection, preprocessing, machine learning and visualization of results. To detect anomalies and classify threats, the authors use random forest, isolation forest and one-class SVM algorithms. The system has been tested on real data from the telecommunications infrastructure of a large operator. The experimental results show the high efficiency of the machine learning ensemble in classifying various types of cyber threats, including the detection of new, previously unknown attacks. Aragonés Lozano M. et al. discuss the advantages of the proposed approach, such as its ability to generalize and adapt to changing threats, as well as the limitations associated with the need to constantly update the system as new types of attacks appear. The promise of machine learning methods for effectively countering cyber threats to critical infrastructure is noted.

A comprehensive review of the cybersecurity problems of critical infrastructures in the context of modern threats, including those related to the use of artificial intelligence (AI) technologies, is given in the article by Raval K. J. et al. [9]. The authors analyze the most common cyber attacks that pose a threat to critical infrastructure systems, such as denial-of-service attacks, malware, attacks on industrial control systems, cyber espionage, etc. Special attention is paid to new threats related to the development of AI. The approaches considered for ensuring the cybersecurity of critical infrastructures using AI and machine learning, together with the tasks analyzed (detecting intrusions and anomalies, identifying vulnerabilities, responding to incidents, protecting industrial control systems and AI itself from attacks), confirm the need to study CII threats. The numerous examples from the literature on the use of AI for security in various sectors indicate the relevance of the study. The article presents unresolved problems and future research directions, including ensuring the interpretability and explainability of AI solutions, developing machine learning methods that remain robust under adversarial conditions, developing standards and methodologies for risk assessment, and solving problems of confidentiality and ethics in AI applications.

In the study [10], Alqudhaibi A. et al. propose a somewhat different, proactive approach to predicting cyber threats in order to protect critical infrastructure in the context of Industry 4.0. The key idea is to analyze the motivations and behavioral models of potential attackers using machine learning methods. The authors argue that forecasting future cyber attacks can be improved by understanding the motivations and characteristics of those who carry them out. The developed system consists of modules for collecting and enriching data on cyber incidents, analyzing attackers' motivations, predicting threats and assessing risks. Machine learning algorithms such as random forest and neural networks, trained on historical data about attacks and motivations, are used for forecasting. The results of the experimental evaluation showed that the proposed approach demonstrates higher threat prediction accuracy compared to traditional methods, reaching about 85-90% accuracy for different types of attacks. In conclusion, the authors discuss the limitations of the work, such as the need for constant data updates and possible inaccuracies in determining the real motivations of the attackers. The importance of observing ethical standards in the collection and use of data is also noted.

Bochkov M. V. and Vasinev D. A. in their study [11] propose a new approach to modeling and assessing the stability of critical information infrastructure (CII) using the mathematical apparatus of hierarchical hypernetworks and Petri nets. The stability of the CII is extremely important, since many critical systems depend on its smooth operation. The authors note that traditional modeling methods often do not take into account the complex hierarchical structure of the CII and the relationships between its components. The proposed approach is based on a combination of two mathematical models: hierarchical hypernetworks to represent the structure of the CII and Petri nets to simulate the dynamics of its functioning, including possible failures, attacks and recovery processes. The use of hierarchical hypernetworks makes it possible to take into account the complex topology of the CII, while Petri nets are used to describe the behavior of components in various situations. Based on the developed model, the authors propose metrics for assessing the resilience of the CII, such as the probability of failure, average uptime and average recovery time. The results of modeling the resilience of the CII using the example of energy and telecommunications infrastructure show that the proposed approach allows for a more accurate assessment of vulnerabilities and resilience compared to traditional methods. In conclusion, Bochkov M. V. and Vasinev D. A. discuss the limitations of the work associated with the need to specify a large number of model parameters and the complexity of calculations for large-scale infrastructures; nevertheless, the prospect of using this approach for modeling other types of complex systems is noted.

In the study of Petrov A.D., Kharchenko E. A. [12], the problem of timely detection of abnormal server conditions is considered, which is an important task to ensure the reliability and security of information systems. The main idea of the method proposed by the authors is to use morphological analysis to identify anomalies in multimodal time series characterizing the operation of the server. The proposed methodology includes the following main stages: collecting and preprocessing server monitoring data; converting data into a multimodal time series of feature vectors; using morphological analysis to identify structural anomalies; detecting abnormal states based on structural anomaly coefficients; visualization and further analysis of identified anomalies. The key element of the method is the application of morphological operations sensitive to local and global changes in the shape of time series curves. An experimental assessment based on real monitoring data of computing nodes has shown the effectiveness of the proposed approach in detecting both sudden and long-term anomalies. In conclusion, the advantages of the morphological approach are discussed, such as the absence of the need for manual identification of features and the ability to detect anomalies of various nature, as well as the limitations and prospects for further development of the method.

Tsibizova T. Yu., Panilov P. A., Kocheshkov M. A. [13] propose an approach to monitoring the security of critical information infrastructures based on cognitive modeling, which consists in the following:

- building a cognitive model that includes concepts (threats, vulnerabilities, protection measures) and cause-and-effect relationships between them;

- determination of relationship weights based on expert assessments or training;

- modeling the dynamics of the system and analyzing the results obtained to identify critical threats and vulnerabilities;

- monitoring changes in the security status over time and making management decisions.

Advantages of the approach: the ability to take into account vague and incomplete knowledge, flexibility of the model, visual presentation. Limitations: the subjectivity of determining weights, the need for expert knowledge. The authors emphasize the prospects of using cognitive modeling to ensure the safety of critical infrastructures and suggest directions for further research.

The problem of ensuring the protection of critical information infrastructures from distributed denial of service (DDoS) attacks is analyzed in the article by Voevodin V. A. et al. [14]. The authors propose a methodology for assessing the security of automated control systems (ACS) of critical infrastructure from DDoS attacks based on simulation modeling using the Monte Carlo method.

Voevodin V. A. and co-authors identify the following main stages of the methodology:

1. Formation of an automated control system model and a security threat model in the form of a queuing network.

2. Determination of model parameters based on expert assessments and statistical data.

3. Development of a simulation model of automated control system functioning under DDoS attacks.

4. Conducting multiple runs of the simulation model with different initial conditions.

5. Collection and statistical processing of simulation results.

6. Assessment of the security of the automated control system based on the data obtained.

7. Development of recommendations for improving security.

The authors tested the methodology on a model example of an automated control system of an energy facility; they also note the advantages of simulation modeling, such as the ability to take a large number of factors into account and to obtain statistically reliable estimates, and its applicability in other areas of the economy and industry.
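
By way of illustration only (this is not the queuing-network model of [14]), the sketch below shows the general idea of steps 3-5: an automated control system is approximated by a single-server queue, and repeated Monte Carlo runs estimate the share of requests served within an acceptable waiting time under normal and DDoS-like load. All rates and thresholds are hypothetical values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def waiting_times(arrival_rate, service_rate, n_requests):
    """Waiting times in a single-server queue (Lindley's recursion)."""
    inter_arrivals = rng.exponential(1.0 / arrival_rate, n_requests)
    service = rng.exponential(1.0 / service_rate, n_requests)
    w = np.zeros(n_requests)
    for i in range(1, n_requests):
        w[i] = max(0.0, w[i - 1] + service[i - 1] - inter_arrivals[i])
    return w

def share_within_sla(arrival_rate, service_rate=100.0, sla=0.5,
                     n_runs=200, n_requests=2000):
    """Monte Carlo estimate of the share of requests waiting no longer than the SLA."""
    shares = [np.mean(waiting_times(arrival_rate, service_rate, n_requests) <= sla)
              for _ in range(n_runs)]
    return np.mean(shares), np.std(shares)

# Hypothetical arrival rates (requests per second): normal load vs. DDoS-like load
for rate in (60.0, 95.0, 130.0):
    mean_share, spread = share_within_sla(rate)
    print(f"arrival rate {rate:6.1f}: share within SLA = {mean_share:.3f} +/- {spread:.3f}")
```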

The article by A. S. Lyubukhin [15] examines methods of analysis and assessment of information security risks based on the fuzzy logic apparatus. The author notes that traditional methods often rely on subjective expert assessments containing uncertainties.

The proposed methodology includes the following main steps:

1. Defining a set of linguistic variables to describe risks.

2. Formation of a base of fuzzy rules for risk assessment based on expert knowledge.

3. Selection of membership functions for fuzzy variables.

4. Fuzzification of input variables.

5. Aggregation of fuzzy rules using fuzzy logic operations.

6. Defuzzification of the output variable (determination of a specific risk value).

7. Ranking and decision-making on risk management.

The paper provides an example of applying the methodology to assessing the risks associated with computer viruses. The advantages of the fuzzy logic approach are noted, such as the ability to work with uncertainty and the use of qualitative linguistic assessments. The proposed approach highlights the promise of fuzzy logic methods for risk analysis under the uncertainty characteristic of the field of information security.
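
As an illustration of steps 3-6 of such a methodology (membership functions, fuzzification, rule aggregation and defuzzification), the sketch below implements a tiny Mamdani-style fuzzy risk assessment in plain numpy. The linguistic variables, membership functions and rules are hypothetical and are not taken from [15].

```python
import numpy as np

risk_universe = np.linspace(0, 10, 101)  # output scale for "risk" (hypothetical)

def tri(x, a, b, c):
    """Triangular membership function with peak at b (shoulders allowed: a == b or b == c)."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def fuzzify(value):
    """Fuzzification of a crisp input given on a 0..10 scale."""
    return {"low": float(tri(value, 0, 0, 5)),
            "medium": float(tri(value, 2, 5, 8)),
            "high": float(tri(value, 5, 10, 10))}

def assess_risk(probability, damage):
    p, d = fuzzify(probability), fuzzify(damage)
    # Toy rule base aggregated with fuzzy AND (min) / OR (max)
    w_high = min(p["high"], d["high"])     # IF probability high AND damage high THEN risk high
    w_med = max(p["medium"], d["medium"])  # IF probability medium OR damage medium THEN risk medium
    w_low = min(p["low"], d["low"])        # IF probability low AND damage low THEN risk low
    aggregated = np.maximum.reduce([
        np.minimum(w_low, tri(risk_universe, 0, 0, 4)),
        np.minimum(w_med, tri(risk_universe, 3, 5, 7)),
        np.minimum(w_high, tri(risk_universe, 6, 10, 10)),
    ])
    # Defuzzification by the centroid method gives a crisp risk score
    return float(np.sum(risk_universe * aggregated) / (np.sum(aggregated) + 1e-9))

print(assess_risk(probability=8.0, damage=9.0))  # high risk, near the top of the scale
print(assess_risk(probability=2.0, damage=3.0))  # low-to-medium risk
```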

The article by Zhang Y. et al. [16] is devoted to the problem of predicting links in knowledge graphs that combine information security requirements and data on cybersecurity threats. The authors propose a new method for link prediction based on the propagation of information along the edges of the graph (Edge Propagation).

The main components of the proposed approach are: encoding the graph's nodes and edges into vector representations; iterative transfer of information between node representations through intermediate edges; forming vector representations of node pairs that take into account their shared connections at different distances; and training a classifier (logistic regression) on these vector representations to predict the presence of a link.

Experiments on real and synthetic datasets have shown that the proposed method outperforms other approaches to link prediction in knowledge graphs. The key advantages of the method include the use of vector representations to encode semantics, accounting for the structure of connections at different levels, the ability to work with incomplete data and heterogeneous graphs, and interpretability through the use of explicit node pairs. In conclusion, potential areas for further research are discussed, including extending the method to account for dynamics and integrating it with decision-making systems.

The use of graph theory for visual analysis and modeling of cyber attacks is also considered in the article by Rabzelj M., Bohak C., Južnič L. Š., Kos A. and Sedlar U. [17]. The authors propose an approach based on creating a graph model in which nodes represent objects (vulnerabilities, attack vectors, malicious actions), and edges reflect possible transitions between them during an attack.

The article identifies the key components of the approach:

1. A cyberattack graph model with different types of nodes and weighted edges.

2. Algorithms for automatic graph construction based on input data.

3. Graph analysis methods such as shortest path search, critical node detection, clustering.

4. Visualization of the cyberattack graph using various representations.

5. Interface for cybersecurity analysts.

Experiments with real data have shown the applicability of the approach for risk assessment, planning protective measures and analyzing the consequences of cyber attacks. According to Rabzelj M., Bohak C., Južnič L. Š., Kos A. and Sedlar U., the main advantages of the graph model include:

- visual representation of complex processes and relationships;

- the ability to identify critical components and vulnerabilities;

- support for "what-if" scenario analysis and modeling;

- integration of heterogeneous threat data from various sources.

These advantages allow us to conclude that further research in this area is promising, including integration with intrusion detection systems and the use of machine learning methods.
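
A minimal sketch of the graph analysis methods listed above (cheapest attack path and critical-node detection), using the networkx library; the nodes, transitions and edge weights are entirely hypothetical and only illustrate the idea of the model described in [17].

```python
import networkx as nx

# Hypothetical attack graph: nodes are attacker states/assets, edge weights
# approximate the attacker's "cost" (difficulty) of each transition.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("internet", "phishing_email", 1.0),
    ("internet", "vpn_exploit", 3.0),
    ("phishing_email", "workstation", 1.5),
    ("vpn_exploit", "workstation", 1.0),
    ("workstation", "domain_controller", 4.0),
    ("workstation", "payment_gateway", 5.0),
    ("domain_controller", "payment_gateway", 1.0),
])

# Cheapest attack path to the critical asset (shortest path by edge weight)
path = nx.shortest_path(G, "internet", "payment_gateway", weight="weight")
cost = nx.shortest_path_length(G, "internet", "payment_gateway", weight="weight")
print("cheapest attack path:", " -> ".join(path), "| cost:", cost)

# Critical intermediate nodes: high betweenness means many attack paths pass through them
centrality = nx.betweenness_centrality(G, weight="weight")
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(f"critical node: {node:20s} betweenness = {score:.3f}")
```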

The article by S. L. Larionova [18] is devoted to the urgent problem of countering fraud in online financial services. The author analyzes the main types of fraudulent activity and proposes a set of mechanisms for their prevention, detection and response. Key elements of the approach include strict user identification, the use of scoring models and machine learning algorithms, as well as means of blocking fraudulent transactions. Special attention is paid to raising the overall level of cybersecurity and protecting confidential customer data on the basis of a balanced approach that does not complicate the provision of financial services to bona fide users.

Berardi D. et al. [19] propose an approach to security in networks with strict timing requirements (Time Sensitive Networking, TSN), in particular to the security problems of the Precision Time Protocol (PTP). The researchers analyze the main security threats to PTP, including substitution of the synchronization master, man-in-the-middle attacks, distributed attacks and denial of service. A taxonomy of these threats at different levels of the architecture is proposed. A prototype of a secure PTP implementation using cryptographic protocols and certificate-based authentication mechanisms has been developed. The experimental assessment showed an acceptable level of security overhead. Berardi D. et al. note the critical importance of securing PTP and other TSN components to protect industrial control systems from malicious influences. Further research directions include integrating the proposed protection mechanisms into existing systems and developing methods to counter distributed attacks on time synchronization.

The authors Kim T. and Pak W. [20] explore the application of transformer-based deep learning methods to the task of network intrusion detection (NID). The article proposes a new approach in which the input network data is converted into images and processed by a set of parallel transformer networks. Each transformer specializes in a specific type of attack or anomaly in network traffic. The outputs of the parallel transformers are then combined using fully connected layers to obtain the final classification. Experiments conducted on the NSL-KDD and CSE-CIC-IDS2018 datasets demonstrate the superiority of the proposed method over a number of existing deep learning approaches to NID. Key advantages include high detection efficiency for various types of attacks, scalability and the ability to interpret the model's decisions. The authors also note the limitations of the approach associated with high computational requirements and the need for pre-filtering of data.

The analysis shows the need to address issues related to the search for modern intelligent methods of protecting critical information infrastructure.

The following conclusions can be drawn from a comprehensive analysis of the scientific literature:

1. The application of machine learning and artificial intelligence methods is a key area for detecting anomalies, detecting cyber attacks and ensuring the protection of critical infrastructure in the financial sector.

2. Various deep learning technologies are widely used, such as convolutional neural networks, recurrent networks, autoencoders and transformers. These approaches demonstrate high efficiency in the tasks of anomaly detection and classification of cyber attacks.

3. To improve the accuracy and efficiency of intrusion detection systems, ensemble methods are used that combine several machine learning models specialized in different types of attacks or security aspects.

4. Methods are proposed for converting various types of data (network traffic, logs, device data) into formats suitable for analysis using computer vision and image processing methods.

5. Architectures and methods of asymptotic security management of critical infrastructure are being developed to ensure the adaptability and self-tuning of protection systems in a dynamically changing threat environment.

6. For a comprehensive assessment of the security risks of critical facilities, fuzzy logic, cognitive modeling and hierarchical hypernetwork methods are used to take into account uncertainty and the relationship between various risk factors.

7. Considerable attention is paid to the visualization and interpretability of machine learning models used to protect critical infrastructure, for example, through visual analysis of cyberattack graphs.

8. Proactive approaches to cybersecurity are being investigated, based on the analysis of the motives of violators and the prediction of possible threats using artificial intelligence methods.

9. Specialized solutions are being developed to ensure the security of online financial services, including anti-fraud mechanisms based on scoring models, anomaly detection and the use of biometric technologies.

10. The issues of ensuring the security of the industrial Internet of Things and critical infrastructure components within the framework of the Industry 4.0 concept are analyzed.

11. Methods of protecting time synchronization protocols and other network infrastructure components with strict requirements for time characteristics using cryptographic mechanisms are being investigated.

In general, an analysis of the literature has shown that intelligent methods based on machine learning and artificial intelligence are key technologies for ensuring effective protection of critical infrastructure of the financial sector from modern cyber attacks and security threats.

However, despite significant achievements in the application of intelligent machine learning and artificial intelligence methods to protect the critical infrastructure of the financial sector, there are a number of problems and unresolved tasks that require further research and development, among which the main ones can be distinguished:

1. Ensuring the interpretability and explainability of AI models used to make critical decisions in the field of cybersecurity. It is necessary to develop methods of visualization, analysis of neural network activations and extraction of interpreted rules from the "black boxes" of deep learning.

2. Development of methods for assessing confidence and uncertainty in the conclusions of intelligent threat detection systems to increase reliability and make informed decisions about the response.

3. The study of the resistance of machine learning algorithms to attacks aimed at causing errors in the decision-making process (adversarial attacks). Methods are needed to ensure robustness to such attacks.

4. Development of active and online learning methods for timely adaptation of security systems to changing conditions and new types of threats without the need for complete retraining.

5. Improving federated learning and differentially private learning approaches for combining heterogeneous confidential data from different parties without disclosing private information.

6. Research of methods for integrating expert knowledge into machine learning models by hybridization with logical and production rules to increase interpretability and efficiency.

7. Development of methods for automatic extraction of features and representations based on unsupervised learning in relation to cybersecurity tasks.

8. Development of methods for planning coordinated actions of active protection systems for critical infrastructure based on situational analysis and predictive models of attackers.

9. Creation of scalable platforms for testing, verification and comparison of various intelligent security methods using realistic datasets and attack scenarios.

10. Research on the integration of specialized methods of secure and trusted AI with traditional information security technologies to build integrated cybersecurity systems.

Experimental verification

Let us test these conclusions experimentally. Using machine learning and deep learning methods, the authors conducted a study aimed at detecting fraud in the financial sector.

The "Credit Card Fraud Detection" set was used as a dataset (https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud ).

This dataset contains information about the transactions of European credit card holders. It comprises 284,807 transactions, of which 492 are fraudulent, which is only 0.172%. The data includes numerical variables obtained using principal component analysis (PCA). The variables "Time" and "Amount" have not been transformed. The "Time" variable reflects the time in seconds between each transaction and the first one, and the "Amount" variable represents the transaction amount. The target variable "Class" takes the value 1 in case of fraud and 0 otherwise.
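
As a minimal sketch of working with this dataset, the fragment below loads the CSV file (assumed to have been downloaded from Kaggle and saved locally under the hypothetical name creditcard.csv) and checks the class imbalance described above.

```python
import pandas as pd

# Kaggle "Credit Card Fraud Detection" dataset, saved locally (the path is an assumption)
df = pd.read_csv("creditcard.csv")

print(df.shape)                    # expected: (284807, 31)
print(df["Class"].value_counts())  # 0 - legitimate, 1 - fraud (492 rows)
print(f"share of fraudulent transactions: {df['Class'].mean() * 100:.3f}%")  # ~0.172%
```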

If we solve this binary classification problem head-on, using any machine learning algorithm without prior data analysis, and take Accuracy, calculated by formula (1), as the quality metric, we will obtain a very high result, at the level of 99%, even if the algorithm never correctly predicts class "1":

Accuracy = (TP + TN) / (TP + TN + FP + FN),   (1)

where TP (true positives) is the number of objects of the target class predicted correctly;

TN (true negatives) is the number of objects of the zero class predicted correctly;

FP (false positives) is the number of objects incorrectly predicted as the target class;

FN (false negatives) is the number of objects incorrectly predicted as the zero class.

Therefore, what is really needed is data analysis, a choice of class balancing methods (or evidence that balancing is not needed), an informed choice of evaluation metrics, and an analysis of machine and deep learning methods.
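
To illustrate why a head-on use of Accuracy is misleading here, the sketch below (continuing the previous fragment and assuming scikit-learn) fits a trivial baseline that always predicts the majority class "0": its accuracy is already about 99.8%, while its ROC-AUC remains 0.5.

```python
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X = df.drop(columns=["Class"])
y = df["Class"]
# The stratified split keeps the 0.172% fraud share in both samples
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

dummy = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_val, dummy.predict(X_val)))             # ~0.998
print("ROC-AUC :", roc_auc_score(y_val, dummy.predict_proba(X_val)[:, 1]))  # 0.5
```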

As part of the study, more than 20 machine learning and deep learning models were tested in six different scenarios:

1. Without outlier preprocessing and without class balancing.

2. With outlier preprocessing, without class balancing.

3. Without outlier preprocessing, with balancing via class weighting.

4. With outlier preprocessing and balancing via class weighting.

5. Without outlier preprocessing, with SMOTE for balancing.

6. With outlier preprocessing and SMOTE for balancing.
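
The balancing options in scenarios 3-6 could be set up, for example, as follows (a sketch assuming scikit-learn and the imbalanced-learn package; the choice of LogisticRegression is illustrative): class weighting is handled by the class_weight parameter, while SMOTE is placed inside a pipeline so that synthetic minority samples are generated only from the training part of each split.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Balancing via class weighting: minority-class errors are penalized more heavily
weighted_model = ImbPipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
])

# Balancing via SMOTE: synthetic fraud examples are generated inside the pipeline,
# i.e. only from training data, which prevents leakage into validation folds
smote_model = ImbPipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=42)),
    ("clf", LogisticRegression(max_iter=1000)),
])
```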

The main metric for evaluating the models will be ROC-AUC, as it most accurately reflects information about the True Positive Rate (TPR) and False Positive Rate (FPR) depending on the decision threshold. When converting a real-valued output into a binary label, a threshold must be set at which the value 0 turns into 1; although a threshold of 0.5 is usually used, it may not be optimal in the case of unbalanced classes. For an overall assessment of models that does not depend on a specific threshold, AUC-ROC is used, which is the area under the curve reflecting the relationship between TPR (2) and FPR (3):

TPR = TP / (TP + FN),   (2)

FPR = FP / (FP + TN).   (3)

TPR reflects recall (completeness), while FPR indicates the proportion of negative-class objects that the algorithm mistakenly classifies as positive. For an ideal classifier, AUC-ROC equals 1 (FPR = 0, TPR = 1). If the classifier produces random predictions, AUC-ROC tends to 0.5, since TPR and FPR are then equal at every threshold. Each point on the graph corresponds to a certain threshold, and the area under the curve serves as an indicator of the quality of the algorithm: the larger the area, the higher the quality. The steepness of the curve is also important, as we strive to maximize TPR and minimize FPR so that the curve approaches the point (0,1). The example in Figure 1 shows that the current model lags behind the ideal, which is represented by an isosceles triangle resting on the diagonal. In addition, it can be seen that allowing some errors on class 0 can improve the detection of class 1. The balance between TPR and FPR should be determined individually for each specific case.

Figure 1 - ROC-AUC

Balancing the TPR and FPR metrics is a difficult task. Lowering the threshold for identifying the target class "1" may lead to blocking a significant number of legitimate transactions, which will reduce customer loyalty. At the same time, minimizing blocking can increase the risk of fraud and also cause dissatisfaction. Therefore, the decision threshold should be determined according to the needs of the bank.
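
A sketch of how ROC-AUC and a bank-specific decision threshold could be computed with scikit-learn, continuing the earlier fragments; the 1% FPR budget used here is a hypothetical business constraint, not a value from the article.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

model = smote_model.fit(X_tr, y_tr)        # any pipeline from the sketch above
scores = model.predict_proba(X_val)[:, 1]  # real-valued fraud scores
print("ROC-AUC:", roc_auc_score(y_val, scores))

# Every ROC point corresponds to a threshold; here we pick the threshold that
# maximizes TPR while keeping FPR (the share of blocked legitimate clients) under 1%
fpr, tpr, thresholds = roc_curve(y_val, scores)
ok = fpr <= 0.01
best = np.argmax(tpr[ok])
print("threshold:", thresholds[ok][best],
      "TPR:", tpr[ok][best], "FPR:", fpr[ok][best])
```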

All models will be trained according to the same principle: transformations will be fitted on the training sample and then only applied to the validation sample. Each model is built as a pipeline of transformations, with a stratified split of the samples. For example, if the target class makes up 0.17% of the initial set, a similar ratio will be preserved in both the training and validation sets. The initial transformations include scaling with StandardScaler and RobustScaler.
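
A sketch of such a leakage-free pipeline (assuming scikit-learn; the assignment of RobustScaler to "Amount" and StandardScaler to "Time" is an assumption for illustration, since the article does not state which scaler is applied to which column):

```python
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler, StandardScaler
from sklearn.svm import SVC

# Only the untransformed columns are scaled; the PCA components V1..V28 pass through
preprocess = ColumnTransformer(
    [("amount", RobustScaler(), ["Amount"]),   # robust to extreme transaction amounts
     ("time", StandardScaler(), ["Time"])],
    remainder="passthrough",
)

# SVC is shown only to mirror the article's model list; on the full sample it trains slowly
svc_model = Pipeline([
    ("prep", preprocess),
    ("clf", SVC(probability=True, class_weight="balanced")),
])

# fit() learns the scaling parameters on the training sample only;
# on the validation sample they are applied as-is, never re-fitted
svc_model.fit(X_tr, y_tr)
val_scores = svc_model.predict_proba(X_val)[:, 1]
```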

A training graph will be built for each model, reflecting the dependence of the results on the sample size, indicating the confidence interval for the ROC-AUC metric. The learning curve illustrates the impact of the training sample size on model performance and cross-validation, allowing you to answer two main questions:

1. How does the performance of the model change with increasing data volume?

2. How sensitive is the model to errors due to variance compared to errors due to bias?

In addition, a grid search will be applied for each model to optimize hyperparameters, and training will be conducted only after selecting the best hyperparameter values.
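
A sketch of the learning curve and grid search steps (assuming scikit-learn and the pipelines defined earlier; the parameter grid and fold count are illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, learning_curve

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Learning curve: ROC-AUC as a function of the training-set size, with a spread estimate
sizes, train_scores, val_scores = learning_curve(
    weighted_model, X_tr, y_tr, cv=cv, scoring="roc_auc",
    train_sizes=np.linspace(0.1, 1.0, 5), n_jobs=-1)
for n, m, s in zip(sizes, val_scores.mean(axis=1), val_scores.std(axis=1)):
    print(f"{n:7d} training samples: ROC-AUC = {m:.3f} +/- {s:.3f}")

# Hyperparameter search; the model is refitted on the best values found
grid = GridSearchCV(weighted_model,
                    param_grid={"clf__C": [0.01, 0.1, 1.0, 10.0]},
                    scoring="roc_auc", cv=cv, n_jobs=-1)
grid.fit(X_tr, y_tr)
print("best parameters:", grid.best_params_, "CV ROC-AUC:", grid.best_score_)
```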

Table 1 shows the results of the tested models.

Table 1 - ROC-AUC metric for the models on different data sets

| Model | Outliers | Set without transformations | Set using SMOTE | Set with class weighting |
|---|---|---|---|---|
| LogisticRegression | with outliers | 0.972423 | 0.972235 | 0.973915 |
| LogisticRegression | without outliers | 0.976971 | 0.975332 | 0.973418 |
| SVC | with outliers | 0.964920 | 0.979727 | 0.977112 |
| SVC | without outliers | 0.962544 | 0.976860 | 0.969799 |
| KNeighborsClassifier | with outliers | 0.953505 | 0.957686 | - |
| KNeighborsClassifier | without outliers | 0.951753 | 0.957471 | - |
| DecisionTreeClassifier | with outliers | 0.918165 | 0.90702 | 0.887544 |
| DecisionTreeClassifier | without outliers | 0.925287 | 0.924379 | 0.919717 |
| StackingClassifier | with outliers | 0.943349 | 0.952068 | 0.972654 |
| StackingClassifier | without outliers | 0.953258 | 0.949473 | 0.977153 |
| BaggingClassifier | with outliers | 0.972654 | 0.972139 | - |
| BaggingClassifier | without outliers | 0.977153 | 0.975305 | - |
| AdaBoostClassifier | with outliers | 0.918156 | 0.952031 | - |
| AdaBoostClassifier | without outliers | 0.925261 | 0.954966 | - |
| GradientBoosting | with outliers | 0.927943 | 0.972651 | - |
| GradientBoosting | without outliers | 0.914239 | 0.981783 | - |
| NeuralNetwork | with outliers | 0.977698 | 0.972294 | 0.972206 |
| NeuralNetwork | without outliers | 0.980729 | 0.974456 | 0.976503 |

The KNeighborsClassifier model does not support class weighting; its best results were obtained with the SMOTE method, which turned out to be practically insensitive to the presence of outliers, although the best value was obtained with outliers retained. Class-weighted sets also cannot be used with the BaggingClassifier, AdaBoostClassifier and GradientBoosting models. The ROC-AUC metric for BaggingClassifier was highest on the dataset without any transformations, while the ROC-AUC for AdaBoostClassifier and GradientBoosting was higher when using SMOTE. The best model was determined by the ROC-AUC metric: first the optimal model among classical machine learning methods was chosen, and then the best approach for a neural network. As a result, the 5 machine learning models with the best scores were selected:

- (37) GradientBoostingClassifier, trained on the SMOTE-balanced set with outliers removed;

- (8) SVC, trained on the SMOTE-balanced set with outliers retained;

- (27) BaggingClassifier, trained on the original set with outliers removed;

- (10) SVC, trained on the original class-weighted set with outliers retained;

- (1) LogisticRegression, trained on the original data set with outliers removed.

The ROC curves for these models are plotted in Figure 2.

Figure 2 – ROC curve

Depending on what is more important:

- identify all fraudulent transactions;

- identify as many fraudulent transactions as possible while misclassifying as few legitimate transactions as fraudulent as possible.

There are two favorites:

- in the first case, it is GradientBoostingClassifier 37;

- in the second case, it is BaggingClassifier 27 or SVC 8.

Let us assume that a large number of transactions erroneously identified as fraudulent does more harm than good. We therefore choose BaggingClassifier 27 as the best model.

The following data processing options have demonstrated the best results for a neural network:

- the original dataset with outliers removed;

- the original dataset with outliers retained;

- class weighting with outliers removed.

The curve is shown in Figure 3.

Figure 3 – ROC curve for nn

Model 39 performed better than the others.

Figure 4 shows the final comparison of the best machine learning and deep learning models.

Figure 4 – ROC curve for nn and BaggingClassifier

The neural network model clearly shows the best results. Therefore, neural network 39, trained on the original dataset with outliers removed, is chosen as the main model.
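
The article does not describe the architecture of neural network 39, so the sketch below shows only a hypothetical minimal Keras MLP for this tabular task, trained with ROC-AUC monitoring and early stopping; layer sizes, dropout and training settings are assumptions, and the inputs are expected to be scaled (for example, by the pipeline shown earlier).

```python
from tensorflow import keras

def build_fraud_mlp(n_features: int) -> keras.Model:
    """A small MLP for tabular fraud detection (illustrative architecture)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # fraud probability
    ])
    model.compile(optimizer=keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.AUC(curve="ROC", name="roc_auc")])
    return model

mlp = build_fraud_mlp(X_tr.shape[1])
mlp.fit(X_tr, y_tr,
        validation_data=(X_val, y_val),
        epochs=20, batch_size=2048,
        callbacks=[keras.callbacks.EarlyStopping(
            monitor="val_roc_auc", mode="max", patience=3,
            restore_best_weights=True)],
        verbose=2)
print("validation ROC-AUC:", mlp.evaluate(X_val, y_val, verbose=0)[1])
```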

According to the results of the examination, it was established that the main scientific and practical results obtained during the performance of research work can be published in open printed and electronic scientific publications.

Results and discussion

Thus, the conducted experiment confirms the problems highlighted by the authors above. Indeed, making critical decisions in the field of cybersecurity requires explainable AI models and ensuring their interpretability. Methods for assessing the confidence and uncertainty of the conclusions of intelligent systems are an important component in the development of algorithms for protecting CII. It is necessary to explore methods for integrating expert knowledge into machine learning models by hybridizing them with logical and production rules to increase the interpretability and effectiveness of models. The task of creating scalable platforms for testing, verification and comparison of various intelligent security methods using realistic datasets and attack scenarios remains urgent. The integration of specialized methods of secure and trusted artificial intelligence with traditional information security technologies is essential for building holistic cybersecurity systems.

The conducted research highlights the need to use intelligent methods to protect the critical information infrastructure of the financial sector and to address the following main problems:

1. Ensuring security by developing methods and technologies that increase the level of security of financial systems from cyber threats and other risks. The solution to this problem may include both technical solutions (for example, intrusion detection and prevention systems, data encryption) and organizational measures (information security policies, incident response plans).

2. Creating reliable systems that can function effectively under various threats and workloads, which is necessary to maintain the stability of the financial sector. This applies first of all to intelligent protection systems and implies the development of fault-tolerant architectures, self-healing algorithms and adaptation to changing conditions.

3. Development of artificial intelligence systems that are transparent and understandable to users and specialists, with regard to both the operation of the algorithms and the decision-making process. Interpretable models can be used in such systems, and auditing of AI decision-making can be provided.

4. Increasing the level of trust in artificial intelligence systems on the part of users and regulators. The financial industry serves the vital interests of citizens, businesses and the state. The introduction of AI technologies in this area requires a high level of trust on the part of users and regulators, since errors or failures in the operation of such systems can lead to serious financial and reputational losses.

The following measures can be taken to increase confidence in AI systems in the financial sector:

- ensuring transparency and explainability of the work of AI systems. Disclosure to users and regulators of the AI decision-making logic, algorithms used and data sources;

- implementation of mechanisms for auditing and monitoring the operation of AI systems by independent experts, which will allow confirming the correctness of their functioning;

- development and implementation of strict ethical standards and principles for the use of AI in the financial sector, this will create confidence that the systems will be used responsibly and in the interests of users;

- active involvement of users and regulators in the process of developing and implementing AI technologies.

The positive effects of increased confidence in AI systems in finance include:

- expanding the scope of AI technologies, increasing their penetration into critical areas;

- improving the quality and reliability of financial services, reducing risks;

- improving the competitiveness of financial institutions using AI;

- increased investments in the development of AI technologies for the financial sector.

Potential negative effects include:

- additional costs for financial organizations to ensure transparency and control over AI systems;

- possible difficulties in achieving full transparency and explainability of complex AI systems;

- the risk of users' distrust of innovations in the financial sector, conservatism.

In general, increasing confidence in AI in finance through comprehensive measures is an important and promising area that can bring significant benefits, but requires careful study.

At the same time, the relevance of the work is confirmed by the identification of directions for further research, among which the following can be distinguished:

- conducting a comprehensive analysis of threats and risks specific to the critical information infrastructure of the financial sector in order to identify priority areas for the development of protection methods;

- research of promising artificial intelligence technologies (machine learning, deep learning, neural networks) and their application to ensure the security of financial systems;

- development of methodologies for assessing the reliability and fault tolerance of intelligent protection systems for critical information infrastructure;

- study of approaches to ensuring transparency and explainability of the work of AI systems in the field of cybersecurity of financial organizations.

Thus, despite significant progress, the protection of critical infrastructure of the financial sector using intelligent methods remains an active interdisciplinary field of research that requires solving fundamental problems of ensuring security, reliability, interpretability and trust in artificial intelligence systems, which indicates the relevance of this fundamental research.

References
1. Gorbatov, V.S., Pisarev, A.V., Tsivilev, A.V., & Gryaznov, D.V. (2022). Cybersecurity of the Network Perimeter of a Critical Information Infrastructure Object. Information Security, 29(4), 12-26.
2. Zuev, V.N. (2021). Detection of Network Traffic Anomalies Using Deep Learning. Software Products and Systems, 34(1), 91-97.
3. Vulfin, A.M. (2023). Models and Methods for Comprehensive Risk Assessment of Critical Information Infrastructure Objects Based on Intelligent Data Analysis. Systems Engineering and Information Technologies, 4(13), 50-76.
4. Erokhin, S.D., & Petukhov, A.N. (2022). Architecture of Asymptotic Security Control of Critical Information Infrastructures. DSPA: Issues of Digital Signal Processing Application, 12(1), 18-30.
5. Vegesna, V.V. (2024). Machine Learning Approaches for Anomaly Detection in Cyber-Physical Systems: A Case Study in Critical Infrastructure Protection. International Journal of Machine Learning and Artificial Intelligence, 5(5), 1-13.
6. Selim, G.E.I., Hashish, S.A., Elkhatib, Y.A., Elkilani, W.S., & Ouda, A.S. (2021). Anomaly events classification and detection system in critical industrial internet of things infrastructure using machine learning algorithms. Multimedia Tools and Applications (pp. 12619-12640).
7. Pinto, A., Thapa, C., Paiva, S., & Dixit, S. (2023). Survey on intrusion detection systems based on machine learning techniques for the protection of critical infrastructure. Sensors (pp. 2415).
8. Aragonés Lozano, M., Pérez Llopis, I., & Esteve Domingo, M. (2023). Threat hunting system for protecting critical infrastructures using a machine learning approach. Mathematics (pp. 3448).
9. Raval, K.J., Solanki, V.K., Sharma, S., & Srivastava, G. (2023). A survey on safeguarding critical infrastructures: Attacks, AI security, and future directions. International Journal of Critical Infrastructure Protection (pp. 100647).
10. Alqudhaibi, A., Alassafi, M.O., Alwakeel, S.S., & Alharbi, S.M. (2023). Predicting cybersecurity threats in critical infrastructure for industry 4.0: a proactive approach based on attacker motivations. Sensors (pp. 4539).
11. Bochkov, M.V., & Vasinev, D.A. (2024). Modeling the Resilience of Critical Information Infrastructure Based on Hierarchical Hypernets and Petri Nets. Cybersecurity Issues, 1, 59.
12. Petrov, A.D., & Kharchenko, E.A. (2023). Morphological Method for Detecting Abnormal Server States. Bulletin of SibSUTIS, 18(1), 3-15.
13. Tsibizova, T.Yu., Panilov, P.A., & Kocheshkov, M.A. (2023). Monitoring the Security of the Information Protection System of Critical Information Infrastructure Based on Cognitive Modeling. Izvestiya of Tula State University. Technical Sciences, 6, 33-41.
14. Voyevodin, V.A., Lapshina, I.V., Gringauz, D.L., & Zainullin, R.R. (2023). Methodology for Assessing the Security of an Automated Control System of Critical Information Infrastructure Against DDoS Attacks Based on Monte Carlo Simulation. Bulletin of the Dagestan State Technical University. Technical Sciences, 50(1), 62-74.
15. Lyubukhin, A.S. (2023). Methods of Information Security Risk Analysis: Fuzzy Logic. International Journal of Open Information Technologies, 11(2), 66-71.
16. Zhang, Y., Wang, Y., Xu, C., Xue, Y., & Xu, B. (2024). Edge propagation for link prediction in requirement-cyber threat intelligence knowledge graph. Information Sciences (pp. 119770).
17. Rabzelj, M., Bohak, C., Južnič, L.Š., Kos, A., & Sedlar, U. (2023). Cyberattack Graph Modeling for Visual Analytics. IEEE Access (pp. 86910-86944).
18. Larionova, S.L. (2023). Mechanisms for Countering Fraud in Online Financial Services Systems. Financial Markets and Banks, 3, 47-52.
19. Berardi, D. et al. (2023). Time sensitive networking security: issues of precision time protocol and its implementation. Cybersecurity (pp. 8).
20. Kim, T., & Pak, W. (2023). Deep Learning-Based Network Intrusion Detection Using Multiple Image Transformers. Applied Sciences (pp. 2754).

First Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The reviewed article is devoted to the analysis of modern intelligent methods for protecting critical information infrastructure. The research methodology rests on a comprehensive analysis of the scientific literature on the use of intelligent methods and technologies to protect critical information infrastructure. The relevance of the work stems from the fact that the growing digitalization of the financial industry and the widespread introduction of innovative technologies open up new attack vectors against the critical information infrastructure of the financial sector, which plays a key role in ensuring the sustainable functioning of economic systems and the financial stability of states. The scientific novelty of the reviewed research, in the reviewer's view, lies in identifying the problems and unresolved tasks that require further research and development in the field of intelligent methods for protecting critical information infrastructure.

Structurally, the publication comprises the following sections: Introduction, Research materials and methods, Research results and their discussion, and Bibliography. Based on the generalization of the literature, the authors conclude that intelligent methods built on machine learning and artificial intelligence are key technologies for ensuring effective protection of the financial sector's critical infrastructure against modern cyber attacks and security threats. Among the problems requiring further research, the authors note, in particular, the need to develop visualization methods, to analyze neural network activations and extract interpretable rules from the "black boxes" of deep learning, and to devise methods for assessing confidence and uncertainty in the conclusions of intelligent threat detection systems. The bibliographic list includes 20 sources – recent publications by domestic and foreign authors in Russian and English on the topic of the article – with in-text references confirming an appeal to opponents.

Among the reserves for improving the publication, it should be noted that the literature review in the section "Research materials and methods" relies on monotonous phrasing (for example, "the article is devoted to ..." is repeated 7 times), and it is difficult to trace any logical connection between the publications under consideration. In effect, the description of each source is reduced to restating its title and abstract, and such a style of presentation can hardly be called successful. Unfortunately, apart from generalizing previously published works, no other methods of scientific inquiry are used in the article; there are no sample surveys or quantitative estimates. Nor are there specific developments aimed at solving the identified problems – the publication ends with a statement of unresolved issues, whereas a research paper customarily concludes by reporting which problems have been solved as a result of the work performed. The article reflects the results of the authors' research and corresponds to the scope of the journal "Security Issues", but the elements of scientific novelty and practical significance are not formulated in the publication.

Second Peer Review


The subject of the study. Judging by the title, the article should be devoted to the analysis of modern intelligent methods for protecting critical information infrastructure. Familiarization with the content shows that it does not contradict the stated topic.

The research methodology rests both on general scientific methods (analysis, synthesis, induction and deduction, graphical tools) and on specific ones (experimental verification using a mathematical apparatus). The wide range of methods applied leaves a positive impression of the peer-reviewed article.

The relevance of research on countering cyber attacks is beyond doubt, since this issue is currently one of the most important components of ensuring not only information security but also national security. The constant actions of unfriendly countries directed against the domestic information infrastructure indicate that risks of information loss exist. Therefore, high-quality scientific research on this topic will be in demand among a wide readership: these issues are in the focus of attention both of the public authorities of the Russian Federation and of specific organizations. At the same time, it is important to take into account that individual tools for countering threats should not be presented in detail in the open press, since representatives of unfriendly countries could familiarize themselves with them and devise circumvention mechanisms. The authors are recommended to assess in the text the possibility of publishing the results in the open press, including explaining why individual results of the scientific work cannot be presented.

Scientific novelty is present in the material submitted for review. In particular, it may be related to the results of the experimental test. It is valuable that the test was conducted on foreign data, which minimizes the risk that these results could be used to counter Russian practice in preventing cybersecurity threats by representatives of unfriendly countries.

Style, structure, content. The style of presentation is scientific. The structure built by the authors allows the stated topic to be revealed; however, a section with recommendations for solving the identified problems and further directions of research should be added. The potential readership expects to see in the article not only a list of existing problems but also specific, reasoned measures for solving them. In particular, the authors state that it is necessary to "Increase the level of trust in artificial intelligence systems on the part of users and regulators, which is key for their implementation in important areas such as finance." How did the authors come to the conclusion that this is key? How exactly should trust be increased? What positive and negative effects might the implementation of this proposal have?

Bibliography. The bibliographic list consists of 20 titles. It is valuable that it contains both domestic and foreign scientific publications. The presence of publications from 2024 in the list of sources also creates a positive impression.

Appeal to opponents. The authors' text partially contains elements of an appeal to opponents. It is also recommended to discuss the results obtained in the form of problems and recommendations for their solution, showing what the increase in scientific knowledge consists of.
Conclusions, the interest of the readership. Taking into account all of the above, we conclude that the article requires revision, after which the question of the expediency of its publication can be decided.

Third Peer Review


The subject of the peer-reviewed research is the intelligent methods for protecting the critical information infrastructure (hereinafter referred to as CII) of the financial sector. The authors rightly attribute the high relevance of their research to the progressive digitalization of social processes in general, and of the financial sector in particular, as well as to the extremely painful consequences that disruptions in the work of financial institutions have for the well-being of modern societies. The "intellectualization" of CII protection methods observed in recent years, associated with the expanding use of artificial intelligence, machine learning and data mining, lends additional importance to the peer-reviewed study.

As their basic research method the authors declare a "comprehensive analysis of scientific literature", but they are clearly not limited to it. At a minimum, the results of the literature analysis were verified experimentally; moreover, the experiment itself was conducted using deep learning methods (as the authors themselves state). Despite a certain conciseness of presentation resulting from the chosen methodology, the experimental verification of the conclusions drawn from the literature gives the reviewed article the character of a scientific work rather than an ordinary summary.

Accordingly, one can speak of the scientific novelty of the results obtained in the course of the study. First of all, the general conclusion that intelligent methods are today becoming key technologies for ensuring effective protection against modern cyber attacks and security threats should be noted. The identified problems and unresolved tasks in the field under study are also of scientific interest. Finally, the results of the experiment, which established that neural networks are more effective than traditional methods in solving CII protection tasks, deserve the attention of the scientific community.

Structurally, the reviewed article likewise raises no significant complaints: its logic is consistent and reflects the main aspects of the conducted research. The following sections are highlighted in the text:
- "Introduction", where the scientific task is set and its relevance is argued;
- "Research materials and methods", which declares (but, unfortunately, does not substantiate) the methodological basis of the study and analyzes the scientific literature on the topic;
- "Experimental verification", where the conclusions obtained from the scientific literature are verified;
- "Results and discussion", where the results of the study are summarized, conclusions are drawn and prospects for further research are outlined.

The style of the article is scientific and analytical. The text is written competently, in good Russian, with correct use of scientific terminology. The bibliography includes 20 titles, including sources in foreign languages, and adequately reflects the state of research on the subject of the article. An appeal to opponents takes place in the analysis of the scientific literature. Among the advantages of the article (in addition to the experimental verification of the results, which, unfortunately, is not so common), one can note the use of illustrative material, which significantly simplifies the perception of the authors' arguments.

GENERAL CONCLUSION: the article proposed for review can be qualified as a scientific work that meets the basic requirements for works of this kind. The results obtained by the authors will be of interest to sociologists, economists, specialists in the field of information security, and students of the listed specialties. The presented material corresponds to the scope of the journal "Security Issues". Based on the results of the review, the article is recommended for publication.