Telecommunication systems and computer networks
Reference:
Shabrova, A.S., Knyazev, M.A., Kolesnikov, A.V. (2025). Dynamic RACH-Slot Allocation for Collision Minimization in NB-IoT Networks Based on Reinforcement Learning Algorithms. Software systems and computational methods, 2, 1–11. https://doi.org/10.7256/2454-0714.2025.2.73848
Abstract:
The subject of this research is the adaptive management of access to Random Access Channels (RACH) in Narrowband Internet of Things (NB-IoT) networks, which frequently face congestion due to high device density and limited channel capacity. The study focuses on the practical application of Reinforcement Learning algorithms, specifically Q-learning and Deep Q-Network (DQN), to address this issue. The authors thoroughly examine the problem of RACH overload and the resulting collisions that cause delays in data transmission and increased energy consumption in connected devices. The article analyzes the limitations and inefficiency of traditional static slot allocation methods and justifies the necessity of implementing a dynamic, learning-based approach capable of adapting to constantly changing network conditions. The research aims to significantly minimize collision rates, improve connection success rates, and reduce the overall energy consumption of NB-IoT devices. The research methodology involved the use of advanced machine learning methods, including Q-learning and DQN, together with simulation modeling conducted in the NS-3 environment, integrating a dedicated RL-agent for dynamic and intelligent RACH slot allocation. The main conclusions of the study highlight the demonstrated effectiveness of the adaptive RL-based approach for optimizing access to communication slots in NB-IoT networks. The scientific novelty lies in the development and integration of a specialized RL-agent capable of dynamically managing slot distribution based on real-time network conditions. As a result of implementing the proposed approach, the number of collisions was reduced by 74%, the number of successful connections increased by 16%, and the energy efficiency of the devices improved by 15% in comparison with traditional static methods. These results clearly demonstrate the practical applicability, and scalability of adaptive RL-based management techniques for enhancing both the performance and reliability of real-world NB-IoT networks.
Keywords:
Internet of Things, IoT, RACH, reinforcement learning, NS-3, collisions, DQN, Q-learning, Reinforcement Learning, NB-IoT
Software for innovative information technologies
Reference:
Karpovich, V.D., Gosudarev, I.B. (2025). WebAssembly performance in the Node.js environment. Software systems and computational methods, 2, 12–34. https://doi.org/10.7256/2454-0714.2025.2.74049
Abstract:
Modern runtime environments such as browsers, Node.js, and others provide developers with tools that go beyond traditional JavaScript. This study focuses on a modern approach to building web applications where components written in different programming languages can be executed and shared using WebAssembly. The subject of the research is the testing and analysis of performance benchmarks comparing JavaScript and WebAssembly modules in the Node.js runtime. The focus is on evaluating performance in computational tasks, memory interaction, data processing, and cross-language communication. The author thoroughly explores topics such as WebAssembly integration in applications, its advantages for resource-intensive tasks like image processing, and the objectivity, representativeness, and reproducibility of the tests. The work follows an applied, experimental approach. It includes performance comparisons between pure JavaScript and WebAssembly modules. Metrics like response time and system resource consumption were used to assess efficiency. The scientific novelty of this work lies in the development and theoretical grounding of testing approaches for web applications using WebAssembly. Unlike most existing studies focused on WebAssembly's performance and security in browser environments, this work emphasizes automated testing of WebAssembly modules outside the browser — a relatively unexplored area until now. A methodological approach is proposed for testing WebAssembly modules in Node.js, including principles for test structuring, integration with JavaScript components, and execution analysis. This approach takes into account the specifics of the server environment, where WebAssembly is increasingly used — particularly for high-load computational modules, cross-language logic, and secure isolated execution. The novelty also lies in defining criteria to evaluate whether certain application components are suitable for migration to WebAssembly in terms of testability, providing developers with a tool for making architectural decisions. The proposed ideas are backed by experimental results, including test case implementations for WebAssembly and JavaScript interaction scenarios.
Keywords:
compilation and interpretation, Algorithmic optimization, experimental analysis, server-side processing, technology integration, images processing, heavy processing tasks, performance optimization, JavaScript, WebAssembly
Operating systems
Reference:
Bondarenko, O.S. (2025). Analysis of DOM update methods in modern web frameworks: Virtual DOM and Incremental DOM. Software systems and computational methods, 2, 35–43. https://doi.org/10.7256/2454-0714.2025.2.74172
Abstract:
The article presents an analysis of modern methods for updating the Document Object Model (DOM) structure in popular client-side web frameworks such as Angular, React and Vue. The main focus is on comparing the concepts of Virtual DOM and Incremental DOM, which underlie the architectural solutions of the respective frameworks. The Virtual DOM used in React and Vue operates on a virtual tree, compares its versions in order to identify differences and minimize changes in the real DOM. This approach provides a relatively simple implementation of the reactive interface, but comes with additional costs for computing and resource usage. In contrast, Angular uses an Incremental DOM, which does not create intermediate structures: changes are applied directly through the Change Detection mechanism. This approach allows to achieve high performance through point updates of DOM elements without the need for a virtual representation. The study uses a comparative analysis of architectural approaches to updating the DOM, based on the study of official documentation, practical experiments with code and visualization of rendering processes in Angular and React. The methodology includes a theoretical justification, a step-by-step analysis of the update mechanisms and an assessment of their impact on performance. The scientific novelty of the article lies in the systematic comparison of architectural approaches to updating the DOM in leading frameworks, with an emphasis on the implementation of the signal model in Angular version 17+. The impact of using signals on the abandonment of the Zone library is analyzed in detail.js and the formation of a more predictable, deterministic rendering model, as well as lower-level performance management capabilities. The article contains not only a theoretical description, but also practical examples that reveal the behavior of updates in real-world scenarios. The nuances of template compilation, the operation of the effect() and computed() functions are also considered. The comparison of Virtual DOM and Incremental DOM makes it possible to identify key differences, evaluate the applicability of approaches depending on the tasks and complexity of the project, and also suggest ways to optimize frontend architect
Keywords:
diffing, rendering, signals, Vue, React, Angular, Incremental DOM, Virtual DOM, DOM, template compilation
Forms and methods of information security administration
Reference:
Krohin , A.S., Gusev, M.M. (2025). Analysis of the impact of prompt obfuscation on the effectiveness of language models in detecting prompt injections. Software systems and computational methods, 2, 44–62. https://doi.org/10.7256/2454-0714.2025.2.73939
Abstract:
The article addresses the issue of prompt obfuscation as a means of circumventing protective mechanisms in large language models (LLMs) designed to detect prompt injections. Prompt injections represent a method of attack in which malicious actors manipulate input data to alter the model's behavior and cause it to perform undesirable or harmful actions. Obfuscation involves various methods of changing the structure and content of text, such as replacing words with synonyms, scrambling letters in words, inserting random characters, and others. The purpose of obfuscation is to complicate the analysis and classification of text in order to bypass filters and protective mechanisms built into language models. The study conducts an analysis of the effectiveness of various obfuscation methods in bypassing models trained for text classification tasks. Particular attention is paid to assessing the potential implications of obfuscation for security and data protection. The research utilizes different text obfuscation methods applied to prompts from the AdvBench dataset. The effectiveness of the methods is evaluated using three classifier models trained to detect prompt injections. The scientific novelty of the research lies in analyzing the impact of prompt obfuscation on the effectiveness of language models in detecting prompt injections. During the study, it was found that the application of complex obfuscation methods increases the proportion of requests classified as injections, highlighting the need for a thorough approach to testing the security of large language models. The conclusions of the research indicate the importance of balancing the complexity of the obfuscation method with its effectiveness in the context of attacks on models. Excessively complex obfuscation methods may increase the likelihood of injection detection, which requires further investigation to optimize approaches to ensuring the security of language models. The results underline the need for the continuous improvement of protective mechanisms and the development of new methods for detecting and preventing attacks on large language models.
Keywords:
fuzzing, AI security, transformers, encoder, adversarial attacks, AI, jailbreak, obfuscation, prompt injection, LLM
Software for innovative information technologies
Reference:
Ratushniak, E.A. (2025). Research on performance in modern client-side web-frameworks. Software systems and computational methods, 2, 63–78. https://doi.org/10.7256/2454-0714.2025.2.74392
Abstract:
The subject of the study is the comparative rendering performance of three modern frameworks — React, Angular, and Svelte — in typical scenarios of building and updating user interfaces in web applications. The object of the study is the frameworks themselves as complexes of technological solutions, including change detection mechanisms, virtual or compiled DOM structures, and accompanying optimizations. The author thoroughly examines aspects of the topic such as initial and subsequent rendering, element update and deletion operations, and working with linear and deeply nested data structures. Special attention is paid to the practical significance of choosing a framework for commercial products, where performance differences directly impact conversion, user experience, and the financial efficiency of the project. Key internal mechanisms are described — React's virtual DOM, Angular's change detector, and Svelte's compiled code — which determine their behavior in various load scenarios. The methodology is based on an automated benchmark: a unified set of test scenarios is executed by client applications on React, Angular, and Svelte, a reference JavaScript solution, and an Express JS orchestrator server; operation times are recorded using performance.now() in Chrome 126, with Time To First Paint as the performance criterion. The novelty of the research lies in the comprehensive laboratory comparison of the three frameworks across four critically important scenarios (initial rendering, subsequent rendering, updating, and deleting elements) considering two types of data structures and referencing the current versions of 2025. The main conclusions of the study are as follows: Svelte provides the lowest TTFP and leads in deep hierarchy scenarios due to the compilation of DOM operations; React shows better results in re-rendering long lists, using an optimized diff algorithm and element keys; Angular ensures predictability and architectural integrity but increases TTFP by approximately 60% due to the change detector. There is no universal leader; a rational choice should rely on the analytical profile of the operations of a specific application, which is confirmed by the results of the presented experiment.
Keywords:
web interfaces, performance, incremental DOM, virtual DOM, Core Web Vitals, Svelte, Angular, React, JavaScript frameworks, rendering
Mathematical models and computer simulation experiment
Reference:
Chikaleva, Y.S. (2025). Analysis of Microservices Granularity: Effectiveness of Architectural Approaches. Software systems and computational methods, 2, 79–93. https://doi.org/10.7256/2454-0714.2025.2.74386
Abstract:
Modern information systems require scalable architectures for processing big data and ensuring availability. Microservice architecture, based on decomposing applications into autonomous services focused on business functions, addresses these challenges. However, the optimal granularity of microservices impacts performance, scalability, and manageability. Suboptimal decomposition leads to anti-patterns, such as excessive fineness or cosmetic microservice architecture, complicating maintenance. The aim of the study is a comparative analysis of methods for determining the granularity of microservices to identify approaches that provide a balance of performance, flexibility, and manageability in high-load systems. The object of the study is the microservice architecture of high-load systems. The subject of the research is the comparison of granularity methods, including monolith, DDD, Data-Driven Approach, Monolith to Microservices, and their impact on the system. The study employs an experimental approach, including the implementation of a Task Manager application in four architectural configurations. Load testing was conducted using Apache JMeter under a load of 1000 users. Performance metrics (response time, throughput, CPU), availability, scalability, security, and consistency were collected via Prometheus and processed to calculate averages and standard deviations. The scientific novelty lies in the development of a methodology for comparative analysis of decomposition methods using unified metrics adapted for high-load systems, setting this study apart from works that focus on qualitative assessments. The results of the experiment showed that the monolithic architecture provides the minimum response time (0.76 s) and high throughput (282.5 requests/s) under a load of 1000 users, but is limited in scalability. The Data-Driven Approach ensures data consistency, DDD is effective for complex domains, while Monolith to Microservices demonstrates low performance (response time 15.99 s) due to the overload of the authorization service. A limitation of the study is the use of a single host system (8 GB RAM), which may restrict the scalability of the experiment. The obtained data are applicable for designing architectures of high-load systems. It is recommended to optimize network calls in DDD (based on response time of 1.07 s), data access in Data-Driven (response time of 5.49 s), and to carefully plan decomposition for Monolith to Microservices to reduce the load on services (response time of 15.99 s).
Keywords:
experimental analysis, data consistency, scalability, performance, Monolith to Microservices, Data-Driven Approach, Domain-Driven Design, granularity of microservices, microservice architecture, high-load systems