Reference:
Skacheva N.V.
Analysis of Idioms in Neural Machine Translation: A Data Set
// Software systems and computational methods.
2024. № 3.
P. 55-63.
DOI: 10.7256/2454-0714.2024.3.71518 EDN: JLJDSL URL: https://en.nbpublish.com/library_read_article.php?id=71518
Abstract:
For decades, the public has debated whether a "machine can replace a person," and the field of translation is no exception. While some keep arguing, others are making the dream come true, so a growing body of research now aims at improving machine translation (MT) systems. To understand the advantages and disadvantages of MT systems, one must first understand their algorithms. At present, the main open problem of neural machine translation (NMT) is the translation of idiomatic expressions. The meaning of such expressions is not composed of the meanings of their constituent words, and NMT models tend to translate them literally, which leads to confusing and meaningless translations. Research on idioms in NMT is limited and difficult due to the lack of automatic methods. Thus, although modern NMT systems generate increasingly high-quality translations, the translation of idioms remains one of the unsolved tasks in this area. Idioms, as a category of multiword expressions, represent an interesting linguistic phenomenon in which the overall meaning of an expression cannot be assembled from the meanings of its parts. The first important problem is the lack of dedicated data sets for training and evaluating idiom translation. In this paper, we address this problem by creating the first large-scale dataset for idiom translation. The dataset is automatically extracted from a German translation corpus and includes a targeted set in which all sentences contain idioms, and a regular training corpus in which sentences containing idioms are marked. We have released this dataset and use it to conduct preliminary NMT experiments as a first step towards improving the translation of idioms.
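The corpus-marking step described above can be illustrated with a minimal sketch: sentences are flagged when they contain a known idiom. The idiom list and sentences below are made-up examples, not the article's actual data, and the naive substring matcher deliberately shows why inflected idiom forms make automatic detection hard.

```python
# Sketch of marking sentences that contain idioms from a fixed list.
# Matching is naive lowercased substring search, so inflected forms
# ("kicked the bucket") are missed -- illustrating the difficulty.

def mark_idioms(sentences, idioms):
    """Return (sentence, matched_idioms) pairs."""
    marked = []
    for s in sentences:
        low = s.lower()
        found = [i for i in idioms if i in low]
        marked.append((s, found))
    return marked

idioms = ["kick the bucket", "spill the beans"]
sentences = [
    "Don't spill the beans before the party.",
    "The old tractor finally kicked the bucket.",  # inflected: not matched
    "He bought a bucket of paint.",
]
result = mark_idioms(sentences, idioms)
```

Only the first sentence is matched literally; a real extraction pipeline would need lemmatization or pattern matching to catch the second.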
Keywords:
Russian, German, Neural Machine Translation, machine translation, bilingual corpora, idioms, multiword expression, language pairs, systems, Data Set
Reference:
Sklyar A.Y.
Numerical methods for finding the roots of polynomials with real and complex coefficients
// Software systems and computational methods.
2024. № 3.
P. 64-76.
DOI: 10.7256/2454-0714.2024.3.71103 EDN: KTJPCE URL: https://en.nbpublish.com/library_read_article.php?id=71103
Abstract:
The subject of the article is a set of algorithms for numerically finding the roots of polynomials, primarily complex roots, based on methods for approximately decomposing the initial polynomial into factors. While the numerical finding of real roots usually causes no difficulty, finding complex roots raises a number of problems. The article proposes a set of algorithms that sequentially find the multiple roots of a polynomial with real coefficients, then its real roots (by isolating intervals that potentially contain roots and discarding those that certainly do not), and finally its complex roots. To find complex roots, the original polynomial is iteratively approximated by the product of a trinomial and a polynomial of lower degree, after which the tangent (Newton) method is applied in the complex domain in the vicinity of the roots of the resulting trinomial. To find the roots of a polynomial with complex coefficients, an equivalent problem with real coefficients is solved. The tasks are implemented by step-by-step application of the set of algorithms: after each stage, a group of roots is extracted and the same problem is solved for a polynomial of lower degree. The sequence of the proposed algorithms makes it possible to find all real and complex roots of the polynomial.
For a polynomial with real coefficients, the algorithm comprises the following main steps: determining multiple roots with a corresponding reduction of the polynomial's degree; bounding the range of the roots; finding intervals guaranteed to contain roots and extracting those roots, after which only pairs of complex conjugate roots remain; iteratively constructing trinomials that estimate the values of such pairs with the minimal accuracy sufficient for their localization; and, finally, searching for the roots in the complex domain by the tangent method. The computational complexity of the proposed algorithms is polynomial and does not exceed the cube of the polynomial's degree, which makes it possible to obtain a solution for practically any polynomial arising in real problems. Beyond polynomial equations themselves, the field of application includes optimization, differential equations, and optimal control problems that can be reduced to them.
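The final refinement stage mentioned above, the tangent (Newton) method in the complex domain, can be sketched as follows. This is only an illustration of that one stage under simple assumptions, not the authors' full staged algorithm.

```python
# Tangent (Newton) method in the complex domain for one root of a
# polynomial given by its coefficients (highest degree first).

def poly_eval(coeffs, x):
    """Evaluate p(x) by Horner's scheme."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def poly_deriv(coeffs):
    """Coefficients of p'(x), highest degree first."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Refine a (possibly complex) root estimate x0 of p(x)."""
    d = poly_deriv(coeffs)
    x = x0
    for _ in range(max_iter):
        dfx = poly_eval(d, x)
        if dfx == 0:
            break
        step = poly_eval(coeffs, x) / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# p(x) = x^2 + 1 has roots +/-i; start from a rough complex estimate
# (in the article's scheme, the estimate would come from a trinomial).
root = newton_root([1, 0, 1], 0.5 + 1.2j)
```

Starting near one root of the trinomial approximation, Newton iteration converges quadratically to the exact complex root.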
Keywords:
recursive algorithms, roots of polynomials, conjugate complex roots, algebraic equation, numerical algorithms, numerical methods, iterative methods, root finding, polynomials, root localization
Reference:
Filippova K.A., Ayusheev T.V., Damdinova T.T., Tsidipov T.T.
Investigation of the stress–strain state of a composite blade in ANSYS WorkBench
// Software systems and computational methods.
2024. № 2.
P. 41-52.
DOI: 10.7256/2454-0714.2024.2.70712 EDN: XDTLCG URL: https://en.nbpublish.com/library_read_article.php?id=70712
Abstract:
In this paper, the static strength of a UAV blade made of composite material is calculated. In aviation, composite materials have advantages over traditional metals and alloys: weight savings, low sensitivity to damage, high rigidity, and high mechanical characteristics. At the same time, identifying vulnerabilities in a layered structure is a difficult task that in practice is solved by destructive testing. The modeling used composite materials available in the ANSYS material library: Epoxy Carbon Woven (230 GPa) Prepreg, a woven carbon fabric pre-impregnated with epoxy resin, with Young's modulus E = 230 GPa, and Epoxy Carbon (230 GPa) Prepreg, a unidirectional carbon-fiber prepreg impregnated with epoxy resin, also with E = 230 GPa. Modern software products such as ANSYS WorkBench allow comprehensive investigation of a layered structure. Several blade designs with different fillers as the core material were investigated, using forward and reverse failure criteria based on the Tsai-Hill theory; the influence of gravity was not taken into account. It is shown that the developed blade design meets the requirements. Balsa wood, pine, aspen, and polyurethane foam were chosen as the core material of the blade; pine and aspen were selected for their availability and low density. The material library of the ANSYS WorkBench package does not contain characteristics for all of these materials, so the characteristics of pine and aspen were added manually. Modeling and calculation in ANSYS WorkBench require characteristics such as density, axial elastic moduli, Poisson's ratios, shear moduli, and tensile and compressive strength limits.
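The Tsai-Hill criterion referenced above can be sketched for the plane-stress case of a single orthotropic ply: failure is predicted when the index reaches 1. The strength values below are illustrative placeholders, not material data from the article.

```python
# Plane-stress Tsai-Hill failure index for one orthotropic ply.
# X, Y, S are placeholder strengths, not data from the article.

def tsai_hill_index(s1, s2, t12, X, Y, S):
    """s1, s2: normal stresses along/across the fibers; t12: in-plane
    shear stress; X, Y: axial/transverse strengths; S: shear strength."""
    return (s1 / X) ** 2 - (s1 * s2) / X ** 2 + (s2 / Y) ** 2 + (t12 / S) ** 2

# A ply loaded purely along the fibers to half its axial strength:
index = tsai_hill_index(s1=600.0, s2=0.0, t12=0.0, X=1200.0, Y=50.0, S=70.0)
```

Here the index is 0.25 < 1, so this load case is predicted safe; ANSYS evaluates the same kind of index ply by ply over the layered model.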
Keywords:
middle fillers, fiberglass, carbon fabrics, Tsai-Hill theory, failure criterion, stress, ANSYS WorkBench, static strength, blade, composite material
Reference:
Kovalev S., Smirnova T., Filippov V., Andreeva A.
About Modeling Digital Twins of a Social Group
// Software systems and computational methods.
2022. № 4.
P. 75-83.
DOI: 10.7256/2454-0714.2022.4.39264 EDN: MPUQIE URL: https://en.nbpublish.com/library_read_article.php?id=39264
Abstract:
The object of the study is mathematical modeling methods; the subject is the application of mathematical methods to modeling digital twins of a social group. The aim of the work is to model digital twins of a social group. A digital twin is a digital copy of a physical object or process; with the right approach, it helps to improve core and auxiliary business processes. The concept is part of the fourth industrial revolution and is intended to help detect problems faster, find out what would happen to the original under different conditions and, as a result, produce better products. The article considers some applied aspects and presents the main provisions of a mathematical theory of digital twins of social groups. To create a digital twin of a social group (a student group), the authors propose population-based algorithms as one of the tools. The novelty of the research lies in applying a particle swarm algorithm to modeling digital twins of a social group; the particle swarm method was chosen as the research tool. Just as the social group under study finds an optimal position in space, the corresponding element of the digital twin, a particle swarm model, can search a space, in particular for the extrema of functions, which is applicable, for example, to finding the minimum of a loss function in machine learning. Graphical simulation was performed in JavaScript using the three.js library. Data processing used the C# Job System, which parallelizes computing processes and is integrated into the Entity Component System. A program was implemented that simulates the activity of a student group as one of the constituent elements of a digital twin of a social group. Swarm algorithms are promising for practical application: on their basis it is possible not only to solve digital-twin problems, but also to control groups of robots, robotic systems, and complexes.
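The particle swarm method named above can be sketched in a few lines: a swarm of candidate positions is attracted toward each particle's best-known point and the swarm-wide best point. The parameter values are conventional defaults, not those used by the authors, and the objective here is a toy one-dimensional function.

```python
import random

# Minimal particle swarm optimization: find the minimum of f(x) = x^2.

def pso(f, n_particles=20, n_iters=100, bounds=(-10.0, 10.0), seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                  # each particle's best-known position
    gbest = min(pbest, key=f)      # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia and attraction weights
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: x * x)
```

The same loop generalizes to vectors, which is how it can be used to minimize a machine learning loss function.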
Keywords:
library Job System, population algorithms, JavaScript language, boids model, swarm algorithms, mathematical model, social group, digital twin, search algorithm, model parallelization
Reference:
Damdinova T.T., Ayusheev T.V., Balzhinimaeva S.M., Abatnin A.A.
Modeling of bodies with spherical pores by generalized linear interpolation
// Software systems and computational methods.
2022. № 2.
P. 42-51.
DOI: 10.7256/2454-0714.2022.2.38262 EDN: ZTFTKU URL: https://en.nbpublish.com/library_read_article.php?id=38262
Abstract:
The article describes parametric objects with spherical pores using generalized linear interpolation. The growing volume of high-resolution image data requires algorithms capable of processing large images at reduced computational cost. Numerical data on the pore geometry of the object under study are transformed into the geometry of bodies composed of octagonal portions of cubic form. Parametric porous objects can model both the shape and the isoparametric interior. Parametric bodies of this type are often used as initial or boundary conditions in numerical modeling to represent internal structure. To form a body of complex shape, parametric solid elements can be joined together; continuity between the elements can be ensured in the same way as when modeling cubic parametric splines. Much research is devoted to reconstructing the geometric structure of porous materials from digital images of objects for a better understanding and representation of physical processes in a porous medium. A detailed understanding of the microstructure can be used to determine physical properties and then to evaluate and improve the characteristics of the simulated objects and the processes in them. The article presents the results of the proposed algorithm in the MathCAD environment and the software processing of a porous body based on digital images.
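The Coons-style generalized linear interpolation mentioned in the keywords can be sketched in two dimensions: a surface point is blended from four boundary curves, with the bilinear corner term subtracted so the boundaries are reproduced exactly. The boundary curves below are simple illustrative lambdas, not the article's pore geometry.

```python
# Bilinearly blended Coons interpolation from four boundary curves:
# c0(u), c1(u) (bottom/top) and d0(v), d1(v) (left/right), u, v in [0,1].

def coons(c0, c1, d0, d1, u, v):
    # Linear blend of each pair of opposite boundary curves ...
    ruled_u = tuple((1 - v) * a + v * b for a, b in zip(c0(u), c1(u)))
    ruled_v = tuple((1 - u) * a + u * b for a, b in zip(d0(v), d1(v)))
    # ... minus the bilinear interpolation of the four corner points.
    corners = tuple((1 - u) * (1 - v) * p + u * (1 - v) * q
                    + (1 - u) * v * r + u * v * s
                    for p, q, r, s in zip(c0(0), c0(1), c1(0), c1(1)))
    return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, corners))

# A flat unit square expressed via its four straight edges:
c0 = lambda u: (u, 0.0)   # bottom edge
c1 = lambda u: (u, 1.0)   # top edge
d0 = lambda v: (0.0, v)   # left edge
d1 = lambda v: (1.0, v)   # right edge
center = coons(c0, c1, d0, d1, 0.5, 0.5)
```

The same construction extends to three parameters for solid (volume) elements, which is the setting the article works in.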
Keywords:
OpenSCAD, MathCAD, boolean operations, spherical pores, digital image, parametric splines, the Coons method, linear interpolation, porous bodies, geometric modeling
Reference:
Pekunov V.V.
Design of K-W-NET model of turbulence based on K-W/V2-F models with the neural network component
// Software systems and computational methods.
2021. № 3.
P. 52-58.
DOI: 10.7256/2454-0714.2021.3.36054 URL: https://en.nbpublish.com/library_read_article.php?id=36054
Abstract:
The subject of this article is turbulence models based on introducing neural network components into widespread standard semi-empirical models. It is shown that this technique achieves a significant acceleration of calculation while maintaining sufficient accuracy and stability, by training the neural network components on data acquired with fairly accurate advanced models and by replacing or complementing separate fragments of the initial models with such components. An overview of the existing classical approaches to turbulence modeling is given, which identifies the V2-F model suggested by Durbin as one of the most advanced and thereby most promising for subsequent neural network modification. The author proposes a new turbulence model based on the K-W model paired with a neural network component trained against the V2-F Durbin model; all necessary relations are provided. The properties of the obtained model are examined in a numerical experiment on the flow over a single obstacle. The results are compared with data from other semi-empirical models (K-E, K-W) and from a direct neural network model. It is demonstrated that the proposed model, at a lower computational cost than the other models (except the direct neural network model, which, however, is less accurate), provides accuracy close to that of the Durbin model.
Keywords:
aerodynamics, flow near obstacle, feed-forward network, numerical experiment, continuous media mechanics, Durbin's model, neural network, turbulence model, calculation speedup, semi-empirical models
Reference:
Demichev M.S., Gaipov K.E., Demicheva A.A., Faizulin R.F., Malyshev D.O.
Frequency scheduling algorithm with the allocation of the main and additional frequency bands.
// Software systems and computational methods.
2021. № 2.
P. 36-62.
DOI: 10.7256/2454-0714.2021.2.35214 URL: https://en.nbpublish.com/library_read_article.php?id=35214
Abstract:
The subject of this research is a frequency planning algorithm for networks with an arbitrary topology of radio-channel links. The algorithm determines the total number of non-overlapping frequency ranges for the entire network and distributes the frequency ranges among the communication nodes. The algorithm consists of two stages: at the first stage, frequency channels, the so-called main frequency range, are found and simultaneously distributed, so that each node is allocated exactly one frequency range; at the second stage, additional frequency channels that can be used by a separate subset of nodes are found, so that some nodes may use not just one frequency range but several at once. The novelty of this research lies in the developed frequency planning algorithm for wireless communication systems with an arbitrary topology of radio-channel links. For the wireless communication system, the algorithm allocates radio frequencies to the communication nodes from the common frequency band assigned to the system, with frequency reuse that eliminates interference. For the communication nodes, the result is the allocation of a main and an additional frequency band that takes the radio network topology into account and can be used by a separate subset of nodes, which makes the wireless communication system resistant to narrowband random interference.
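Assigning one main channel per node so that no two interfering nodes share a channel resembles greedy graph coloring. The sketch below is an illustrative analogue, not the authors' two-stage algorithm, and the topology is a made-up example.

```python
# Greedy channel assignment: each node gets the smallest channel index
# not already used by an interfering (adjacent) neighbor.

def assign_channels(adjacency):
    channels = {}
    for node in sorted(adjacency):
        taken = {channels[n] for n in adjacency[node] if n in channels}
        ch = 0
        while ch in taken:
            ch += 1
        channels[node] = ch
    return channels

# Four nodes in a line: each interferes only with its direct neighbors.
topology = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
plan = assign_channels(topology)
```

Neighboring nodes always receive different channels, and non-adjacent nodes can reuse the same channel, which is the frequency-reuse idea the abstract describes.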
Keywords:
Communication center, Matrix, Algorithm, Communication channel, Frequency band, Radio communication, Frequency planning, Reservation, Dynamic allocation, Static distribution
Reference:
Dubanov A.A.
Modeling of trajectory of the pursuer in space using the method of constant-bearing approach
// Software systems and computational methods.
2021. № 2.
P. 1-10.
DOI: 10.7256/2454-0714.2021.2.36014 URL: https://en.nbpublish.com/library_read_article.php?id=36014
Abstract:
This article examines a model of the pursuit problem in which the pursuer, while moving in space, adheres to the strategy of constant-bearing approach. The speed moduli of the pursuer and the target are constant. For definiteness of the model, the target moves uniformly and in a straight line, since the test program is written from the materials of the article. The velocity vectors of the target and the pursuer at the beginning of the pursuit are directed arbitrarily. The iterative process consists of three parts: calculation of the pursuer's trajectory in space, calculation of the pursuer's trajectory in a plane, and calculation of the transition of the trajectory from space to a plane. All parts of the iterative process must satisfy the conditions specified in the problem; an important condition is that the minimum radius of curvature of the trajectory must not fall below a certain set value. The scientific novelty of the geometric model lies in the possibility of regulating the time to reach the target by changing the length of the pursuer's trajectory and the orientation of the plane of pursuit. The pursuer's next position in space is calculated as the point of intersection of a sphere, a cone, and the plane of constant-bearing approach. The plane of constant-bearing approach is perpendicular to the plane of pursuit; in the model under review, the plane of pursuit is determined by the target's velocity vector and the straight line connecting the pursuer and the target (the sight line). The radius of the sphere equals the pursuer's step over one of the time intervals into which the time of the iterative process is divided, and the opening angle of the cone is the angle by which the pursuer's velocity vector can turn. The mathematical model presented in the article may be of interest to developers of unmanned aerial vehicles.
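A planar version of one common interpretation of the constant-bearing (parallel approach) step can be sketched as follows: the pursuer matches the target's velocity component perpendicular to the line of sight and spends the rest of its speed closing along it. The speeds, positions, and step size are made-up example values, and the article's full 3D sphere/cone/plane construction with the curvature constraint is not reproduced here.

```python
import math

# One 2D parallel-approach pursuit step: match the target's velocity
# component perpendicular to the line of sight, close along the rest.

def pursuit_step(p, t, v_target, speed_p, dt):
    lx, ly = t[0] - p[0], t[1] - p[1]
    dist = math.hypot(lx, ly)
    ux, uy = lx / dist, ly / dist               # unit line-of-sight vector
    along = v_target[0] * ux + v_target[1] * uy
    perp_x, perp_y = v_target[0] - along * ux, v_target[1] - along * uy
    perp2 = perp_x ** 2 + perp_y ** 2
    close = math.sqrt(max(speed_p ** 2 - perp2, 0.0))  # closing speed
    vx, vy = perp_x + close * ux, perp_y + close * uy
    return (p[0] + vx * dt, p[1] + vy * dt)

# Target moves evenly and straight; the faster pursuer intercepts it.
p, t, v_t = (0.0, 0.0), (10.0, 5.0), (0.0, 1.0)
for _ in range(2000):
    p = pursuit_step(p, t, v_t, speed_p=2.0, dt=0.01)
    t = (t[0] + v_t[0] * 0.01, t[1] + v_t[1] * 0.01)
    if math.hypot(t[0] - p[0], t[1] - p[1]) < 0.05:
        break
caught = math.hypot(t[0] - p[0], t[1] - p[1]) < 0.05
```

Under this strategy the line of sight translates parallel to itself, so a faster pursuer closes on the target at a constant rate.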
Keywords:
Parallel, Attainment, Cone, Trajectory, Line, Curvature, Target, Pursuer, Pursuit, Plane
Reference:
Selishchev I.A., Oleinikova S.A.
Mathematical model and algorithm for solving the problem of planning the operation of multiphase systems with heterogeneous resources and time limits
// Software systems and computational methods.
2021. № 1.
P. 35-45.
DOI: 10.7256/2454-0714.2021.1.35005 URL: https://en.nbpublish.com/library_read_article.php?id=35005
Abstract:
The object of this research is modern service and production systems whose specific functioning consists of a set of sequential and parallel operations of random duration. A fundamental peculiarity of such systems is the stochastic duration of a single operation, which depends not only on external random factors but also on the choice of resources, namely the operator. This justifies solving in parallel the task of scheduling mutually dependent operations and the task of assigning operators. Under resource and time constraints this task is NP-hard and requires algorithms that produce a near-optimal solution in limited time. To develop the mathematical and algorithmic software for this task, the author used the critical path method and the PERT method, the rolling wave planning method, and methods for solving assignment problems. As a result, a mathematical model was obtained that accounts for the stochastic duration of a single operation, depending both on random factors and on the operators. On the basis of this model, an optimization task is formulated that finds the launch times and the corresponding operators that yield the most profit. Based on an analysis of existing approaches and the specifics of the task, the author proposes an algorithm for solving it founded on successive refinement of the time characteristics of operations.
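The classical PERT and critical path ingredients named above can be sketched for a tiny made-up network of dependent operations; this illustrates the textbook methods, not the authors' combined scheduling-and-assignment algorithm.

```python
# PERT expected durations plus a forward pass that computes the earliest
# finish time of each operation in a small precedence network.

def pert_mean(optimistic, most_likely, pessimistic):
    """Classical PERT (beta-approximation) expected duration."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def earliest_finish(durations, predecessors):
    """Forward pass: earliest finish time of every operation."""
    finish = {}
    def ef(op):
        if op not in finish:
            start = max((ef(p) for p in predecessors.get(op, ())), default=0.0)
            finish[op] = start + durations[op]
        return finish[op]
    for op in durations:
        ef(op)
    return finish

durations = {
    "A": pert_mean(2, 4, 6),   # expected 4.0
    "B": pert_mean(1, 2, 9),   # expected 3.0
    "C": pert_mean(3, 3, 3),   # expected 3.0
}
predecessors = {"C": ("A", "B")}   # C starts only after A and B finish
finish = earliest_finish(durations, predecessors)
```

The project length is the finish time of the last operation: max(4, 3) + 3 = 7; successive refinement of the durations would update these times iteratively.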
Keywords:
Optimization problem, Rolling wave planning, Critical chain method, PERT, Critical path method, Multiphase systems, Task assignment, Project management, Time constraints, Random service duration
Reference:
Pekunov V.V.
Testing of the droplet phase model during the experiment on modeling the formation of acidulous cloud
// Software systems and computational methods.
2021. № 1.
P. 46-52.
DOI: 10.7256/2454-0714.2021.1.35104 URL: https://en.nbpublish.com/library_read_article.php?id=35104
Abstract:
The problem of numerically modeling the formation (as a result of condensation growth and droplet collisions) and development of a primary acidulous cloud involves various factors: temperature gradients, turbulence, direct solar radiation heating the air and the walls of buildings, diffuse solar radiation (which describes radiative cooling), and the transfer of gaseous pollutants and their absorption by droplets. The author earlier formulated a corresponding complex mathematical model that takes these factors into account. This article sets the task of testing the droplet component of this model by numerically modeling the processes in the emerging cloud and comparing the results with theoretical and empirical correlations. New results are obtained from the numerical modeling of an acidulous cloud in the air over a vast, densely built-up urban area on the basis of the comprehensive mathematical model, which accounts for the factors listed above and relies on an interpolation-sectional submodel of the droplet phase. The author models the dynamics and kinetics of a cloud that absorbs gaseous sulphur dioxide and obtains results on the intensity of absorption of this pollutant in the forming cloud. Comparison of these results with known data (the Hrgian-Mazin droplet distribution and the interpolation relation for the water content of the cloud) demonstrated good agreement in both the droplet distribution and the water content. The conclusion is drawn that the ecological model, including the special droplet-phase submodel, is sufficiently adequate.
Keywords:
approbation, mathematical model, droplet’s size distribution, water content, acid rain, cloud, numerical simulation, interpolation relations, condensation, coalescence
Reference:
Konov E.A., Sorokoumov P.S.
The available methods of computer modeling of biofilms and their development trends
// Software systems and computational methods.
2020. № 4.
P. 53-68.
DOI: 10.7256/2454-0714.2020.4.34615 URL: https://en.nbpublish.com/library_read_article.php?id=34615
Abstract:
The object of this research is the available means of computer modeling of bacterial communities known as biofilms. Such communities include the majority (95-99%) of bacteria and are ubiquitous. Biofilms are much more resistant than single bacteria to antibiotics and other antibacterial agents, owing to the weak permeability of the intercellular environment and the slowed metabolism of some members of the community. Studying techniques for influencing biofilms is an important problem of biology, and its solution requires various computer modeling tools capable of delivering significant scientific results. The subject of this research is the mathematical models used in modern biofilm modeling techniques and the software that implements them. The conclusion is made that the available methods of biofilm modeling successfully reproduce many behavioral aspects of these bacterial communities, including growth, destruction, and self-regulation, yet certain relevant problems remain unresolved. This is because modern software for two- and three-dimensional agent-based modeling of biofilms (BSim, iDynoMiCS, CellModeller) requires competent programmers to describe the interactions between simulated objects. The most promising direction for further development of this software is a more active use of the tools for describing agent behavior and interaction applied in artificial intelligence, for example finite-state automata or production rule systems, with mandatory preservation of the biological content of the models.
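The finite-state-automaton agent style mentioned above can be sketched with a toy bacterial agent that switches between "growing" and "dormant" depending on the local nutrient level. The states, threshold, and growth rate are invented for illustration and carry no biological calibration.

```python
# Toy finite-state-automaton agent: a cell grows when nutrient is
# plentiful and goes dormant when it falls below a threshold.

class BacteriumFSM:
    THRESHOLD = 0.2   # nutrient level below which the cell goes dormant

    def __init__(self):
        self.state = "growing"
        self.biomass = 1.0

    def step(self, nutrient):
        # Transition rules of the automaton.
        if self.state == "growing" and nutrient < self.THRESHOLD:
            self.state = "dormant"
        elif self.state == "dormant" and nutrient >= self.THRESHOLD:
            self.state = "growing"
        # State-dependent action: only growing cells add biomass.
        if self.state == "growing":
            self.biomass += 0.1 * nutrient

cell = BacteriumFSM()
for nutrient in [1.0, 0.5, 0.1, 0.1, 0.8]:
    cell.step(nutrient)
```

An agent-based biofilm model would run many such automata on a grid with nutrient diffusion; describing behavior as explicit states and transitions is what keeps the biological content readable to non-programmers.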
Keywords:
rule-based system, finite state machine, individual-based modelling, cell model, quorum sense, computer modeling, behavior model, agent model, biofilm, water treatment plant
Reference:
Avtushenko A.A., Ripetskii A.V., Avtushenko A.A.
Modeling and testing of a thermal board adapted to additive technologies for manufacturing using computational methods
// Software systems and computational methods.
2019. № 4.
P. 1-19.
DOI: 10.7256/2454-0714.2019.4.30631 URL: https://en.nbpublish.com/library_read_article.php?id=30631
Abstract:
The authors consider a geometric model of a thermal board based on actual data obtained using a methodology for adapting heat exchangers to additive manufacturing technologies. The problem of computationally finding the most effective thermal board variant among the selected options is investigated, and a comparison is made with an experimental sample manufactured additively in order to verify the chosen method of obtaining the computational data. The authors analyze 62 selected thermal board variants, with adaptation of the computational grid in the thermal board matrix. For testing, a special installation was assembled and samples were manufactured on an EOS M290 printer. The main research methods are computational and comparative-analytical methods for obtaining data on the selected variants and determining the effective sample, as well as experimental methods for verifying the data. The novelty of the study lies in the development of an effective geometric form of the thermal board and the additive manufacture of the selected sample. The selected geometric model of the thermal board is manufactured additively at the lowest cost; the rods and bases of the thermal board are self-supporting elements that need no supports. The results of the tests of the prototype thermal board confirm the correctness of the selected calculation method for determining the effective design of an additively manufactured thermal board.
Keywords:
type of heat exchanger, geometry optimization, supports, computational grid, computational method, additive manufacturing, finned rods, radioelectronic equipment, thermodynamic characteristic, test type of heat exchanger
Reference:
Pekunov V.V.
Refined calculation of droplet distributions in modeling atmospheric multiphase media
// Software systems and computational methods.
2019. № 4.
P. 95-104.
DOI: 10.7256/2454-0714.2019.4.30707 URL: https://en.nbpublish.com/library_read_article.php?id=30707
Abstract:
In this article the author considers the problem of increasing the accuracy of finding adequate droplet distributions in the numerical simulation of multiphase media that include a droplet phase. The problem is especially relevant when calculating distributions with discontinuities, which arise during inter-cell transfer of droplets that have their own velocity, as well as in the presence of sharp droplet influxes, for example of technogenic origin. Problems of this kind often arise when calculating the formation and spread of pollutants in the air, in particular when modeling acid rain. The construction of the distributions is considered using methods of computational mathematics (interpolation theory), taking into account the physical laws of conservation of droplet mass and number. Elements of the method of moments (Hill's method) and the sectional approach to modeling the droplet phase are used. A new approach is proposed for modeling droplet distributions by piecewise spline interpolation over the density and concentration of the droplet components, relying also on preliminary piecewise-linear distributions. The results were compared with data obtained by direct modeling of many droplets and with data obtained using only piecewise-linear distributions. The proposed approach is shown to be more accurate than the original method using only piecewise-linear distributions, and considerably faster than the Lagrangian approach.
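The conservation constraints mentioned above can be illustrated with a minimal sketch: a piecewise-linear droplet distribution tabulated at section boundaries is rescaled so that its integral (here standing in for the total droplet number) matches a prescribed value. The sample numbers are invented for illustration, and the article's full spline construction is not reproduced.

```python
# Enforce an integral (conservation) constraint on a piecewise-linear
# distribution given by nodes (xs, ys).

def trapezoid(xs, ys):
    """Integral of the piecewise-linear function through nodes (xs, ys)."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))

def rescale_to_total(xs, ys, total):
    """Scale node values so the piecewise-linear integral equals `total`."""
    factor = total / trapezoid(xs, ys)
    return [y * factor for y in ys]

radii = [1.0, 2.0, 3.0, 4.0]       # section boundaries (droplet radius)
density = [0.0, 4.0, 4.0, 0.0]     # number density at the boundaries
scaled = rescale_to_total(radii, density, total=12.0)
```

A spline-based distribution would impose the same kind of integral constraints per section while keeping the curve smooth, which is where the accuracy gain comes from.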
Keywords:
multi-component flows, pollutant absorption, evaporation, condensation, multi-phase media, splines, interpolation, droplet distribution, method of moments, calculation speedup
Reference:
Zinoviev A.N.
The technology for observing the effect of thermo-gravitational anomaly by the equipment of the space radio telescope of the RadioAstron project on the orbital resonance interval.
// Software systems and computational methods.
2019. № 4.
P. 39-54.
DOI: 10.7256/2454-0714.2019.4.31209 URL: https://en.nbpublish.com/library_read_article.php?id=31209
Abstract:
The object of study is the Spektr-R spacecraft carrying a space radio telescope. The equipment of two onboard hydrogen frequency standards is used as sensors. Long-term monitoring of the telemetric and scientific information transmitted from the spacecraft suggested the presence of external physical influences on the equipment of the space radio telescope. The evolving elliptical orbit of the Spektr-R spacecraft made it possible to scan the region of the spacecraft's orbital resonance with the Moon and to estimate the magnitude of the physical effect on the onboard hydrogen frequency standard. The range of the Spektr-R spacecraft was measured by the reverberation-correlation method, which allows synchronous range measurements during radio astronomy observations. The presented technology confirmed the assumption of a gravitational nature of the physical impact on the elements of the space radio telescope, and a new method for detecting orbital resonance on board a spacecraft was obtained. The change in the pressure of molecular hydrogen under orbital resonance conditions significantly reduced the synchronization error between the onboard and ground quantum time scales.
Keywords:
molecular detector, space radio telescope, ion pump, gravitational wave, molecular hydrogen pressure, highly informative radio channel, hydrogen frequency standard, time dilation, spacecraft, quantum time scale
Reference:
Panchuk K.L., Myasoedova T.M.
Description of a discretely defined flat contour with a composite line of rational second-order Bezier curves
// Software systems and computational methods.
2019. № 3.
P. 49-60.
DOI: 10.7256/2454-0714.2019.3.30637 URL: https://en.nbpublish.com/library_read_article.php?id=30637
Abstract:
The object of study is the shaping of a curve on a discrete set of source data. The initial data are a discrete series of node points with tangents at them and the curvature value of the first segment at its initial node. The subject of the study is the rational second-order Bezier curve. The authors investigate in detail how to obtain segments of rational Bezier curves and join them with C2 smoothness in order to obtain a Bezier spline. A mathematical method is applied based on the analytical representation of rational second-order Bezier segments using the apparatus of mathematical analysis and differential calculus. The novelty of the study lies in the fact that the obtained mathematical model of the spline makes it possible to indicate directly, in the process of shaping, the types of its constituent segments: parabolic, elliptic, or hyperbolic. It is shown that the standard form of the Bezier curve representation can be reduced to a simpler form; the proposed model is qualitatively different from existing models. Numerical examples of obtaining open and closed Bezier splines are considered.
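A rational second-order Bezier segment as discussed above can be evaluated in a few lines: with control points P0, P1, P2 and a weight w on the middle point, the conic type follows from w (a standard result for this representation: w < 1 elliptic, w = 1 parabolic, w > 1 hyperbolic). This is a minimal evaluation sketch, not the authors' spline model.

```python
# Point on a rational quadratic Bezier curve at parameter t in [0, 1];
# only the middle basis function carries the weight w.

def rational_bezier2(p0, p1, p2, w, t):
    b0 = (1 - t) ** 2
    b1 = 2 * (1 - t) * t * w
    b2 = t ** 2
    denom = b0 + b1 + b2
    return tuple((b0 * a + b1 * b + b2 * c) / denom
                 for a, b, c in zip(p0, p1, p2))

# With w = 1 the segment is an ordinary (parabolic) quadratic Bezier.
mid = rational_bezier2((0.0, 0.0), (1.0, 2.0), (2.0, 0.0), w=1.0, t=0.5)
```

A C2 Bezier spline joins such segments while matching position, tangent, and curvature at the shared nodes, which fixes the weights and inner control points.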
Keywords:
parametric representation of the curve, closed loop, C2 smoothness order, rational fractional curve, Bezier curve, spline interpolation, parameterization, shaping, analytical method, discrete set
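As a rough illustration of the kind of segment discussed above, the sketch below evaluates a rational second-order Bezier segment and classifies the conic by its middle weight. This is not the authors' model; the evaluation formula and the weight-based classification are standard textbook facts, and the control points are arbitrary.

```python
def rational_bezier2(p0, p1, p2, w, t):
    """Point on a rational quadratic Bezier segment with control points
    p0, p1, p2 and middle-control-point weight w, for 0 <= t <= 1."""
    b0 = (1 - t) ** 2
    b1 = 2 * (1 - t) * t
    b2 = t ** 2
    denom = b0 + w * b1 + b2  # rational normalization
    return tuple((b0 * a + w * b1 * b + b2 * c) / denom
                 for a, b, c in zip(p0, p1, p2))

def conic_type(w):
    """Conic type of the segment, determined by the middle weight."""
    return "elliptic" if w < 1 else "parabolic" if w == 1 else "hyperbolic"
```

With the symmetric control polygon (0,0), (1,1), (2,0) and w = 1 the midpoint lands at (1.0, 0.5), the parabolic case.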
Reference:
Bulychev R.N..
A method for constructing a multidimensional model for finding values of molding process parameters
// Software systems and computational methods.
2019. № 3.
P. 61-72.
DOI: 10.7256/2454-0714.2019.3.30474 URL: https://en.nbpublish.com/library_read_article.php?id=30474
Abstract:
The object of study is the geometric model of the incremental forming process for sheet-material parts. The subject of the study is a graphical model, based on multidimensional descriptive geometry, for finding the optimal values of the forming process parameters. The author examines the main optimizing factors and process parameters. Particular attention is paid to constructing geometric models for determining the optimizing factors (contact zone, processing time, and surface quality of the resulting part) and for optimizing the parameters (profile and trajectory of the shaping tool) of incremental forming. The research method is the construction of a graphical optimization model of the process using the Radishchev projection drawing for multidimensional space. Mathematical modeling was also applied to verify the correctness of the obtained optimal parameters. The novelty of the study lies in applying multidimensional descriptive geometry methods to a multifactor, multicomponent system in the study of the incremental forming process in order to improve the quality of manufactured parts. Main conclusions: using a graphical optimization model based on multidimensional descriptive geometry, a range of values of the incremental forming parameters was obtained for a product of predicted quality. The process was simulated with the obtained parameters, and a conical sheet-metal part was produced by incremental forming. The results correspond to the specified quality criteria.
Keywords:
optimizing factors, parameter optimization, optimization, Radishchev's drawing, sheet stamping, layer-by-layer deformation, incremental sheet forming, process parameters, multidimensional geometry, modeling
Reference:
Pesoshin V.A., Kuznetsov V.M., Rakhmatullin A.K..
Synthesis and analysis of search algorithms for a set of inverse-segment sequences with a given period
// Software systems and computational methods.
2019. № 3.
P. 73-84.
DOI: 10.7256/2454-0714.2019.3.30541 URL: https://en.nbpublish.com/library_read_article.php?id=30541
Abstract:
The problem of synthesizing search algorithms for a set of inverse-segment sequences with a given period is considered. Inverse-segment sequences are pseudorandom sequences of practical interest due to the equiprobable occurrence of the symbols 0 and 1 and the variety of their autocorrelation functions. In hardware implementations, the most widely used inverse-segment sequences are those formed by pseudorandom sequence generators based on an n-bit shift register with linear feedback. To use inverse-segment sequences in applied problems, and to study their properties further, one must solve the problem of finding the set of inverse-segment sequences with a given period, and the number of disjoint sequences grows significantly with increasing n. Several basic approaches to the synthesis of search algorithms for inverse-segment sequences are considered, and the resulting algorithms are analyzed for algorithmic complexity and memory consumption. A number of algorithms for finding inverse-segment sequences with a given period, based on enumerating the initial states of the shift register, are investigated. All the considered algorithms use a compact method of storing inverse-segment sequences in various data structures, identifying each sequence by a single number. Their algorithmic complexity and memory consumption are studied, and recommendations are given for choosing a search algorithm for inverse-segment sequences and for synthesizing new algorithms.
Keywords:
computational complexity, optimization, algorithm analysis, autocorrelation, Bloom Filter, shift register, pseudo-random number generators, inverse-segmental sequences, red-black trees, probabilistic data structures
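The shift-register generators underlying these sequences can be sketched with a generic Fibonacci LFSR. This is not the authors' search algorithm, only the building block it enumerates; the tap positions below are an illustrative maximal-length choice for a 4-bit register.

```python
def lfsr_period(taps, state, nbits):
    """Period of a Fibonacci LFSR: the register is shifted left each step,
    and the new low bit is the XOR of the bits at the tap positions.
    `state` must be a nonzero initial state of width `nbits`."""
    start, period = state, 0
    mask = (1 << nbits) - 1
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1  # feedback bit
        state = ((state << 1) | fb) & mask
        period += 1
        if state == start:
            return period
```

A maximal-length n-bit LFSR visits all 2^n - 1 nonzero states, e.g. period 15 for n = 4 with taps at bits 3 and 2.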
Reference:
Pozolotin V.E., Sultanova E.A..
Application of data transformation algorithms in time series analysis for elimination of outliers
// Software systems and computational methods.
2019. № 2.
P. 33-42.
DOI: 10.7256/2454-0714.2019.2.28279 URL: https://en.nbpublish.com/library_read_article.php?id=28279
Abstract:
The subject of the research is data transformation algorithms for eliminating outliers in time series. The author considers transformation algorithms based on the arithmetic mean and the median, as well as compound smoothing methods such as 4253H and 3RSSH, and examines how the statistical characteristics of a time series change when the transformations are applied; attention is also paid to the visual presentation of the data and to the change in the behavior of the series when outliers are introduced into it. Both theoretical and empirical research methods were used: the relevant literature and software systems were studied, and a series of experiments was conducted. Computational experiments on smoothing time series were carried out both with and without outliers, and the processing results were compared. A software tool is proposed that supports various smoothing filters; it has been tested on input data with various characteristics.
Keywords:
outliers, smoothing by average, smoothing by median, smoothing, filter, transformation, time series transformation, information processing, 4253H filter, 3RSSH filter
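The median-based smoothing the article compares can be illustrated with a minimal running-median filter. This is a far simpler filter than the compound 4253H and 3RSSH smoothers, which chain several median and Hanning passes, but it shows the basic outlier-suppressing behavior.

```python
def median_smooth(series, window=3):
    """Running-median filter: each point is replaced by the median of its
    window.  Windows are truncated at the series boundaries; for an
    even-length window the upper middle value is taken."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = sorted(series[max(0, i - half): i + half + 1])
        out.append(chunk[len(chunk) // 2])
    return out
```

A single spike such as the 100 in [1, 2, 100, 3, 4] is removed entirely, which is why median passes are preferred over averaging when outliers are present.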
Reference:
Ohanyan V.K., Bardakhchyan V.G., Simonyan A.R., Ulitina E.I..
Fuzzification of convex bodies in Rn
// Software systems and computational methods.
2019. № 2.
P. 1-10.
DOI: 10.7256/2454-0714.2019.2.29894 URL: https://en.nbpublish.com/library_read_article.php?id=29894
Abstract:
The paper is dedicated to the generalization of Matheron's theorem about the covariogram to the case where a possible estimation error occurs, modelled by fuzzification of convex bodies. The classical identification of convex bodies does not cover the cases when the input information and measurements contain an error term. This is a general issue when applying line-segment distributions to recover the covariogram and, later, the body itself. The authors define a fuzzy body and ask what the length distribution will be for it, and through this procedure generalize Matheron's theorem to this case. The authors extensively use fuzzy statistics and fuzzy random variables to extend convex bodies and length distribution functions to the fuzzy case, relying on several properties of fuzzy numbers and fuzzy calculus techniques (mainly Aumann integration). The authors introduce generalized fuzzy distributions in order to apply them in a general setting of fuzzy convex bodies. Fuzzy convex bodies are defined by adding fuzzy numbers in Rn to a convex body and subtracting them from it (in the Hukuhara sense). The generalization of Matheron's theorem for the fuzzy case is then derived, based on fuzzy function calculus. Fuzzy convex bodies can be seen as a collection of convex bodies, and a fuzzy covariogram based on them is introduced.
Keywords:
Estimation error, V-lines, Gaussian field, Aumann integration, Integral geometry, Fuzzy covariogram, Fuzzy distribution, Convex body, Matheron's theorem, Hukuhara Differentiability
Reference:
Ponomarev A..
Application of probabilistic graphical models for data aggregation in large-scale human-machine computing systems
// Software systems and computational methods.
2019. № 1.
P. 59-69.
DOI: 10.7256/2454-0714.2019.1.29446 URL: https://en.nbpublish.com/library_read_article.php?id=29446
Abstract:
The article is devoted to ensuring the quality of results in information processing systems where some operations are performed by people who interact with the system via the Internet. Such systems are widely used for various tasks, but involving a person in information processing brings fundamental human limitations: low speed of information processing, the need for motivation, and the possibility of errors or deliberate distortion of information. The development of methods and tools for managing the quality of results obtained with such systems is therefore an urgent task. The article proposes a data aggregation model for improving the quality of results obtained from large-scale human-machine computing. The application of the model is illustrated by the problem of annotating and searching for images taken at mass athletics events (runs). The effect of aggregation is assessed via simulation modeling. The results show that aggregation is especially effective when the markup quality is poor; even with high-quality markup, aggregation increases the completeness of search results. In general, data aggregation in the processing of human-machine computing results is a promising approach, and the use of probabilistic graphical models for aggregation allows the accuracy of the system's results to grow smoothly as the amount of available information increases.
Keywords:
Probabilistic graphical models, Bayesian networks, Data aggregation, Data processing, Annotating images, Image markup, Crowdsourcing, Crowd computing, Collective intelligence, Information search
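A minimal flavor of such aggregation, far simpler than the article's graphical model, is a naive-Bayes vote over workers with assumed per-worker accuracies. The worker names and accuracy values below are hypothetical.

```python
def aggregate_labels(votes, accuracy):
    """Naive-Bayes aggregation of binary crowd labels.

    votes    -- list of (worker, label) pairs with label in {0, 1}
    accuracy -- dict mapping worker -> assumed probability of being correct

    Returns the posterior probability that the true label is 1,
    assuming a uniform prior and independent workers."""
    p1, p0 = 1.0, 1.0
    for worker, label in votes:
        a = accuracy[worker]
        p1 *= a if label == 1 else 1 - a  # likelihood if true label is 1
        p0 *= a if label == 0 else 1 - a  # likelihood if true label is 0
    return p1 / (p1 + p0)
```

Two reliable workers voting 1 outweigh one mediocre worker voting 0, which is the "smooth accuracy growth with more information" behavior the abstract describes.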
Reference:
Zinov'ev A.N..
Synchronous range measurement technology using the reverberation effect in terms of the orbital flight of the RadioAstron space radio telescope
// Software systems and computational methods.
2019. № 1.
P. 107-119.
DOI: 10.7256/2454-0714.2019.1.29494 URL: https://en.nbpublish.com/library_read_article.php?id=29494
Abstract:
The author considers in detail the discovery of a new effect: the appearance of characteristic responses caused by the signal being retransmitted from the space radio telescope to the ground tracking station in the "KOGERENT" mode. This mode uses a terrestrial frequency standard signal to form the necessary sets of heterodyne and clock frequencies on board the space radio telescope; implementing it requires switching on the phase-locked-loop equipment of the highly informative radio channel. A technology is proposed that allows the range from the Spektr-R spacecraft to the ground tracking station to be measured during radio astronomy sessions of the ground-space interferometer of the RadioAstron project. The novelty and practical significance of the study lie in the fact that, in comparison with laser measurement technologies, this technology depends far less on weather conditions in the Earth's atmosphere. Ways to increase the resolution and reduce the range measurement error using the detected effect are listed. The presented technology thus helps to offset this shortcoming of the RadioAstron project.
Keywords:
phase lock loop, spacecraft, interferometer, Earth's atmosphere, Spektr-R, KOGERENT, RadioAstron, reverberation, space radio telescope, Doppler shift
Reference:
Trub I..
On approximation of the output of a probabilistic model of hierarchical bit indices
// Software systems and computational methods.
2018. № 4.
P. 102-113.
DOI: 10.7256/2454-0714.2018.4.27809 URL: https://en.nbpublish.com/library_read_article.php?id=27809
Abstract:
The subject of the study is a probabilistic model of hierarchical bitmap indexes of databases. The object of the study is the model's output: a three-parameter discrete distribution of the number of indexes needed to execute database queries, parametrized by the intensity of record insertion, the average query length, and the size of the large index. The author considers the choice of a hypothesis among known theoretical distributions, a method for testing the hypothesis, the selection of functions for approximating the dependence of the expectation on the third parameter, and the selection of a function approximating the dependence of the minimum point of the expectation with respect to the third parameter on the first two. These dependencies matter because the optimal choice of the third parameter is the designer's goal, while the first two are the initial data of the model. The research methodology comprises methods of mathematical statistics, in particular parameter estimation and Pearson's hypothesis test, methods for constructing best approximations, in particular the least-squares method, and the theory of third-order curves. The main conclusions: the best approximation for the studied family of distributions is the Polya distribution, and the best approximations for the dependence of the expectation on the third parameter are the Bacon-Watts model and the heat-capacity model. A special contribution of the author is the derivation of an empirical formula of practical significance: it allows the designer, given the first two parameters and without cumbersome model calculations, to obtain an approximate optimal value of the third parameter and thus build a database index of optimal size.
The novelty of the research lies in obtaining approximate dependencies for a new type of distribution that cannot be described by a closed formula.
Keywords:
power function, output analysis, least square method, heat capacity model, third order curves, Bacon-Watts model, Polya distribution, negative binomial distribution, discrete probability distribution, hierarchical bitmap indexes
Reference:
Butusov D.N., Karimov A.I., Tutueva A.V., Krasil'nikov A.V., Goryainov S.V., Voznesensky A.S..
Hybrid simulation of the Rossler system by synchronizing analog and discrete models
// Software systems and computational methods.
2018. № 4.
P. 1-14.
DOI: 10.7256/2454-0714.2018.4.27828 URL: https://en.nbpublish.com/library_read_article.php?id=27828
Abstract:
The article explores a hybrid modeling technology for chaotic systems: synchronizing digital and analog models of the Rössler system that interact via analog-to-digital and digital-to-analog conversion paths. Unidirectional and bidirectional variants of chaotic synchronization are considered, and the synchronization error is estimated for each case. For the analog implementation of the Rössler system, a circuit based on operational amplifiers, multipliers, and precision passive elements has been developed. The digital model of the system is based on a semi-implicit, hardware-oriented numerical integration method of second-order algebraic accuracy. To substantiate the choice of method, performance graphs of various ordinary differential equation solvers simulating the Rössler system are presented; the chosen semi-implicit method has the highest computational efficiency among all second-order methods. The ability to synchronize analog and digital models of a chaotic system is demonstrated experimentally. The synchronization of two and three models of the Rössler system in various connection topologies is considered. Analysis of the synchronization error shows that the greatest accuracy is achieved with a fully coupled topology based on bidirectional synchronization of the three Rössler models.
Keywords:
semi-implicit method, Rossler system, digital-to-analog conversion, chaotic systems, nonlinear dynamics, hybrid model, chaos, chaotic synchronization, digital model, analog model
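The digital side of such a setup can be sketched with a simplified semi-implicit step for the Rössler system: states are updated sequentially, each using the freshest values, with one stage treated implicitly. This is a first-order stand-in for the article's second-order scheme, with the standard parameters a = 0.2, b = 0.2, c = 5.7 assumed.

```python
def rossler_semi_implicit_step(x, y, z, h, a=0.2, b=0.2, c=5.7):
    """One simplified semi-implicit step for the Rossler system
    x' = -y - z,  y' = x + a*y,  z' = b + z*(x - c).
    x and y are advanced sequentially; z is treated implicitly:
    z_new = (z + h*b) / (1 - h*(x_new - c))."""
    x = x + h * (-y - z)
    y = y + h * (x + a * y)          # uses the already-updated x
    z = (z + h * b) / (1 - h * (x - c))
    return x, y, z

def simulate(n, h=0.01):
    """Integrate n steps from a point near the attractor."""
    state = (0.1, 0.0, 0.0)
    for _ in range(n):
        state = rossler_semi_implicit_step(*state, h)
    return state
```

The implicit treatment of z keeps the stiff z-spikes of the Rössler attractor from destabilizing the step, which is one motivation for semi-implicit schemes in such generators.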
Reference:
Perepelkina O.A., Kondratov D.V..
Development of a mathematical model for assessing the effectiveness of the implementation of an electronic document management and records management system in the executive bodies of state power
// Software systems and computational methods.
2018. № 4.
P. 114-123.
DOI: 10.7256/2454-0714.2018.4.28420 URL: https://en.nbpublish.com/library_read_article.php?id=28420
Abstract:
The relevance of the research topic stems from the fact that the activity of government bodies consists in making management decisions within the framework of their powers. The introduction of a document management and records management system is one of their priority tasks, whose successful implementation will allow a transition to a higher-quality level of functioning. The effectiveness of this process is determined by the document flow and records management system, which is the object of modeling. The purpose of the study is to develop a mathematical model for assessing the effectiveness of implementing an electronic document management and records management system in government bodies in order to improve their performance. The results were obtained using the theory of system analysis, set theory, graphical models, and a system dynamics model. The scientific novelty lies in developing a model for evaluating the effectiveness of introducing such a system for simulation modeling and forecasting of its main indicators. The main conclusion is that the authors constructed a mathematical model for this assessment using Forrester's system dynamics method. The developed model is written as a system of differential equations and presented as a Cauchy problem. The study found that the fourth-order Runge-Kutta method is appropriate for solving this system: despite the increased volume of calculations, it has an advantage over first- and second-order methods because it provides a small local error, which allows a larger integration step and, consequently, a shorter calculation time.
Keywords:
Euler method, Cauchy problem, differential equations, system dynamics model, soft modeling models, electronic document flow system, electronic document flow, document flow, modified Euler method, Runge-Kutta method
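The fourth-order Runge-Kutta conclusion can be illustrated with the classical RK4 step for a Cauchy problem. This is a generic sketch; the actual document-flow right-hand side is not given in the abstract, so the test problem below (y' = y) is illustrative.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    where y is a list of state variables."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

For y' = y, y(0) = 1, ten steps of h = 0.1 reproduce e to about six decimal places, the small local error the abstract refers to.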
Reference:
Galochkin V.I..
Search for the paths of the minimum total length in the graph
// Software systems and computational methods.
2018. № 2.
P. 60-66.
DOI: 10.7256/2454-0714.2018.2.25124 URL: https://en.nbpublish.com/library_read_article.php?id=25124
Abstract:
The author considers the problem of finding k non-intersecting paths of minimal total length from a given initial vertex to all other vertices on a weighted directed graph with non-negative arc weights. The author shows that a "greedy" approach cannot be used, i.e., finding the best path, removing its vertices from the graph along with the incident arcs, and repeating the search. The problem reduces to finding shortest paths on an implicit graph of n^k vertices with some additional constraints, where n is the number of vertices of the original graph. The sparseness of the implicit graph allows rational data structures to be used, reducing the complexity of the path-finding algorithm. The author implemented the described algorithm in software. During testing, complete graphs were generated with arc weights chosen so that the paths of minimal total length consisted of a large number of arcs. For practical purposes and computational feasibility, small values of k are of interest; in this case it is correct to treat k as a constant, and the complexity of the algorithm is estimated as O(n^(k+1) log n) with O(n^k) memory. The running time of the program on various tests does not contradict the obtained complexity estimates.
Keywords:
Dijkstra's algorithm, Shortest path, Length of a path, Sparse graph, Implicit graph, Weighted graph, Directed graph, Graph, Yen's algorithm, Complexity of algorithms
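The shortest-path building block behind such a search can be sketched with a standard heap-based Dijkstra implementation; the article's implicit n^k product graph and the non-intersection constraints are left out of this sketch.

```python
import heapq

def dijkstra(adj, source):
    """Shortest path lengths from `source` on a directed graph with
    non-negative arc weights; adj maps vertex -> list of (neighbor, weight).
    Only reachable vertices appear in the returned dict."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With a binary heap this runs in O(m log n); on the implicit product graph described above, the same routine is applied to a vertex set of size n^k, which is where the O(n^(k+1) log n) estimate comes from.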
Reference:
Garmaev B.Z., Boronoev V.V..
The selection of continuous wavelet transform basis for finding extrema of biomedical signals
// Software systems and computational methods.
2018. № 1.
P. 45-54.
DOI: 10.7256/2454-0714.2018.1.23239 URL: https://en.nbpublish.com/library_read_article.php?id=23239
Abstract:
The authors consider the problem of choosing a wavelet for use in a continuous wavelet transform. The whole advantage of wavelet analysis lies in the possibility of choosing a basis among a large number of wavelets. The choice of the analyzing wavelet is usually determined by what information needs to be extracted from the signal under study. Each wavelet has characteristic features in both time and frequency space, so different wavelets can reveal and emphasize different properties of the analyzed signal. Choosing the analyzing wavelet function is one of the issues whose successful solution determines the success of wavelet analysis in the problem being solved; sidestepping it either deters newcomers from using wavelet analysis or significantly narrows its field of application. The choice is especially important for the continuous wavelet transform, where the result is a three-dimensional continuous wavelet spectrum. This makes the result hard to analyze, so analysis is often limited to a visual inspection of the projection of the wavelet spectrum onto the scale-time plane; it also complicates the choice of wavelet, since changing the wavelet produces numerous changes in the projection that cannot all be analyzed. The purpose of this work is to show a method for substantiating the choice of the analyzing wavelet function for the continuous wavelet transform, using as an example the problem of localizing the extrema of a digital signal. The work uses the continuous wavelet transform and considers wavelet coefficients at different scales, analyzing changes not in the wavelet spectrum as a whole but in its individual parts.
The proposed technique gives an algorithm for analyzing continuous wavelet spectra with different wavelet functions in order to evaluate their suitability for finding extrema. An important point is the transition from a visual analysis of three-dimensional wavelet spectra to a quantitative analysis of two-dimensional wavelet coefficients at different scales. This transition shows how wavelet analysis works inside three-dimensional wavelet spectra (usually analyzed visually) and automates signal analysis; it also allows the accuracy of extremum localization with a particular wavelet to be estimated numerically. As a result, the article shows that the Haar wavelet is the most accurate for finding signal extrema by means of continuous wavelet analysis. This method of choosing a basis can be used in problems where an acceptable quantitative estimate of the accuracy of the continuous wavelet transform is possible, allowing three-dimensional wavelet spectra to be analyzed not only qualitatively (visually) but also quantitatively.
Keywords:
arterial blood pressure signal, scale, extrema definition, basis selection, wavelet, wavelet analysis, modelling, wavelet Haar, continuous wavelet transform, method
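A toy version of extremum search via the Haar wavelet: the continuous transform at one scale is approximated by a discrete sum (left half-window minus right half-window), and sign changes of the coefficients mark candidate extrema. This is an illustrative reduction of the idea, not the authors' algorithm.

```python
def haar_coeffs(signal, scale):
    """Discrete Haar-wavelet coefficients at one scale: for each position,
    (sum of the left half-window) minus (sum of the right half-window)."""
    coeffs = []
    for b in range(len(signal) - 2 * scale + 1):
        left = sum(signal[b:b + scale])
        right = sum(signal[b + scale:b + 2 * scale])
        coeffs.append(left - right)
    return coeffs

def extrema_from_haar(signal, scale=2):
    """Indices near which the Haar coefficients change sign,
    i.e. candidate extrema of the signal."""
    c = haar_coeffs(signal, scale)
    return [i + scale for i in range(len(c) - 1)
            if c[i] == 0 or (c[i] < 0) != (c[i + 1] < 0)]
```

On a rising slope the coefficient is negative and on a falling slope positive, so a maximum shows up as a zero crossing, exactly the kind of two-dimensional (per-scale) quantitative check the article advocates instead of visual inspection.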
Reference:
Gorbunova T.N..
Construction and investigation of a filtration model for a suspension in a porous soil
// Software systems and computational methods.
2018. № 1.
P. 55-62.
DOI: 10.7256/2454-0714.2018.1.25458 URL: https://en.nbpublish.com/library_read_article.php?id=25458
Abstract:
The subject of the study is the filtration problem describing the distribution of suspended solids in loose porous soil. The urgency of constructing the model is determined by the need to strengthen loose soil by pumping in, under pressure, a suspension that hardens to form a waterproof layer. The author's main goal is to model the motion of suspended particles and colloids and the formation of sediment in porous soil for various filtration regimes. The distributions of solid particles of various sizes, carried by the fluid and deposited on the framework of the porous medium, are studied at different rates of sediment growth. The one-dimensional filtration model with a particle retention mechanism comprises a hyperbolic system of first-order equations with inconsistent initial and boundary conditions that generate discontinuous solutions. For polydisperse media, a modified mathematical model is considered that describes the competition of particles of different sizes for small pores. A computational scheme for finding the numerical solution is constructed by the finite-difference method, and the method is optimized to improve convergence and reduce computation time. The main result is a multi-particle model of suspension filtration in porous soil that takes into account the variety of suspended particle sizes. The problem is solved numerically for various blocking filter coefficients, yielding solutions with a discontinuity at the concentration front; the numerical solutions are validated, and plots of the dependence of the concentrations of suspended and deposited particles on time and coordinate are constructed.
Keywords:
finite-difference methods, numerical solution, mathematical model, suspended particles, porous medium, deep bed filtration, grout, discontinuous solutions, retained particles, Euler's method
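A one-dimensional, single-particle-size caricature of such a model is a first-order upwind scheme for c_t + v c_x = -lam*c, with suspension injected at the inlet. All coefficients below are illustrative, and the multi-size competition for pores is omitted; the scheme merely shows the finite-difference treatment of a hyperbolic transport-retention equation.

```python
def upwind_filtration(nx=50, nt=200, v=1.0, lam=0.5, h=0.02, tau=0.005):
    """First-order upwind scheme for c_t + v*c_x = -lam*c:
    c is the suspended concentration, v the carrier-fluid speed, lam the
    retention rate.  Inlet boundary c(0, t) = 1; the CFL number
    tau*v/h = 0.25 keeps the scheme stable and monotone."""
    c = [0.0] * nx
    for _ in range(nt):
        new = [1.0]  # suspension injected at the inlet
        for i in range(1, nx):
            new.append(c[i]
                       - tau * v * (c[i] - c[i - 1]) / h  # upwind transport
                       - tau * lam * c[i])                # particle retention
        c = new
    return c
```

Because the scheme is monotone, the discontinuous inlet data produce a concentration front that stays monotone in x while numerical diffusion smears it, which is the behavior a sharper scheme must resolve at the front.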
Reference:
Perepelkina O.A., Kondratov D.V..
The use of "soft" mathematical modeling in developing a mathematical model for assessing the implementation of electronic document management systems and records management.
// Software systems and computational methods.
2018. № 1.
P. 63-72.
DOI: 10.7256/2454-0714.2018.1.25637 URL: https://en.nbpublish.com/library_read_article.php?id=25637
Abstract:
To date, the introduction of electronic document management and records management in the executive bodies of state power remains one of the most urgent tasks. The authors consider the use of "soft" mathematical modeling in developing a mathematical model for assessing the implementation of electronic document management and records management systems. Mathematical modeling is understood here as the process of putting a real object into correspondence with a certain mathematical object, called a mathematical model. The object of modeling is the electronic document management and records management system. The main objectives of modeling document flow in the executive bodies of state power are to increase the effectiveness of management activities, accelerate the movement of documents, and reduce the labor intensity of document processing. The authors used the formalization method, which consists in studying objects by representing their content and structure in symbolic form. The main conclusion is that, using "soft" mathematical modeling, a mathematical model for assessing the implementation of the electronic document management and records management system in the executive bodies of state power has been constructed. From the constructed directed graph, a system of differential equations is written down according to the rule that there are as many equations as unknown quantities; the model parameters (system coefficients) are found by computational methods.
Keywords:
system of differential equations, criterial values of the electronic document management and records management system, criteria for assessing the implementation of the system, mathematical modeling, model, electronic document management system, mathematical model, electronic document management, document management, documentation
Reference:
Butusov D.N., Tutueva A.V., Pesterev D.O., Ostrovskii V.Y..
The study of chaotic pseudo-random sequence generator on the basis of the ODE solvers
// Software systems and computational methods.
2017. № 4.
P. 61-76.
DOI: 10.7256/2454-0714.2017.4.24786 URL: https://en.nbpublish.com/library_read_article.php?id=24786
Abstract:
An approach to selecting the finite-difference scheme of a chaotic pseudo-random sequence generator is proposed, based on the use of step diagrams (h-diagrams). As a test problem, a generator based on the chaotic Rössler system is considered, discretized by explicit, implicit, and semi-implicit numerical methods of first- and second-order algebraic accuracy. The sequences produced by the different variants of the generator are checked for randomness with the NIST battery of statistical tests. The proposed approach's advantages in designing chaotic signal generators are shown: an essential (order-of-magnitude) reduction in device design time due to a new method of selecting the discretization step and the discrete operator. The effectiveness of semi-implicit finite-difference schemes for generating pseudo-random sequences by numerically solving chaotic differential equations is confirmed. The results can be used in cryptography, in the design of secure communication systems, and in numerical simulation of dynamical systems and mathematical statistics.
Keywords:
bifurcation, semi-implicit method, ODE solver, Rossler system, step diagram, NIST tests, numerical integration method, dynamical chaos, pseudo-random numbers, discrete operator
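The overall idea of an ODE-based generator can be sketched as follows: integrate the Rössler system (here with plain explicit Euler, the simplest of the schemes the article compares) and emit one bit per step by thresholding the x coordinate at zero. The step size, transient length, and thresholding rule below are illustrative; a real generator would be vetted with the NIST test suite.

```python
def chaotic_bits(n, h=0.01, a=0.2, b=0.2, c=5.7):
    """Sketch of a chaotic pseudo-random bit generator: explicit-Euler
    integration of the Rossler system, one bit per step from sign(x).
    The first 1000 steps are discarded as a transient."""
    x, y, z = 0.1, 0.0, 0.0
    bits = []
    for i in range(1000 + n):
        x, y, z = (x + h * (-y - z),       # all right-hand sides use
                   y + h * (x + a * y),    # the previous step's state
                   z + h * (b + z * (x - c)))
        if i >= 1000:
            bits.append(1 if x > 0 else 0)
    return bits
```

Since x oscillates around zero on the Rössler attractor, both bit values occur; the statistical quality of the stream is exactly what the h-diagram/NIST methodology of the article is designed to assess.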
Reference:
Trub I..
Numerical Modelling of the General Task of Bitmap-Indices Distribution
// Software systems and computational methods.
2017. № 3.
P. 35-53.
DOI: 10.7256/2454-0714.2017.3.22952 URL: https://en.nbpublish.com/library_read_article.php?id=22952
Abstract:
The subject of the research is a mathematical model in the form of a system of recurrent integral relations that describes the distribution of unit intervals in which at least one event of a random stream with an arbitrary distribution function has occurred. The author examines numerous aspects of the numerical implementation of this system, such as the use of the Laplace transform, numerical integration near discontinuity points, stability of calculations, validation of results, and the particularities of machine floating-point arithmetic. Special attention is paid to the connection between the computed data and the semantics of the applied problem whose solution they represent. The methodology is based on probability theory (distribution types and properties), methods of numerical mathematics (numerical integration, interpolation, the Laplace transform), software implementation of the mathematical model, and computational experiments. The main conclusions are the validity and numerical implementability of the mathematical model created by the author, as well as the substantiation of the numerical solution for an arbitrary distribution of the random stream events. The novelty lies in the fact that the author develops a numerical solution of the bitmap-index distribution for the Weibull, gamma, and log-normal distributions, among others, and analyzes dependencies of different kinds, such as the index density function and the average number of indexes for a specified interval length.
Keywords:
binomial distribution, Simpson method, stability of calculations, discontinuity point, quadrature expressions, numerical integration, numerical methods, Laplace transformation, probability density function, improper integral
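The abstract above mentions the Simpson method and numerical integration near discontinuity points. As a minimal sketch (the integrand, interval, and step count are illustrative, not taken from the paper), here is the composite Simpson rule; near a known discontinuity one would split the interval there and integrate each smooth piece separately:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# Integrate exp(-x) over [0, 1]; the exact value is 1 - exp(-1).
approx = simpson(lambda x: math.exp(-x), 0.0, 1.0)
```

With a thousand subintervals the fourth-order accuracy of Simpson's rule makes the error here negligible for smooth integrands.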
Reference:
Vagina M., Nigmatulin R..
Some Bounds for the Completion Time of Competing Jobs in Batch Planning
// Software systems and computational methods.
2017. № 3.
P. 54-60.
DOI: 10.7256/2454-0714.2017.3.24114 URL: https://en.nbpublish.com/library_read_article.php?id=24114
Abstract:
The subject of the research is graph-theoretic models for minimizing the completion time of competing jobs in batch planning. The great interest in such problems is explained by the numerous applications that arise in graph theory, in the planning of production and transport routes, in distribution calculations, etc. In general, there is no efficient algorithm for solving these problems. Particular attention is paid to the use of colorings of the conflict graph, which make it possible to build estimates of the minimum execution time of all jobs. The main research methods are methods of graph theory and computational experiment. Based on colorings of the conflict graph, two-sided estimates of the minimum execution time of competing jobs in batch planning are constructed. All the estimates found are attainable. They are linear combinations of the time slots for executing job batches. The coefficients of the linear combinations are expressed through various characteristics of the conflict graph and its subgraphs (for example, the chromatic number and the clique number of the graph).
Keywords:
completion time, full subgraph, chromatic number of graph, minimal coloring, conflict graph, graph theoretic model, batch planning, competing jobs, minimum time, work schedule
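The coloring-based bounds described above can be illustrated with a small sketch (the conflict graph below is a made-up example, not from the paper): a greedy coloring of the conflict graph gives an attainable upper bound on the number of batches, since jobs with the same color never conflict and can run in the same batch.

```python
def greedy_coloring(adj):
    """Greedy vertex coloring; returns {node: color}.  The number of
    colors used is an upper bound on the chromatic number, while any
    clique size is a lower bound."""
    colors = {}
    for v in adj:
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Conflict graph: jobs sharing a resource are adjacent and cannot run
# in the same batch.  A 4-cycle of conflicts needs only 2 batches.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
coloring = greedy_coloring(adj)
n_batches = max(coloring.values()) + 1
```

The total completion time is then bounded by a linear combination of the batch execution times, one term per color class, as the abstract describes.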
Reference:
Oleinikova S.A..
A mathematical model of a complex service system with stochastic parameters and mutual dependency between the jobs
// Software systems and computational methods.
2017. № 2.
P. 32-44.
DOI: 10.7256/2454-0714.2017.2.22457 URL: https://en.nbpublish.com/library_read_article.php?id=22457
Abstract:
The object of the research is complex service systems characterized by mutual dependence between jobs, random job durations, and the need to allocate resources rationally over time. These features motivate the development of a mathematical model intended to form the basis of the optimization problem of constructing a schedule. As the objective function, a generalized criterion that allows the most efficient allocation of resources over time is chosen. An important parameter of this model is the duration of service. Estimating this parameter is an important task, since it directly affects the accuracy of the resulting schedule. The construction of the mathematical model is based on the apparatus of probability theory and project management theory. To find the duration of service, a cubic equation connecting the variance, the expectation, and the mode of the distribution was obtained. As a result, the author obtained a mathematical model describing multi-stage service systems with stochastic service durations and mutually dependent jobs. This model may be used to describe a whole class of systems with the above features whenever resources must be allocated efficiently over time. The novelty lies in the use of a generalized resource criterion in conjunction with resource and time constraints. The author also derives an estimate of service time that is more accurate than existing analogues.
Keywords:
mathematical model, stochastic parameters, mutual dependence of works, random duration of service, resource criterion, time constraints, PERT, beta distribution, expected values, project management
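The PERT and beta-distribution keywords above refer to the classical approximation of a stochastic task duration, which the paper refines; as a baseline sketch (the numbers are illustrative), the standard PERT estimate derives a mean and variance from three expert estimates:

```python
def pert_estimate(a, m, b):
    """Classic PERT (beta-distribution) approximation of a task's
    duration: mean and variance from optimistic a, most likely m,
    and pessimistic b estimates."""
    mean = (a + 4.0 * m + b) / 6.0
    variance = ((b - a) / 6.0) ** 2
    return mean, variance

mean, variance = pert_estimate(2.0, 5.0, 14.0)
```

The paper's cubic equation connecting variance, expectation, and mode is aimed at improving on exactly this kind of rough estimate.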
Reference:
Toropov B.A..
Game-theoretic centrality of graph nodes based on the Shapley vector
// Software systems and computational methods.
2017. № 2.
P. 45-54.
DOI: 10.7256/2454-0714.2017.2.22647 URL: https://en.nbpublish.com/library_read_article.php?id=22647
Abstract:
The object of the study is methods for evaluating graph nodes. The author notes that the existing centrality metrics, such as degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, etc., are sometimes not suitable for modeling situations in which graph nodes model social objects and are therefore capable of cooperating to achieve social goals. In this case, game-theoretic centrality models (such as coalitional game models) better reflect the modeled object. The methodology of the study involves elements of graph theory and probability theory, as well as the apparatus of social network analysis as a new independent scientific field. The key result of the study is that game-theoretic centrality based on the Shapley value is a flexible and largely universal instrument for social graph analysis. It makes it possible to take into account an unlimited set of qualitative characteristics of the graph nodes, as well as their topological properties, in any combination when evaluating the nodes.
Keywords:
algorithm, permutation, Shapley vector, coalition games, group centrality, level centrality, node centrality, social network, social graph, graph
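A minimal sketch of the idea above (the coverage game is one common choice in the game-theoretic centrality literature, not necessarily the one used in this article): each coalition of nodes is valued by how many nodes it covers together with its neighbours, and a node's Shapley value is its average marginal contribution over all join orders.

```python
from itertools import permutations

def coverage(group, adj):
    """Coalition value: how many nodes the group plus its neighbours cover."""
    covered = set(group)
    for v in group:
        covered.update(adj[v])
    return len(covered)

def shapley_centrality(adj):
    """Exact Shapley value of each node for the coverage game, averaged
    over all join orders (feasible only for tiny graphs; larger graphs
    would use permutation sampling)."""
    nodes = list(adj)
    phi = dict.fromkeys(nodes, 0.0)
    orders = list(permutations(nodes))
    for order in orders:
        current = []
        for v in order:
            phi[v] += coverage(current + [v], adj) - coverage(current, adj)
            current.append(v)
    return {v: phi[v] / len(orders) for v in nodes}

# Star graph: hub 0 with three leaves.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
phi = shapley_centrality(adj)
```

By the efficiency property the values sum to the grand coalition's worth, and the hub, which covers the whole star on its own, receives the highest centrality.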
Reference:
Grundel L.P., Biryukov V.V..
Application of fuzzy modeling functions to determine key performance indicators
// Software systems and computational methods.
2016. № 3.
P. 268-286.
DOI: 10.7256/2454-0714.2016.3.68107 URL: https://en.nbpublish.com/library_read_article.php?id=68107
Abstract:
The subject of the research is the development of key performance indicators for tax consultants; the object is the key performance indicators themselves. It is argued that the key performance indicators of the tax consulting business process measure the achievement of the company's strategic objectives, and with their help one can evaluate activities, improve the performance of tax consultants, and develop optimal approaches to the professional development of staff. It is clarified that each indicator should be: (1) clearly defined; (2) achievable; (3) comparable; (4) conducive to staff motivation; (5) a basis for analysis. In this paper, using econometric and statistical methods, as well as the Matlab «Fuzzy Logic» toolbox, the performance indicators of tax consultants are decomposed under the Balanced Scorecard and considered by categories: (1) finance; (2) markets and customers; (3) business processes; (4) training and development. The evaluation parameter "Finance" is decomposed into several input parameters: (1) the level of income; (2) the level of costs; (3) the level of intangible assets (goodwill). The parameter "Markets and customers" is decomposed into the following inputs: (1) the level of customer savings (the value created for the customer); (2) the level of image and reputation; (3) the quality of service (compliance with the law, the level of efficiency); (4) customer acquisition; (5) customer retention. The parameter "Business processes" is decomposed into the following inputs: (1) maintaining the level of competence (knowledge of the legislation, industry knowledge, experience); (2) the level of lobbying for the interests of taxpayers; (3) the effectiveness of internal quality control; (4) the level of understanding of customer needs and the effectiveness of communication with customers; (5) the effectiveness of internal information exchange; (6) the level of compliance with the requirements of the services market; (7) the level of costs. The parameter "Training and development" is decomposed into the following input parameters: (1) the level of search and recruitment of professional staff; (2) the professional qualifications of staff; (3) quality control and knowledge management; (4) the level of alignment of corporate and personal goals. Linguistic variables are evaluated for the "Finance" indicator, and a set of fuzzy inference rules is constructed to solve the problem. The question of applying fuzzy modeling functions to the selection of key performance indicators is considered.
Keywords:
fuzzification of input variables, fuzzy inference rules, linguistic terms, estimation parameters, fuzzy modeling function, key performance indicators, accumulation, defuzzification, fuzzification, aggregation of data
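The fuzzification step mentioned above can be sketched in a few lines (the linguistic terms, their ranges, and the input below are hypothetical, not the ones from the article): a crisp indicator value is mapped to degrees of membership in linguistic terms via triangular membership functions.

```python
def tri(x, a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms for an "income level" input scaled to 0..100.
terms = {"low": (0, 25, 50), "medium": (25, 50, 75), "high": (50, 75, 100)}

def fuzzify(x):
    """Degree of membership of a crisp input in each linguistic term."""
    return {name: tri(x, *abc) for name, abc in terms.items()}

memberships = fuzzify(40.0)
```

These membership degrees then feed the fuzzy inference rules and, after aggregation, a defuzzification step such as the centroid method.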
Reference:
Zaretskiy A.P., Kuleshov A.P., Gromyko G.A..
Application of Gaussian functions to Mathematical Modeling of Endocardial Signals
// Software systems and computational methods.
2016. № 1.
P. 49-57.
DOI: 10.7256/2454-0714.2016.1.67597 URL: https://en.nbpublish.com/library_read_article.php?id=67597
Abstract:
The subject of the research is mathematical models of endocardial signals from the main electrophysiological parts of the heart with specified amplitude-time characteristics of their informative fragments. The authors propose a mechanism for extending the mathematical models in order to generate normal and/or pathological states of the atrioventricular conduction system carrying endocardial electrical impulses. The article contains the results of a comparison of modeled and actual endocardial signals recorded in the course of a minimally invasive electrophysiological examination. These results demonstrate that the designed models are adequate and applicable for modeling endocardial signals from different parts of the heart. The research method is mathematical modeling using Gaussian functions that approximate given elements of the endocardial signal from different parts of the intracardiac space. The main conclusions of the research are the following: the authors have shown that Gaussian functions are applicable for the aforesaid purposes; they have described possible modifications of the functions used for modeling signals from other endocardial regions, such as the pulmonary vein entry of the left atrium, the mitral-aortic zone, and other zones of particular interest to clinical electrophysiologists; and they have demonstrated how the research results can be implemented in hardware-software complexes using modern methodologies for assessing the efficiency of treating complex heart rhythm disorders.
Keywords:
radio frequency ablation, asymmetrical function, heart conduction system, electrophysiological study, Gaussian functions, endocardial signals, atrial fibrillation, heart process modeling, atrioventricular conduction
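The modeling approach above amounts to synthesizing a signal fragment as a sum of Gaussian components; a minimal sketch follows (the amplitudes, centres, and widths are purely illustrative, not clinical parameters from the paper):

```python
import math

def gaussian(t, amp, mu, sigma):
    """Single Gaussian component: amplitude amp, centre mu, width sigma."""
    return amp * math.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))

def model_signal(t, components):
    """Signal fragment modeled as a sum of Gaussian components."""
    return sum(gaussian(t, *c) for c in components)

# Two illustrative components (amplitude, centre in s, width in s).
components = [(1.0, 0.10, 0.01), (-0.5, 0.15, 0.02)]
value_at_peak = model_signal(0.10, components)
```

Asymmetry of real deflections, which the keywords hint at, would be obtained by using different widths on the two sides of each peak or by adding skewed components.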
Reference:
Mustafaev A.G., Mustafaev G.A..
Mathematical modeling and numerical calculations of resonant tunneling effect
// Software systems and computational methods.
2016. № 1.
P. 58-63.
DOI: 10.7256/2454-0714.2016.1.67598 URL: https://en.nbpublish.com/library_read_article.php?id=67598
Abstract:
The research is devoted to one of the physical effects of nanoelectronics: resonant tunneling. The authors provide numerical calculations for constructing a diode based on an MIS structure and model its characteristics. A metal-insulator-semiconductor structure on silicon in the deep depletion mode next to the doped semiconductor has a similar structure. The authors construct the energy band diagram of the MIS structure, determine the energy levels and wave functions of an electron in the quantum well and during tunneling, and calculate the probability of tunneling as a function of the applied voltage. In the course of the calculations the authors use the PTC Mathcad Prime 3.1 visual environment for mathematical modeling and technical computing. The results of the computer modeling make it possible to determine the limiting external voltage, including the voltage that leads to dielectric breakdown. In addition, the authors establish the qualitative dependence of the height and width of the energy barrier on the voltage across the MIS structure. The model developed by the authors takes into account the joint influence of several factors, which is confirmed by the agreement of the computed current-voltage characteristic with the experimental ones.
Keywords:
nanoelectronics, wave function, MIS structure, energy band diagram, quantum well, modeling, resonant tunneling diode, semiconductor device, quantum transport, quantum effect
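As a baseline for the tunneling-probability calculations described above, here is the textbook transmission coefficient through a single rectangular barrier (a simplification: the paper's MIS structure and resonant double-barrier case are more involved; the particle energy, barrier height, and width below are illustrative):

```python
import math

def transmission(E, V0, a, m=9.109e-31, hbar=1.055e-34):
    """Tunneling probability through a rectangular barrier of height V0
    and width a for a particle of energy E < V0 (all SI units)."""
    kappa = math.sqrt(2.0 * m * (V0 - E)) / hbar
    s = math.sinh(kappa * a) ** 2
    return 1.0 / (1.0 + V0 ** 2 * s / (4.0 * E * (V0 - E)))

eV = 1.602e-19                                   # joules per electron-volt
T = transmission(0.5 * eV, 1.0 * eV, 1.0e-9)     # 0.5 eV electron, 1 eV / 1 nm barrier
```

As expected, the probability grows rapidly as the barrier thins, which is the qualitative voltage dependence the authors investigate.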
Reference:
Myl'nikov L.A., Fayzrakhmanov R.A..
The role of simulation and computer software systems in management decision-making during the realization of trade innovative projects in production and economic systems
// Software systems and computational methods.
2015. № 4.
P. 390-406.
DOI: 10.7256/2454-0714.2015.4.67456 URL: https://en.nbpublish.com/library_read_article.php?id=67456
Abstract:
The subject of the research is the decision support process in production and economic systems arising in the implementation of trade innovative projects, together with the related economic-mathematical models and management methods. The object of the research is the production and economic system of an industrial enterprise focused on the implementation of trade innovative projects. Special attention is paid to the design and use of information systems based on simulation modeling for decision support and to studies of trade innovative projects at different stages of development. The study is based on the theory and methodology of decision-making and the management of social and economic systems, the provisions of management theory, general systems theory, control theory, system analysis and operations research, methods of economic-mathematical modeling and simulation, statistical analysis, and micro-economic forecasting. A special contribution of the study is the demonstration that decision support in the management of trade innovative projects in production and economic systems requires the use of simulation modeling techniques based on econometric models and methods involving information systems. Requirements for the structure of an information system addressing the emerging management challenges are also developed.
Keywords:
management, production system, software, production planning, innovation project, algorithm, decision support, simulation model, method, mathematical model
Reference:
Mironova M.M., Kulifeev Yu.B..
Simulation software package for controlling the regimes of horizontal flight of an unmanned aircraft
// Software systems and computational methods.
2015. № 3.
P. 293-310.
DOI: 10.7256/2454-0714.2015.3.67273 URL: https://en.nbpublish.com/library_read_article.php?id=67273
Abstract:
The subject of the study is the optimization of the control of the flight speed and altitude of an aircraft implementing coordinated deflection of the controls, using special algorithmic software for flight control and navigation systems of automatic control. The authors solve the problem of creating a simulation software package for controlling the regimes of horizontal flight of an unmanned aircraft, which makes it possible to validate (to carry out a comprehensive study over a wide range of conditions and restrictions) the control algorithms for altitude and airspeed within an integrated control system. The results are presented for an adaptive algorithm for controlling the speed and altitude of a fixed-wing UAV and have broad applicability. The research methodology includes methods of mathematical modeling, control of complex systems, inverse problems of dynamics, system simulation, adaptive control, flight dynamics, and aerodynamics. The main result of the study is a software package for simulating the control of the regimes of horizontal flight of an unmanned aircraft. It has been used to validate a level-flight control algorithm for an unmanned aircraft that increases the responsiveness of the speed and altitude control loops and reduces fuel consumption by driving the current speed closer to its optimal value and bringing the aircraft to the optimum altitude.
Keywords:
speed control loop, autothrottle, longitudinal control channel, flight control and navigation system, unmanned aerial vehicle, flight control, flight control algorithm, flight simulation, the inverse problem of dynamics, modeling software package
Reference:
Oleynikova S.A..
Recursive numerical method for the experimental evaluation of the distribution law of the duration of the project in network planning and management tasks
// Software systems and computational methods.
2015. № 1.
P. 69-78.
DOI: 10.7256/2454-0714.2015.1.66222 URL: https://en.nbpublish.com/library_read_article.php?id=66222
Abstract:
In this paper, a problem of network planning and management with random durations of individual operations is considered. The subject of the study is the distribution law of the random variable describing the duration of the project; the aim is to estimate this law. The urgency of this problem stems from the need to improve the accuracy of the known existing estimates, which do not take into account the specifics of the distribution laws of the individual jobs determining the project. The main difficulty in the practical solution of this problem is the need to calculate a multiple definite integral in which the number of individual integrals is not known in advance and is determined by the number of jobs that make up the critical path of the project. As a result, a numerical method based on recursion is proposed, which makes it possible to estimate the desired distribution law numerically. The scientific novelty of the results consists in obtaining an estimate of the distribution law of the project duration that improves on the accuracy of existing analogues. Without loss of generality, the developed recursive algorithm can be used for a wide class of problems in which the distribution of a sum of random variables with known distributions of the individual terms is unknown.
Keywords:
project management, the sum of beta-values, beta-distribution, distribution law, duration of the project, probabilistic and temporal characteristics, mathematical model of risks, PERT, recursion, numerical method
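The core numerical task above, recovering the distribution of a sum of independent random variables from the distributions of the terms, reduces to repeated convolution of densities; a minimal discrete sketch follows (uniform densities are used for illustration rather than the beta densities of the paper, because the result, a triangular density, is easy to check):

```python
def convolve_density(f, g, h):
    """Discrete approximation of the density of X + Y from densities
    f and g sampled on a uniform grid with step h."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj * h
    return out

# Uniform(0,1) + Uniform(0,1) -> triangular density peaking at x = 1.
h = 0.01
u = [1.0] * 101                  # density of Uniform(0,1) on the grid
s = convolve_density(u, u, h)    # density of the sum
peak = max(s)
```

Applying such a convolution recursively, once per job on the critical path, yields a numerical estimate of the project-duration density, which is the idea behind the recursive method of the paper.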
Reference:
Chernukha V.N., Kasterskiy S.M., Aprel’skiy E.N., Zamyatin V.G., Kurenkov A.S..
Mathematical modeling of the pressure regulator in the aircraft cabin
// Software systems and computational methods.
2014. № 4.
P. 472-483.
DOI: 10.7256/2454-0714.2014.4.65865 URL: https://en.nbpublish.com/library_read_article.php?id=65865
Abstract:
The authors review in detail the pneumatic pressure regulator, which is of essential value for the on-board oxygen supply systems of aircraft, in order to build a mathematical model of the regulator. The subject of the study is the construction of a mathematical model of the pneumatic pressure regulator that takes into account factors having a significant impact on the vital functions and work of the flight crew. This ensured the construction of adequate mathematical models describing the functioning of the pneumomechanical pressure regulator in the aircraft cabin, which is essential for safety in high-altitude flights. The mathematical modeling of the pressure regulator considers factors having a significant impact on the vital functions and work of the flight crew: factors that characterize the atmosphere as a habitat for the crew; factors related to the dynamics of flight; and factors related to various types of emergencies in which continued flight becomes impossible. The study of the mathematical model allowed the authors to establish that: dry friction of the main valve significantly impairs the dynamics of the regulator but has almost no effect on its static error; dry friction of the control valve significantly increases the static error of the regulator but has almost no effect on its dynamics; and the presence of self-oscillations of small amplitude and high frequency, together with the elimination of the dead zone of the regulator, changes the structure of the dry friction, removing static dry friction, etc. The results obtained are essential for the design of aircraft life support systems.
Keywords:
life support system, high-altitude flights, aviation cybernetics, pneumatic mechanical regulator, pressure regulator, model of dry friction, model of pneumatic regulator, computational experiment, transition functions, oxygen supply system
Reference:
Egoshin A.V., Motorov M.N..
Coordinate calculation in GPS and GLONASS navigation systems based on measuring the arrival time of satellite signals
// Software systems and computational methods.
2014. № 2.
P. 217-227.
DOI: 10.7256/2454-0714.2014.2.65264 URL: https://en.nbpublish.com/library_read_article.php?id=65264
Abstract:
There are two satellite radio navigation systems in use: NAVSTAR (GPS) and GLONASS. Both systems use the same approach: determining the distance from satellites to the object. Distance is measured via the propagation time of the satellite signal to the object. For that purpose the receiver generates a pseudorandom code at exactly the moment when the satellite transmits the signal; on receiving the signal, the receiver calculates the propagation time as the difference between the time of pseudorandom code generation and the time of signal reception. This raises the need to synchronize the clocks of the satellite and the receiver, and due to hardware limitations not all receivers can be synchronized with the satellite's clock. The method presented in the research is based on an analysis of radio navigation systems, their principles of operation, and existing techniques for determining an object's coordinates. The article proposes a new way of determining the coordinates, based on measuring the difference in the arrival times of signals from different satellites. Because of this there is no need for strict synchronization of the receiver clock with the satellite clocks, for detecting the moment of signal transit from satellite to receiver, or for the use of corrective systems.
Keywords:
GPS, delay, trilateration, coordinate determination, satellite navigation, GLONASS, corrective systems, radio navigation, WAAS, range difference method
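The range-difference idea above can be sketched in 2D (a toy setup, not the article's method: station positions and the true position are made up, the propagation speed is taken as 1, and a plain grid search stands in for a proper hyperbolic solver): each time difference of arrival constrains the receiver to a hyperbola, and the position is the point that best matches all measured differences.

```python
import math

def tdoa_locate(stations, tdoas, ref=0, step=0.5, extent=100.0):
    """Range-difference (TDOA) positioning by grid search: find the
    point whose range differences to the reference station best match
    the measured ones (propagation speed taken as 1)."""
    def rng(p, s):
        return math.hypot(p[0] - s[0], p[1] - s[1])
    best, best_err = None, float("inf")
    steps = int(extent / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (i * step, j * step)
            err = sum((rng(p, stations[k]) - rng(p, stations[ref]) - d) ** 2
                      for k, d in tdoas.items())
            if err < best_err:
                best, best_err = p, err
    return best

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_pos = (30.0, 70.0)
def r(s): return math.hypot(true_pos[0] - s[0], true_pos[1] - s[1])
tdoas = {k: r(stations[k]) - r(stations[0]) for k in (1, 2, 3)}
est = tdoa_locate(stations, tdoas)
```

Note that only differences of ranges enter the residual, so a constant receiver clock offset cancels out, which is exactly why the method needs no strict clock synchronization.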
Reference:
Minitaeva A.M..
Mathematical model of the operator of human-computer system
// Software systems and computational methods.
2014. № 1.
P. 70-79.
DOI: 10.7256/2454-0714.2014.1.64045 URL: https://en.nbpublish.com/library_read_article.php?id=64045
Abstract:
The development of local vehicle automation systems in the late XX and early XXI centuries, along with the increasing use of microprocessors, created the prerequisites for vehicle control based on the personal characteristics of the operator. When creating a new control algorithm on this basis it is rational to apply Feldbaum's dual control theory, which uses the capacity of the control system to study the specifics of the control actions of a particular operator and to take them into account, in correlation with the environment, when selecting an algorithm for the control system. This becomes possible not only in simulators, but also by adjusting the variables of the operator's actions directly. However, the operator is represented by a model in which not every observed variable is measurable. The author also notes that, according to modal theory, the basis for simulation is taking the dominant roots into account in view of the main psychophysiological features of the operator and the significant nonstationarity of the operator's main parameters. This becomes possible as a result of procedures of on-line dynamic identification of the operator's output variables during the control process. Taking into account the complexity of the operator's inner processes and using decomposition, the initial steps of the description may be simplified to forming sub-models in the operator subsystem: the person; the inner model of the object being controlled; and the inner model of the environment and its state. In order to solve the problem of creating a complex human-computer interface, the author suggests using Feldbaum's dual control principle. Work of this kind is based on the capabilities provided by existing automation devices and computer equipment. The novelty and prospects of the suggested approaches are defined by the results expected from the development and implementation of the system: 1) the creation of an automated information system for the control and diagnostics of the psychological and physiological safety of computer operators, consisting of a set of measures such as preventive measures, monitoring lapses and deviations in the health of the operators, and the organization of medical examinations with emphasis on psychological and physiological factors; 2) the building of control computer systems with artificial intelligence and devices for information exchange via the human-machine interface, for the intellectualization of the interaction between man and computer.
Keywords:
dual control principle, human-computer interface, operator, mathematical model, ergatic control system, artificial intelligence, human-computer system, multidimensional scaling unit, executive system, base of knowledge
Reference:
Shumskii L.D..
Semantic tracing of information processes
// Software systems and computational methods.
2014. № 1.
P. 80-92.
DOI: 10.7256/2454-0714.2014.1.64046 URL: https://en.nbpublish.com/library_read_article.php?id=64046
Abstract:
The questions of using formal modeling tools for simulating processes of different categories (information business processes in particular) are currently receiving a great deal of attention. However, in general the modeling is based on graph and network solutions built on state diagrams, such as Petri nets, graphs representing network chains, and other document-oriented models such as UML or IDEFx. The purpose of this article is to present achievements in the theory of business processes oriented toward models that can be applied in stricter symbolic systems, giving the opportunity to automate model validation, property detection, and the connection between the model and the technological means of its implementation. The author suggests using the pi-calculus as a symbolic means of process simulation. In that formal model a process is represented as a term of the calculus, and its execution is described by the reduction of this term in accordance with the selected semantics. This calculus was developed to describe the interaction of several systems within the framework of processes with changeable structure. The article proposes a distinctive approach to the description of tracing: the author shows a way of making process execution logs, combines them into journals, and reviews general requirements for the journaling process. The article also describes the application of process mining to processes modeled using the pi-calculus. Implementation of the proposed approaches to process modeling and execution tracing provides, compared with the analogues, more abilities to evaluate the adequacy and accuracy of the model built, simplifies system expansion via the addition of new criteria, and eases the keeping and interpretation of execution logs of the process corresponding to the model.
Keywords:
process trace, business process modeling, process mining, pi calculus, lambda calculus, ABC, interactive system calculus, business process execution semantics, formal model interpretation, model evaluation
Reference:
Denisenko V.A., Sotskov V.A..
Development of a parallel implementation of the modified Frank-Lobb algorithm for studying the conductivity of a defective 2D lattice with bond division
// Software systems and computational methods.
2013. № 4.
P. 363-369.
DOI: 10.7256/2454-0714.2013.4.63911 URL: https://en.nbpublish.com/library_read_article.php?id=63911
Abstract:
Percolation theory has studied in sufficient detail the node problem, the bond problem, and the mixed percolation problem. However, several experimental processes show variation in the probabilities of horizontal and vertical bonds in a lattice structure with defects. In real physical systems such processes may occur, for example, when a conductive material is sprayed onto an inclined surface, or during the gradual solidification of an insulating matrix containing charged microparticles of a conductor placed in an electric or magnetic field, etc. In addition, we can expect that the presence of various defects in the structure affects both the mechanical and the electrical properties of materials. Unfortunately, due to considerable experimental difficulties it is not always possible to determine the exact quantitative relationship between the number of defects and the physical parameters. Modeling the relation between physical parameters and the number of defects for anisotropic bonds is an important scientific problem. The number of such problems is large, and their numerical solution can be of great practical value. The aim of this work is the computer simulation of the combined problem of nodes and bonds with separate probabilities of formation of horizontal and vertical bonds and the possibility of adding Schottky defects to the lattice. The results of the research are the numerical dependencies of the conductivity G of the 2D square lattice on the probability of a vertical bond P1, the probability of a horizontal bond P2, and the number of defects N.
Keywords:
Software, percolation, conductivity, modeling, cluster, high-performance computing, OpenMP, MPI, parallel computing
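A much simpler cousin of the problem above can be sketched to show the anisotropic-bond setup (this checks only whether a spanning cluster exists via union-find; the Frank-Lobb algorithm itself computes the conductivity by star-triangle lattice reductions, which is well beyond this sketch, and the lattice size and probabilities are illustrative):

```python
import random

def percolates(n, p_h, p_v, seed=0):
    """Does an n x n bond lattice, with horizontal bonds open with
    probability p_h and vertical bonds open with probability p_v,
    contain a top-to-bottom path of open bonds?"""
    rnd = random.Random(seed)
    parent = list(range(n * n + 2))      # plus two virtual terminals
    top, bottom = n * n, n * n + 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for r in range(n):
        for c in range(n):
            v = r * n + c
            if r == 0:
                union(v, top)            # first row touches the top bus
            if r == n - 1:
                union(v, bottom)         # last row touches the bottom bus
            if c + 1 < n and rnd.random() < p_h:
                union(v, v + 1)          # open horizontal bond
            if r + 1 < n and rnd.random() < p_v:
                union(v, v + n)          # open vertical bond
    return find(top) == find(bottom)
```

Sweeping p_h and p_v independently with such a model reproduces the anisotropic setting of the paper, where the two bond probabilities are varied separately.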
Reference:
Grishentsev A.Yu., Korobeinikov A.G..
Problem definition for optimization of distributed computing systems
// Software systems and computational methods.
2013. № 4.
P. 370-375.
DOI: 10.7256/2454-0714.2013.4.63912 URL: https://en.nbpublish.com/library_read_article.php?id=63912
Abstract:
The article describes a model and a problem definition for the optimization of distributed computing systems. The results of the study are in good agreement with Amdahl's law and, together with game theory and optimization, allow finding the most successful solutions in terms of the efficient use of computing resources when designing or upgrading distributed computing systems. The article discusses a stream model of distributed computing systems in continuous time. The disadvantage of this model is that it can simulate only stream-oriented distributed computing systems, while the case of transferring blocks of data requires a discrete-time system model. Modern distributed computing systems may contain multiple separate computing units linked through a communications network and located in different parts of the Earth and near-Earth space. The authors therefore review a block model of a distributed computing system in discrete time. Such a model allows examining both stream and block data processing and accounts for the delays needed for data synthesis and transfer. The solution of the optimization task can be found by sequential search with the application of game theory and optimization over computational tasks and resource nodes.
Keywords:
threading model, distributed computing system, optimization, Amdahl's law, discrete time system, directed graph, graph node, block model of distributed computing systems, computation channel time, game theory
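Since the abstract checks its results against Amdahl's law, the law itself is worth stating concretely; a minimal sketch (the fraction and unit count are illustrative):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Amdahl's law: overall speedup when a fraction of the work is
    parallelized across n_units identical computing units."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

speedup = amdahl_speedup(0.9, 8)     # 90 % parallel work on 8 units
limit = amdahl_speedup(0.9, 10**9)   # asymptote: 1 / serial fraction
```

The asymptote shows why the serial fraction, which in distributed systems includes communication and synchronization delays, dominates the achievable speedup no matter how many nodes are added.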
Reference:
Galanina N.A., Dmitriev D.D., Akhmetzyanov D.I..
Goertzel algorithm for signals spectral analysis
// Software systems and computational methods.
2013. № 4.
P. 376-383.
DOI: 10.7256/2454-0714.2013.4.63913 URL: https://en.nbpublish.com/library_read_article.php?id=63913
Abstract:
The article presents the results of a software implementation of the Goertzel algorithm for determining the phase shift between two sinusoidal signals. The authors show a practical application of the Goertzel algorithm for calculating the active component R of the complex impedance Z of a filter. It is noted that the Goertzel algorithm is implemented in the form of a second-order filter with infinite impulse response (an IIR filter) with two real feedback coefficients and one complex coefficient in the forward path. The modeling was carried out in the Mathcad computer algebra system. The article states that the Goertzel algorithm efficiently calculates fixed spectral bins of the discrete Fourier transform without computing the full transform. The authors point out that the Goertzel algorithm proved to be effective for the computation of spectral components at high sampling rates with correct calculation of the sample counts. In Mathcad the authors built a model demonstrating the working capacity of the algorithm; using this model, the optimal parameters for its further practical implementation were found.
Keywords:
Goertzel algorithm, phase shift, sinusoidal signals, complex impedance, feedback, forward path, Mathcad, modeling, sampling, fixed spectral samples
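The structure described in the abstract (a second-order IIR recursion with two real feedback coefficients, finished by one complex multiplication in the forward path) can be sketched in a few lines of Python; the variable names are ours, not the authors':

```python
import cmath
import math

def goertzel(x, k):
    """Compute the k-th DFT sample of the sequence x with the
    Goertzel algorithm: a second-order IIR recursion with two real
    feedback coefficients, finished by one complex multiplication."""
    n = len(x)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)      # the two real feedback taps are coeff and -1
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # complex coefficient in the forward path, plus a phase correction
    # so the result equals the exact DFT bin value
    return (s_prev - cmath.exp(-1j * w) * s_prev2) * cmath.exp(1j * w)
```

Unlike a full FFT, this costs O(N) per bin with no bit-reversal or twiddle-factor tables, which is why it pays off when only a few fixed spectral samples are needed.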
Reference:
Korobeinikov A.G., Ismagilov V.S., Kopytenko Yu.A., Ptitsyna N.G.
Measuring systems for magnetic fields of electric cars for electromagnetic safety analysis
// Software systems and computational methods.
2013. № 4.
P. 384-396.
DOI: 10.7256/2454-0714.2013.4.63914 URL: https://en.nbpublish.com/library_read_article.php?id=63914
Abstract:
the electrification of road transport is currently one of the priority areas in science, technology and engineering. One of the most complicated tasks for all manufacturers of electric vehicles is the problem of ensuring the electromagnetic safety of car users and the electromagnetic compatibility of all devices located in the vehicle. In addition, there is concern among citizens and the media about possible health risks and traffic-safety issues due to the effects of electromagnetic fields generated by strong currents in the power lines and cables of electric vehicles. It is also noted that such currents and the magnetic fields they generate may pose a risk to the electromagnetic compatibility of the various electrical and electronic equipment of the electric car. In this regard, measuring and evaluating magnetic fields, along with determining their topology in the electric car in real time, is a highly important task. The article presents a comparative analysis of methods for the detection of magnetic fields in an electric car appropriate to the identified specific features of these fields. The authors review the task of defining the main characteristics of the magnetic field in the electric car. Based on these characteristics, the authors conclude that the most promising magnetic field sensors for the purposes of electromagnetic safety in the electric car are traditional geophysical magnetostatic sensors and modern sensors based on giant magnetoimpedance.
Keywords:
magnetic field, magnetic field detectors, measuring of the magnetic field, road transport, hybrid car, electric car, electromagnetic fields, ecology, electromagnetic safety
Reference:
Korobeinikov A. G., Ismagilov V. S., Kopytenko Yu. A., Petrishchev M. S.
Processing of experimental studies of the Earth crust geoelectric structure based on the analysis of the phase
velocities of extra-low-frequency geomagnetic variations
// Software systems and computational methods.
2013. № 3.
P. 295-300.
DOI: 10.7256/2454-0714.2013.3.63835 URL: https://en.nbpublish.com/library_read_article.php?id=63835
Abstract:
the article presents the results of an experimental study of the geoelectric structure of the Earth's crust carried out in Karelia. Simultaneous measurements of extra-low-frequency variations of the Earth's electromagnetic field were made in August 2012 near Tolvuya village (Karelia) to determine the depth of crustal conductivity anomalies. This exact place was chosen because in this area highly conductive shungite rocks outcrop on the Earth's surface. For the research, five highly sensitive three-component geomagnetic-variation systems GI–MTS-1 were placed at a distance of 10.5 km from each other. The data was recorded at a frequency of 50 Hz. For all five systems the data processing was performed in two ways, using the magnetotelluric method (DOLE) and phase-gradient sensing, to analyze the changes in apparent resistivity with increasing depth. The phase-gradient sensing method was developed in the St. Petersburg branch of the N. V. Pushkov Institute of Terrestrial Magnetism, Ionosphere and Radio Wave Propagation of the Russian Academy of Sciences and requires at least three geomagnetic-variation systems placed in a triangle on the Earth's surface. Comparison of the interpretation results of the magnetotelluric method and the phase-gradient sensing method showed good agreement.
Keywords:
geoelectrical structure, Earth crust, extra-low-frequency variations, highly conductive shungite rock, stationary recording magnetometer, registration site, geomagnetic perturbation, medium resistivity, phase velocity, conductive bed
Reference:
Burda A. G., Metelskaya E. A.
Mathematical modeling of expanded reproduction processes and computational experimentation in
manufacturing parameters of peasant (farmers') farms at different rates of accumulation
// Software systems and computational methods.
2013. № 3.
P. 285-294.
DOI: 10.7256/2454-0714.2013.3.63836 URL: https://en.nbpublish.com/library_read_article.php?id=63836
Abstract:
the article shows mathematical approaches to modeling expanded reproduction processes in peasant (farmers') farms, formulates the optimization task of defining rational parameters in crop farming, and presents a symbolic economic-mathematical model for optimizing peasant farm parameters that reflects production, economic and technological conditions and requirements: on the use of labor and land resources, on crop rotations, on defining volumes of production and sales in physical and value terms, on the demand for basic production assets, on determining their wear and calculating the amount of their depreciation, and on insurance premiums and payments on short-term and long-term loans. The authors provide the results of computational experiments on six variants of solving the optimization problem, each representing a relatively independent task reflecting a new production and economic situation. The article defines the optimal parameters of a particular farm at various rates of accumulation, shows calculations of the transition period to the optimal variant of productive resource use, and makes suggestions on the use of economic-mathematical methods and models and on carrying out computer experiments in economics and farm management on an expanded-reproduction basis.
Keywords:
mathematical model, production parameters, computational experiment, the rate of accumulation, optimization, farm, expanded reproduction, task, optimality criterion, objective function
Reference:
Olzoeva S.I.
The method of automated classification in the distributed system modeling
// Software systems and computational methods.
2013. № 2.
P. 160-163.
DOI: 10.7256/2454-0714.2013.2.63020 URL: https://en.nbpublish.com/library_read_article.php?id=63020
Abstract:
the article deals with the problem of decomposing simulation models for organizing distributed modeling of complex systems on multiprocessor computing systems. To speed up modeling, the modeling program must be divided into parts for parallel execution on different processors. The author offers an approach that uses an automatic classification method to automate the parallelization of simulation programs.
Keywords:
Software, simulation modeling, complex systems, multi-processor computing systems, decomposition of simulation model, automatic classification, distributed modeling, acceleration of computation, parallel computing, modeling programs.
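The abstract above describes dividing a modeling program into parts for parallel execution on different processors. The paper's own classification method is not reproduced here, but a toy load-balancing sketch (entirely our illustration, using the classic longest-processing-time heuristic) shows the kind of partitioning involved:

```python
import heapq

def partition_blocks(costs, n_procs):
    """Toy illustration (not the author's method): greedily assign
    simulation-model blocks, heaviest first, to the currently least
    loaded processor (the LPT heuristic)."""
    # heap entries: (current load, processor id, assigned block indices)
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, p, blocks = heapq.heappop(heap)  # least loaded processor
        blocks.append(i)
        heapq.heappush(heap, (load + costs[i], p, blocks))
    return sorted(heap, key=lambda t: t[1])    # order by processor id
```

A real decomposition would also weigh inter-block communication, which is precisely what makes automatic classification of model components attractive.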
Reference:
Olzoeva A.G.
Issues of building computation clusters
// Software systems and computational methods.
2013. № 2.
P. 164-167.
DOI: 10.7256/2454-0714.2013.2.63021 URL: https://en.nbpublish.com/library_read_article.php?id=63021
Abstract:
the article is devoted to the topical issue of developing education in the field of supercomputer systems; it raises questions and suggests constructive solutions for building multiprocessor computation clusters as the initial phase in realizing a program for training specialists in the area of high-performance computing. The author gives an example of building a cluster on specific equipment and shows the results of testing the cluster while running a program that models a distributed automated control system.
Keywords:
Software, supercomputing technology, computing clusters, high performance computing, parallel programming, specialization on parallel computing, mathematical modelling, numerical experiment, mass parallelism.
Reference:
Olzoeva S.I., Telina S.V., Balbarova D.G.
The method of calculation of buffer storage capacity in
the design of commutation systems
// Software systems and computational methods.
2013. № 2.
P. 168-170.
DOI: 10.7256/2454-0714.2013.2.63022 URL: https://en.nbpublish.com/library_read_article.php?id=63022
Abstract:
the article describes a method for calculating the buffer storage capacity of the switching nodes of a computer network that does not require setting the exact distribution functions of the incoming message flow and of its processing duration. The proposed method makes it possible to calculate the required buffer storage capacity using only the average values of the intensity and processing duration of the incoming flow of messages.
Keywords:
Software, computer networks, switching systems, the probability of message loss, buffer storage, information packets, the intensity of flows, packet handling, overflow, rectangular pulses.
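The abstract states that the buffer size can be chosen from average intensity and processing duration alone. As a loosely related illustration (not the authors' formula), an M/M/1-style sketch that picks the smallest buffer whose overflow tail stays below a target loss probability:

```python
import math

def buffer_size(arrival_rate, service_time, p_loss):
    """Illustrative sketch (not the paper's exact method): choose the
    smallest buffer size K such that, in an M/M/1 approximation built
    only from the average arrival intensity and service duration, the
    tail probability rho**K of exceeding K queued packets stays below
    the allowed loss probability p_loss."""
    rho = arrival_rate * service_time  # offered load, must be < 1
    if not 0 < rho < 1:
        raise ValueError("offered load must lie in (0, 1)")
    return math.ceil(math.log(p_loss) / math.log(rho))
```

At half load (rho = 0.5) a loss target of 0.1% already requires a buffer of 10 packets; the requirement grows rapidly as rho approaches 1.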
Reference:
Sidorkina I.G., Shumkov D.S.
Piecewise-linear approximation in solving problems of retrieving
data
// Software systems and computational methods.
2013. № 2.
P. 171-175.
DOI: 10.7256/2454-0714.2013.2.63023 URL: https://en.nbpublish.com/library_read_article.php?id=63023
Abstract:
forecasting is one of the main issues in the analysis of time series. The goal is to determine the future behavior of a time series from its known past values. In this article the authors propose a method of time series prediction based on the idea of extracting basic patterns (templates) from the input data, allowing the internal rules of the studied series to be defined. Currently one of the approaches in the field of time series prediction is Data Mining. This is due to the fact that classical methods based only on linear (ARIMA) and non-linear (GARCH) prediction models cannot provide the required accuracy. Using methods developed with this technology makes it possible to increase prediction performance and reveal hidden patterns in the studied time series.
Keywords:
Software, time series, approximation, piecewise-linear approximation, forecasting, data mining, basic patterns, templates, local extremes, algorithm.
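The keywords mention local extremes as the basis of the piecewise-linear patterns. A minimal sketch of that extraction step (our own illustration, assuming the series has no exact plateaus): the series is reduced to the indices of its turning points, between which it is approximated by straight-line segments.

```python
def turning_points(series):
    """Return the indices of the local extremes of a time series,
    plus its endpoints; connecting consecutive returned points with
    straight lines yields a piecewise-linear approximation.
    Assumes no two consecutive values are exactly equal."""
    pts = [0]
    for i in range(1, len(series) - 1):
        # a sign change of the slope marks a local extreme
        if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0:
            pts.append(i)
    pts.append(len(series) - 1)
    return pts
```

Matching the segment shapes between extremes against a library of templates is one simple way to realize the pattern-based prediction the abstract describes.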