Pekunov V.V. —
Object-transactional Extension of Cilk++
// Software systems and computational methods. – 2022. – № 3.
– P. 28 - 34.
DOI: 10.7256/2454-0714.2022.3.38823
URL: https://en.e-notabene.ru/itmag/article_38823.html
Abstract: In this paper, we consider the problem of developing compact tools that support programming in dynamic transactional memory, implying on-the-fly generation of transactional pages, for the Cilk++ language. It is argued that such an implementation requires weakened transaction isolation. The current state of the problem is analyzed. It is noted that existing solutions are rather cumbersome, although they allow working with complex data structures such as lists and trees. It is argued that new solutions should be developed in a minimalist style, based on specialized classes (generating transactional pages; implementing consistent transactional variables) combined with a small set of keywords in the spirit of Cilk++. Appropriate new solutions are proposed. New syntax elements are introduced, implemented with the language extension tools specific to the Planning C platform, and their semantics are described. It is noted that, unlike its analogues, the developed toolkit allows transactions to be declaratively assembled into a network (a network work schedule) that determines both the order of transaction execution and the available potential for parallelism. The proposed approach was tested on the problem of constructing a histogram. The successful solution, with the developed tools, of the problem of training an artificial neural network by error backpropagation and of an integer linear programming problem by the branch-and-bound method is also reported.
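The notion of a consistent transactional variable can be illustrated by a minimal sketch (the class name `TxVar`, its interface, and the version-counter conflict check are assumptions made for illustration, not the article's actual Planning C/Cilk++ API):

```cpp
#include <cassert>

// Hypothetical sketch of an optimistic transactional variable with weakened
// isolation. A transaction works on a private copy (a "transactional page")
// and its commit succeeds only if no other commit intervened (version check).
template <typename T>
class TxVar {
    T value_{};
    unsigned version_ = 0;
public:
    struct Tx {            // one transaction's view of the variable
        T local;           // private working copy
        unsigned seen;     // version observed at transaction start
    };
    Tx begin() const { return Tx{value_, version_}; }
    // Returns true if the transaction committed, false if it must retry.
    bool commit(const Tx& tx) {
        if (tx.seen != version_) return false;  // a conflicting commit happened
        value_ = tx.local;
        ++version_;
        return true;
    }
    const T& get() const { return value_; }
};
```

Weakened isolation here means a reader sees the last committed value rather than a fully serializable snapshot; a transaction that began before a conflicting commit fails its own commit and would be retried.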
Pekunov V.V. —
New built-in tools for extending the Planning C language
// Software systems and computational methods. – 2022. – № 1.
– P. 32 - 41.
DOI: 10.7256/2454-0714.2022.1.37240
URL: https://en.e-notabene.ru/itmag/article_37240.html
Abstract: In this paper, the problem of developing language extensions for Planning C (a dialect of C++) is considered. A review of existing external programs and in-language solutions that translate newly introduced constructions into output code is carried out. Based on this analysis, it is concluded that the most natural in-language solution is a combination of improved regular expressions (to detect the new constructions) with code generators based on procedural and syntactic macros. It is also advisable to use elements of direct logical programming, both in the macros and in the regular (more precisely, regular-logical) expressions. The proposed approach detects replaceable constructs more flexibly than template-based approaches and replaces them with output code more simply than approaches based on manipulating the syntax tree. The syntax and semantics of the proposed solutions are described. A preprocessing scheme is proposed in which the initial constructions are detected by scanners (groups of parameterized regular-logical expressions) and replaced with output code by deductive macromodules (with possible multiple matching). This scheme works with arbitrary input and output syntaxes and enables the rapid introduction of new constructions into Planning C, which is especially valuable, for example, when prototyping new extensions. The paper reports the successful testing of the proposed approaches on the development of a number of syntactically non-trivial extensions of Planning C.
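The scanner-plus-code-generator idea can be sketched with ordinary std::regex standing in for Planning C's regular-logical expressions (the construction `par_sum` and its expansion are invented for illustration; the real mechanism supports far richer matching and deductive macromodules):

```cpp
#include <regex>
#include <string>

// Illustrative sketch: a scanner detects a hypothetical new construction
// "par_sum(a, n)" via a parameterized regular expression and a generator
// replaces it with output code parameterized by the captured fragments.
std::string expand_par_sum(const std::string& src) {
    // Capture the two parameters of the construction.
    static const std::regex pat(R"(par_sum\(\s*(\w+)\s*,\s*(\w+)\s*\))");
    // Emit the output code, substituting the captures.
    return std::regex_replace(src, pat,
        "std::accumulate($1, $1 + $2, 0)");
}
```

The emitted code is itself only an example of "output syntax"; in the scheme described by the paper, the replacement side is produced by macromodules rather than a fixed format string.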
Pekunov V.V. —
Design of K-W-NET model of turbulence based on K-W/V2-F models with the neural network component
// Software systems and computational methods. – 2021. – № 3.
– P. 52 - 58.
DOI: 10.7256/2454-0714.2021.3.36054
URL: https://en.e-notabene.ru/itmag/article_36054.html
Abstract: The subject of this article is turbulence models built by introducing neural network components into widespread standard semi-empirical models. It is stated that this technique can significantly accelerate calculation while maintaining sufficient accuracy and stability: the neural network components are trained on data acquired with fairly accurate and advanced models and then replace or complement individual fragments of the initial models. An overview is given of the existing classical approaches to turbulence modeling, which identifies the V2-F model proposed by Durbin as one of the most advanced and therefore most promising for subsequent neural network modification. The author offers a new turbulence model based on the K-W model paired with a neural network component trained against the V2-F Durbin model. All necessary relations are provided. The properties of the resulting model are examined in a numerical experiment on the flow over a single obstacle. The results are compared with data acquired from other semi-empirical models (K-E, K-W), as well as from a direct neural network model. It is demonstrated that the proposed model, at a lower computational cost than the other models (excluding the direct neural network model, which, however, is less accurate), provides high precision, close to that of the Durbin model.
Pekunov V.V. —
Improved CPU load balancing for numerical solution of the tasks of continuous medium mechanics complicated by chemical kinetics
// Cybernetics and programming. – 2021. – № 1.
– P. 13 - 19.
DOI: 10.25136/2644-5522.2021.1.35101
URL: https://en.e-notabene.ru/kp/article_35101.html
Abstract: This article explores certain aspects of the numerical solution of problems of continuous medium mechanics complicated by ongoing chemical reactions. Such problems are usually characterized by multiple local areas of elevated temperature whose position in space is relatively unstable. Under these conditions, rigidly stable integration methods with step control incur higher time costs in the "elevated temperature" areas than elsewhere. With geometric parallelism, this leads to a substantial imbalance of CPU load, which reduces the overall efficiency of parallelization. The article therefore examines the problem of CPU load balancing in the parallel solution of the aforementioned problems. The author offers a new modification of the large-block distributed balancing algorithm with improved prediction of the time of numerical integration of the chemical kinetics equations, which is most effective when the "elevated temperature" areas drift. The improvement consists in applying a linear perceptron that analyzes several previous integration time values (the basic version of the algorithm uses only the single previous point of the integration time history). This allows the algorithm to work under both fast and slow drift of the "elevated temperature" areas. The effectiveness of the approach is demonstrated on the problem of modeling the flow around a building with high-temperature combustion on its roof. It is shown that the modified algorithm increases the efficiency of parallelization by 2.1% compared to the initial algorithm.
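The prediction idea can be sketched as follows (the class name, the delta-rule training, and the learning rate are illustrative assumptions, not the article's exact formulation): a linear perceptron forecasts the next integration time from several previous measurements, whereas the basic balancing algorithm reuses only the single last one.

```cpp
#include <vector>
#include <cstddef>

// Hedged sketch: a linear perceptron over a short history of measured
// integration times, trained online by the delta rule once the true time
// for the current step becomes known.
class TimePredictor {
    std::vector<double> w;   // one weight per history point
    double rate;             // learning rate (assumed value)
public:
    explicit TimePredictor(std::size_t history, double rate = 0.01)
        : w(history, 1.0 / history), rate(rate) {}
    double predict(const std::vector<double>& h) const {
        double t = 0.0;
        for (std::size_t i = 0; i < w.size(); ++i) t += w[i] * h[i];
        return t;
    }
    // Delta-rule update after the actual time has been measured.
    void learn(const std::vector<double>& h, double actual) {
        double err = actual - predict(h);
        for (std::size_t i = 0; i < w.size(); ++i) w[i] += rate * err * h[i];
    }
};
```

Averaged initial weights make the first prediction a plain mean of the history; training then shifts the weights toward whichever past points track the drift best.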
Pekunov V.V. —
Testing of the droplet phase model during the experiment on modeling the formation of acidulous cloud
// Software systems and computational methods. – 2021. – № 1.
– P. 46 - 52.
DOI: 10.7256/2454-0714.2021.1.35104
URL: https://en.e-notabene.ru/itmag/article_35104.html
Abstract: Numerical modeling of the formation (through condensation growth and droplet collisions) and development of a primary acidulous cloud must take various factors into account: temperature gradients, turbulence, direct solar radiation heating the air and the walls of buildings, diffuse solar radiation (which describes radiative cooling), and the transfer of gaseous pollutants and their absorption by droplets. The author earlier formulated a corresponding comprehensive mathematical model that takes these factors into account. This article sets the task of testing the droplet component of this model by numerically modeling the processes in the emerging cloud and comparing the results with theoretical and empirical correlations. The author obtained new results of numerical modeling of an acidulous cloud in the air over a vast urban area with high-density development, on the basis of the comprehensive mathematical model that takes the above factors into account and relies on an interpolation-sectional submodel of the droplet phase. The dynamics and kinetics of such a cloud absorbing gaseous sulphur dioxide are modeled, and results on the intensity of absorption of this pollutant in the forming cloud are obtained. Comparison of these results with known data (the Khrgian-Mazin droplet distribution and an interpolation relation for the water content of the cloud) demonstrated good agreement in both the droplet distribution and the water content of the cloud. The conclusion is drawn that the ecological model, which includes a special submodel of the droplet phase, is sufficiently adequate.
Pekunov V.V. —
Modification of the Marquardt method for training a neural network predictor in eddy viscosity models
// Cybernetics and programming. – 2021. – № 1.
– P. 27 - 34.
DOI: 10.25136/2644-5522.2021.1.36059
URL: https://en.e-notabene.ru/kp/article_36059.html
Abstract: The subject of this article is the numerical optimization techniques used to train the neural networks that serve as predictive components in certain modern eddy viscosity models. A high-quality solution of the training problem (minimization of the functional of neural network residuals) often requires significant computational costs, which makes it necessary to speed up such training by combining numerical methods with parallelization of the calculations. The Marquardt method is of particular interest, as it contains a parameter that allows the solution to be accelerated by switching the method from gradient descent far from the solution to Newton's method near the solution. The article offers a modification of the Marquardt method that uses a limited series of random samples to improve the current point and to calculate the parameter of the method. The good characteristics of the method are demonstrated in numerical experiments, both on the Himmelblau and Rosenbrock test functions and on the actual task of training a neural network predictor used in modeling turbulent flows. The use of this method can significantly speed up the training of the neural network predictor in corrective eddy viscosity models. The method is less time-consuming than random search, particularly when few computational cores are available, yet it provides a solution close to the result of random search and better than that of the original Marquardt method.
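The classical damping idea that the article modifies can be shown for a scalar function (this is the standard Marquardt step; the random-sampling refinement of the current point and of the parameter is omitted here):

```cpp
#include <functional>

// One Marquardt-style iteration for minimizing a scalar function:
// x_new = x - f'(x) / (f''(x) + lambda).
// A large lambda behaves like a short gradient-descent move (useful far
// from the minimum); lambda near zero approaches Newton's method.
double marquardt_step(const std::function<double(double)>& df,
                      const std::function<double(double)>& d2f,
                      double x, double lambda) {
    return x - df(x) / (d2f(x) + lambda);
}
```

With lambda = 0 the step is Newton's step, exact for a quadratic; a large lambda shrinks it into a small descent-direction move.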
Pekunov V.V. —
Induction of the transformation rules of the natural language problem statement into a semantic model for generating solver
// Software systems and computational methods. – 2020. – № 3.
– P. 29 - 39.
DOI: 10.7256/2454-0714.2020.3.33789
URL: https://en.e-notabene.ru/itmag/article_33789.html
Abstract: The author considers the problem of automatic synthesis (induction) of rules for transforming the natural language formulation of a problem into a semantic model of the problem, from which a program solving the problem can be generated. The problem is considered in relation to the PGEN++ system for the generation, recognition and transformation of programs. Based on an analysis of the literature, a combined approach was chosen: the rules for transforming the natural language formulation into a semantic model are generated automatically, while the specifications of the generating classes and the rules for generating a program from the model are written manually by a specialist in the specific subject area. Within the framework of object-event models, a mechanism for the automatic generation of recognizing scripts and related entities (CSV tables, XPath functions) is proposed for the first time. Generation is based on the analysis of a training sample that includes sentences describing objects of the subject area together with instances of those objects. The analysis searches for unique keywords and characteristic grammatical relations and then applies simple eliminative-inductive schemes. A mechanism for the automatic generation of rules for completing the primary recognized models into fully meaningful ones is also proposed. Such generation is performed by analyzing the relations between the objects of the training sample, taking into account information from the class specifications of the subject area. The proposed schemes have been tested on the subject area "simple vector data processing"; the successful transformation of natural language statements (both included in the training set and modified) into semantic models, with subsequent generation of programs solving the assigned tasks, is shown.
Pekunov V.V. —
Over-optimistic computing: concept and approbation in the problem of modeling an electrostatic lens
// Software systems and computational methods. – 2020. – № 2.
– P. 37 - 44.
DOI: 10.7256/2454-0714.2020.2.32232
URL: https://en.e-notabene.ru/itmag/article_32232.html
Abstract: The subject of the research is the possibility of parallelizing loops with dependent iterations whose bodies consist of strictly ordered stages. Such loops are quite common, for example, in numerical simulation by the particle-in-cell method, where at the first stage of the loop body the particles are processed and the field they determine is calculated, and at the second stage a partial differential equation dependent on this field is solved. The possibility of parallelizing the bodies of such loops is currently insufficiently covered in the literature, so the topic is relevant. The work applies the idea of predicting (autoregressive point) channels within software transactional memory; the implementation is built using object-oriented programming. The concept of over-optimistic computing, that is, working with predicting channels under partially transactional memory, is formulated for the first time. Mechanisms for implementing partially transactional memory adapted to the use of channels are proposed. A scheme for parallelizing sequentially executed loop bodies (with dependent iterations) on the basis of over-optimistic computing is proposed, and it is justified on the example of modeling a beam of charged particles in an electrostatic lens.
Pekunov V.V. —
Simulation of the absorption of gaseous SO2 by fog droplets using a refined interpolation-sectional droplet model
// Cybernetics and programming. – 2020. – № 2.
– P. 19 - 32.
DOI: 10.25136/2644-5522.2020.2.33914
URL: https://en.e-notabene.ru/kp/article_33914.html
Abstract: This article examines the problem of numerically simulating the interaction between gaseous sulfur dioxide emitted by road transport and fog under conditions of high humidity. For this purpose, the author applies a multi-factor two-phase mathematical model that takes into account the dynamics of the turbulent main phase; the dynamics and kinetics of the multi-sectional droplet phase; the thermal inhomogeneities formed by direct and diffuse solar radiation in various ranges; and the diffusion of sulfur dioxide and its absorption by fog droplets. The corresponding problem is calculated numerically within the AirEcology-P environmental process modeling system, which generates an optimal calculation code for a particular mathematical model. The proposed complex mathematical model describing the interaction between the emitted sulfur dioxide and the fog droplets is new; it refines the calculation of the kinetics of the droplet phase by taking into account the additional factor of droplet coalescence characteristic of fog. The submodel of the droplet phase was tested in numerical simulation (the results were compared with data from direct Lagrangian modeling of an ensemble of 1,000 droplets) and showed decent accuracy. The article presents the results of numerical simulation of the interaction between the emitted SO2 and the droplets. The author demonstrates the self-cleaning ability of the atmosphere, the degree of which correlates with the initial concentration of the smallest droplets and with the height above the surface.
Pekunov V.V. —
Elements of XPath-like languages in problems of constructing semantic XML models of natural language texts
// Cybernetics and programming. – 2020. – № 1.
– P. 29 - 41.
DOI: 10.25136/2644-5522.2020.1.32143
URL: https://en.e-notabene.ru/kp/article_32143.html
Abstract: The subject of the research is the possibility of using XPath-like programming micro-languages in program generation systems of the PGEN++ class for selecting and completing the XML models that describe the plan for solving the original problem, from which the solver program is generated. Such models are to be built from a description of the problem in natural language; thus, elements of artificial intelligence technologies are involved. The XPath-like language works in the layer of regular-logical expressions (extracting elements of the primary XML document), performing primary processing of the data obtained in the grammatical parsing layer of the source text. In addition, XPath-like elements are used to render the final XML models. Standard natural language parsing libraries and non-standard extensions of the XPath query language are used. For the first time, the idea of extending the XPath query language to the level of an algorithmic language by introducing the minimum required number of syntactic elements is proposed. It is also proposed to use syntactic elements with an XPath-like structure as both generating and controlling weak constraints on the process of direct inference of the final semantic XML models.
Pekunov V.V. —
Refined calculation of droplet distributions in modeling atmospheric multiphase media
// Software systems and computational methods. – 2019. – № 4.
– P. 95 - 104.
DOI: 10.7256/2454-0714.2019.4.30707
URL: https://en.e-notabene.ru/itmag/article_30707.html
Abstract: In this article the author considers the problem of increasing the accuracy of finding adequate droplet distributions in the numerical simulation of multiphase media that include a droplet phase. The problem is especially relevant when calculating distributions with discontinuities, which arise during intercellular transfer of droplets that have their own velocities, as well as in the presence of sharp injections of droplets, for example, of technogenic origin. Problems of this kind are often encountered in calculating the formation and spread of pollutants in the air, in particular when modeling acid rain. The construction of the distributions is considered using the methods of computational mathematics (interpolation theory), taking into account the physical laws of conservation of droplet mass and number. Elements of the method of moments (Hill's method) and the sectional approach to modeling the droplet phase are used. A new approach is proposed for modeling droplet distributions by piecewise spline interpolation over the density and concentration of the droplet components, which also relies on preliminarily constructed piecewise linear distributions. The results were compared with data obtained by direct modeling of many droplets, as well as with data obtained using exclusively piecewise linear distributions. The proposed approach is shown to be more accurate than the original method, which uses only piecewise linear distributions, while remaining considerably faster than the Lagrangian approach.
Pekunov V.V. —
Predicting channels in parallel programming: possible applications in mathematical modeling of processes in continuous media
// Software systems and computational methods. – 2019. – № 3.
– P. 37 - 48.
DOI: 10.7256/2454-0714.2019.3.30393
URL: https://en.e-notabene.ru/itmag/article_30393.html
Abstract: In this paper, the author considers the application of prediction in classical and parallel programming of mathematical modeling problems based on the numerical integration of partial differential equations. Prediction can be used to replace fragments of a mathematical model with simpler approximate relations, to anticipate values arriving during parallel computation, and to predict the execution time of fragments of a parallel program when deciding between a sequential and a parallel algorithm or when balancing the load of individual processors of a computing system. To formalize the types of predictors and derive the mathematical relations for calculating them, the approaches and methods of computational mathematics (interpolation and extrapolation theory) are used; the software implementations of the predictors follow approaches characteristic of parallel programming engineering. The results are verified experimentally. For the first time, a new type of technological prediction tool for parallel programming is proposed: prediction-solving channels. Two types of channels are proposed: autoregressive point channels and channels with linear (explicit or implicit) collective solutions. The mathematical aspects of prediction in such channels are described, and the basic programming tools are briefly presented. It is shown that combining channels with data prediction tools simplifies the programming of a number of algorithms related to numerical modeling and allows, in particular, hidden transitions from explicit difference schemes to implicit ones, which are more stable, as well as from sequential algorithms to parallel ones. Using the example of the numerical integration of the non-stationary heat equation, it is shown that adequate use of the channels can in some cases speed up the calculation on multi-core computing systems.
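An autoregressive point channel can be sketched as follows (the send/receive interface and the two-point linear extrapolation are assumptions made for illustration, not Planning C's actual channel API):

```cpp
#include <deque>

// Illustrative sketch of a predicting channel: receive() returns a real
// value when the producer has sent one, otherwise it linearly extrapolates
// from the last two values so the consumer can run ahead of the producer.
class PredictingChannel {
    std::deque<double> pending;   // values actually sent, not yet consumed
    double last = 0.0, prev = 0.0;
    int seen = 0;
public:
    void send(double v) { pending.push_back(v); }
    double receive() {
        double v;
        if (!pending.empty()) {            // a real value is available
            v = pending.front();
            pending.pop_front();
        } else if (seen >= 2) {            // predict by linear extrapolation
            v = 2.0 * last - prev;
        } else {
            v = last;                      // not enough history yet
        }
        prev = last; last = v; ++seen;
        return v;
    }
};
```

In an optimistic setting the predicted values would later be checked against the real ones, with rollback of the dependent computation on a mismatch.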
Pekunov V.V. —
Application of prediction in parallel processing of chains of predicates in regular-logic expressions
// Cybernetics and programming. – 2018. – № 6.
– P. 48 - 55.
DOI: 10.25136/2644-5522.2018.6.27986
URL: https://en.e-notabene.ru/kp/article_27986.html
Abstract: This paper addresses the problem of choosing the execution mode (sequential or parallel) when processing chains of predicates in regular-logic expressions. A brief description is given of the essence of regular-logical expressions, their known applications (natural language interfaces, an automatic parallelizer of C programs), and the types and composition of predicate chains. Particular attention is paid to predicting the time spent on processing the chains in each mode. Various possible approaches to such prediction are considered in detail, and it is noted that a semi-empirical-statistical approach is the most natural here. The paper uses the basic relations of the theory of parallel computing, interpolation and extrapolation methods, computational experiment, and elements of statistical processing. A new semi-empirical-statistical approach to calculating estimates of the execution time of predicate chains is proposed. The approach is distinguished by the minimal number of time measurements, achieved through partial recovery of missing data, and by the use of potentially more accurate linear autoregressive and quadratic models to estimate the execution time in the sequential and parallel modes.
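The mode-selection logic can be sketched with a toy cost model (the linear model and its coefficients are placeholders; the article itself fits linear autoregressive and quadratic models to measured times):

```cpp
// Hedged sketch: estimate sequential and parallel processing time for a
// chain of n predicates and pick the mode with the smaller estimate.
struct TimeModel {
    double per_pred;    // average cost of evaluating one predicate
    double overhead;    // fixed cost of spawning parallel work
    int    cores;
    double seq(int n) const { return per_pred * n; }
    double par(int n) const { return per_pred * n / cores + overhead; }
    bool run_parallel(int n) const { return par(n) < seq(n); }
};
```

Short chains stay sequential because the fixed spawning overhead dominates; the break-even chain length grows with the overhead and shrinks with the per-predicate cost.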
Pekunov V.V. —
Automatic parallelization of C programs using Cilk++ directives based on recognizing object-event models
// Software systems and computational methods. – 2018. – № 4.
– P. 124 - 133.
DOI: 10.7256/2454-0714.2018.4.28086
URL: https://en.e-notabene.ru/itmag/article_28086.html
Abstract: In this paper, the author considers the problem of automatically parallelizing C programs (mainly computational ones) using Cilk++ directives, a limited set of which can clearly express the parallelism in a task. To solve this problem, the concept of recognizing object-event models, potentially capable of parsing and transforming arbitrary texts, is formulated. This concept develops the theory of object-event models proposed by the author earlier, which in the limiting formulation are equivalent to extended Turing machines. The general approach of the theory of object-event models is used, which asserts that arbitrary algorithms can be described with these models. A technology for analyzing and transforming both structured and unstructured texts with recognizing object-event models is proposed, along with a strategy for the automatic parallelization of C programs with Cilk++ directives based on this technology. Using the example of the automatic parallelization of a simple computational program, data on the speedup and efficiency of parallelization are obtained. It is argued that the developed technology can be used as part of a program-generating system to parallelize the generated programs.