Agibalov O.I., Ventsov N.N. —
Assessment of parameters and results of genetic algorithms performed on the GPU and CPU
// Software systems and computational methods. – 2019. – No. 3.
– P. 12 - 19.
DOI: 10.7256/2454-0714.2019.3.30502
URL: https://en.e-notabene.ru/itmag/article_30502.html
Abstract: The object of the research is the process of choosing the optimal hardware architecture for organizing resource-intensive computing. The subject of the research is the process of solving optimization problems with genetic algorithms on GPU and CPU architectures. The influence of the choice of hardware architecture on the solution of an optimization problem is shown: the absolute and relative dependencies of the slowdown of the computing process, caused by choosing an irrational hardware architecture, on the number of individuals processed by the algorithm are determined. It is established that, for the problem considered, the boundary of the most efficient hardware configuration can lie in the range from 1000 to 5000 individuals. For this reason, it is advisable to describe the blurring of this boundary as a set of pairs “number of individuals — membership in the transition”. The research method is based on the analysis of the results of a computational experiment whose purpose was to determine how the runtime of the genetic algorithm on the GPU and CPU architectures depends on the number of generated individuals (chromosomes). The dependencies of the minimum and maximum runtime of the genetic algorithm on the GPU and on the CPU on the number of individuals are compared. It is shown that, for the problem considered, both the minimum-time and maximum-time dependencies of the algorithm executed on the GPU are close to a linear function; for the algorithm executed on the CPU, the minimum-time dependence is close to a linear function, while the maximum-time dependence is close to a polynomial one.
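To make the measurement of such min/max time dependencies concrete, the following minimal sketch times a toy generational genetic algorithm over several population sizes and reports the minimum and maximum wall-clock time per size. It is a CPU-only NumPy illustration with an invented sphere-function fitness; it reproduces neither the article's implementation nor its GPU variant, only the benchmarking idea.

    import time
    import numpy as np

    def run_ga(pop_size, n_genes=32, n_generations=50, rng=None):
        """One run of a toy generational GA minimizing the sphere function."""
        rng = rng or np.random.default_rng()
        pop = rng.uniform(-5.0, 5.0, (pop_size, n_genes))
        for _ in range(n_generations):
            fitness = np.sum(pop ** 2, axis=1)            # lower is better
            order = np.argsort(fitness)
            parents = pop[order[: pop_size // 2]]          # truncation selection
            cut = rng.integers(1, n_genes, pop_size)       # one-point crossover cuts
            children = parents[rng.integers(0, len(parents), pop_size)].copy()
            mates = parents[rng.integers(0, len(parents), pop_size)]
            mask = np.arange(n_genes) >= cut[:, None]      # genes taken from the mate
            children[mask] = mates[mask]
            # mutate ~5% of genes with small Gaussian noise
            children += rng.normal(0.0, 0.1, children.shape) * (rng.random(children.shape) < 0.05)
            pop = children
        return np.min(np.sum(pop ** 2, axis=1))

    def time_runs(pop_sizes, repeats=5):
        """Return (min, max) wall-clock time per population size over repeats."""
        results = {}
        for n in pop_sizes:
            times = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                run_ga(n)
                times.append(time.perf_counter() - t0)
            results[n] = (min(times), max(times))
        return results

    if __name__ == "__main__":
        for n, (t_min, t_max) in time_runs([500, 1000, 2000, 5000]).items():
            print(f"{n:5d} individuals: min {t_min:.3f} s, max {t_max:.3f} s")

Fitting linear and polynomial models to the resulting (population size, min time) and (population size, max time) pairs is then what distinguishes the CPU and GPU behaviors reported in the abstract.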
Chernyshev Y.O., Ventsov N.N., Pshenichnyi I.S. —
A possible method of allocating resources in destructive conditions
// Cybernetics and programming. – 2018. – No. 5.
– P. 1 - 7.
DOI: 10.25136/2644-5522.2018.5.27626
URL: https://en.e-notabene.ru/kp/article_27626.html
Abstract: The subject of the research is an approach to the allocation of resources under possible destructive conditions. The object of the research is a model of distributional decision-making processes under possible destructive influences. The authors consider the modeling of resource-flow distribution processes under possible undesirable effects. It is shown that using relative fuzzy estimates of resource-transfer routes is more expedient, in terms of the time complexity of the decision-making process, than modeling the entire resource-allocation area, since route preferences with respect to guaranteed resource transfer under destructive impacts can be determined quickly from statistical and expert assessments.
The research method is based on set theory, fuzzy logic, and evolutionary and immune approaches. The use of fuzzy preference relations reduces the time needed to build a model, and the use of evolutionary and immune methods speeds up the search for a solution. The main conclusion of the study is that relative fuzzy estimates of route preferences can be used when organizing the allocation of resources. An algorithm for allocating resources under destructive influences is proposed; its distinctive feature is the use of information about previously implemented resource allocations when forming the set of initial solutions. Verification of the obtained solutions is to be carried out using negative selection, one of the methods of modeling the immune system. Modification of existing solutions is advisable to perform using, for example, methods of evolutionary modeling.
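As an illustration of the algorithm's distinctive steps, the sketch below seeds candidate allocations from previously implemented ones, screens them with a negative-selection check, and ranks them by relative fuzzy route preferences. The route names, membership values, and matching rule are all invented for the example; they are not taken from the article.

    import random

    # Hypothetical routes with relative fuzzy preference estimates (from statistical
    # and expert assessments); higher membership = more confidence the route
    # survives destructive impacts.
    ROUTE_PREFERENCE = {"r1": 0.9, "r2": 0.7, "r3": 0.4, "r4": 0.2}

    def initial_solutions(history, n, k=1):
        """Seed the initial set from previously implemented allocations,
        mutating each by swapping in a random alternative route."""
        routes = list(ROUTE_PREFERENCE)
        population = []
        for _ in range(n):
            base = list(random.choice(history))
            for _ in range(k):
                base[random.randrange(len(base))] = random.choice(routes)
            population.append(tuple(base))
        return population

    def passes_negative_selection(solution, detectors):
        """Reject a solution matching a detector; here a detector is simply a
        route observed to fail under a destructive impact."""
        return not any(route in detectors for route in solution)

    def guaranteed_preference(solution):
        """Guaranteed-delivery estimate: the weakest route bounds the plan."""
        return min(ROUTE_PREFERENCE[r] for r in solution)

    if __name__ == "__main__":
        history = [("r1", "r2"), ("r2", "r3")]   # earlier allocations
        failed_routes = {"r4"}                    # detectors from observed failures
        candidates = []
        while not candidates:                     # regenerate if all were rejected
            candidates = [s for s in initial_solutions(history, 20)
                          if passes_negative_selection(s, failed_routes)]
        best = max(candidates, key=guaranteed_preference)
        print("best plan:", best, "guaranteed preference:", guaranteed_preference(best))

In a full implementation the surviving candidates would then be modified by evolutionary operators, as the abstract suggests; the min-based scoring here is one simple reading of "guaranteed resource transfer".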
Agibalov O.I., Ventsov N.N. —
Assessing Time Dependencies of the Genetic Algorithm Carried Out on CPU and GPU
// Cybernetics and programming. – 2017. – No. 6.
– P. 1 - 8.
DOI: 10.25136/2644-5522.2017.6.24509
URL: https://en.e-notabene.ru/kp/article_24509.html
Abstract: The subject of the research is the problem of choosing the most efficient hardware architecture for executing a stochastic population-based algorithm. The object of the research is a genetic algorithm executed on the central processing unit (CPU) or the graphics processing unit (GPU). The authors present the results of a computational experiment comparing the time dependencies of the genetic algorithm executed on the CPU and on the GPU as functions of the number of chromosomes used. They also compare the overall task-solution time with the time needed to initialize the CPU and GPU. Because a precise time estimate of the genetic algorithm could not be obtained, the authors developed a loose time estimate of the GPU algorithm for 3000 chromosomes. The research method is based on the experimental assessment of the time dependencies of the genetic algorithm executed on the CPU or GPU as functions of the number of individuals in the population. The computational complexity of the genetic algorithm for both types of processing units is approximately O(n) to O(n²). Based on the results, the authors conclude that for populations of 2000-2500 chromosomes the genetic algorithm is better executed on the CPU, and for populations exceeding 3000-4000 chromosomes it is better executed on the GPU. This blurring of the efficiency frontier is caused by the stochastic nature of the genetic algorithm. It should also be noted that these frontiers for choosing the most efficient hardware architecture are valid only for the task considered here; the results will differ for simpler tasks and for other hardware and software environments. The present research focuses not only on the numerical assessment of the efficiency frontiers but also on whether such a crossing point can be defined at all.
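A minimal sketch of how such a blurred frontier can be acted on in practice: a piecewise-linear membership "GPU is preferable" over the population size, with illustrative edge values chosen between the CPU range (up to 2000-2500) and the GPU range (from 3000-4000) reported above. The exact membership shape and edge values are assumptions, not the paper's.

    def gpu_membership(pop_size, cpu_edge=2500, gpu_edge=3000):
        """Degree to which GPU execution is preferable; 0 below cpu_edge,
        1 above gpu_edge, linear in between (edges are illustrative)."""
        if pop_size <= cpu_edge:
            return 0.0
        if pop_size >= gpu_edge:
            return 1.0
        return (pop_size - cpu_edge) / (gpu_edge - cpu_edge)

    def choose_architecture(pop_size, threshold=0.5):
        return "GPU" if gpu_membership(pop_size) >= threshold else "CPU"

    if __name__ == "__main__":
        for n in (1500, 2600, 2800, 3500):
            print(f"{n:4d} chromosomes: membership {gpu_membership(n):.2f} "
                  f"-> {choose_architecture(n)}")

The pairs (population size, membership) produced by gpu_membership are exactly the "number of individuals — membership in the transition" description proposed in the 2019 article above.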
Chernyshev Y.O., Ventsov N.N. —
Development of decoders receptive to fuzzy commands for an artificial immune system
// Cybernetics and programming. – 2016. – No. 5.
– P. 213 - 221.
DOI: 10.7256/2306-4196.2016.5.19885
URL: https://en.e-notabene.ru/kp/article_19885.html
Abstract: The object of the research is a model of an artificial immune system. The subject of the research is a method of constructing a fuzzy decoder. The authors propose to use fuzzy membership functions as decoders; such a function describes how close a controlled parameter is to a critical situation. This approach, based on fuzzy decoders, makes it possible to move from binary quantitative classification to fuzzy qualitative estimates. The article presents an example of constructing a decoder for the fuzzy term “the semiperimeter length L, describing a fragment of the designed product, should be no more than 0.7 nm”. On the basis of the function CON(μ1(L)), describing the fuzzy matching condition “very close to 0.7 nm”, the authors build a function μ5(L) describing the fuzzy matching condition “a little less than 0.7 nm”. A fuzzy decoder for assessing conformity to an interval is built from the membership function of the given interval. The authors give a graph of the decoder function μ7 versus the semiperimeter length L, describing membership in the condition “the desired semiperimeter length is from 0.55 to 0.7 nm”. By analogy with the conditions “very close to 0.7 nm” and “slightly close to 0.7 nm”, it is possible to define the membership functions “very much in the range from 0.55 to 0.7 nm” and “slightly in the range from 0.55 to 0.7 nm”. The research method is based on the construction of fuzzy decoders describing an undesirable state of the computational process, with the fuzziness described by a membership function. The novelty of the research lies in obtaining fuzzy decoders that are receptive to fuzzy commands. By choosing the corresponding fuzzy membership function μ of a decoder, it is possible to adjust the process of estimating the degree of closeness of the controlled parameter to a critical situation. Applying the CON and DIL functions to the decoder functions makes it possible to change their sensitivity on test data from 20-30% up to 200-300%.
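The CON and DIL modifiers mentioned here are the standard Zadeh concentration (CON(μ) = μ²) and dilation (DIL(μ) = √μ) operators. The sketch below applies them to a triangular membership for "close to 0.7 nm" to show how the decoder's sensitivity changes; the triangular shape and its width are assumptions for illustration, not the paper's μ1(L).

    import numpy as np

    def CON(mu):
        # Zadeh concentration ("very"): sharpens the membership function
        return mu ** 2

    def DIL(mu):
        # Zadeh dilation ("slightly" / "more or less"): softens it
        return np.sqrt(mu)

    def mu_close_to(L, target=0.7, width=0.3):
        # Assumed triangular membership for "close to target" (illustrative only)
        return np.clip(1.0 - np.abs(np.asarray(L) - target) / width, 0.0, 1.0)

    L = np.linspace(0.4, 1.0, 7)
    mu1 = mu_close_to(L)
    print("L              :", np.round(L, 2))
    print("close to 0.7   :", np.round(mu1, 2))
    print("very close     :", np.round(CON(mu1), 2))   # decoder reacts later, more strictly
    print("slightly close :", np.round(DIL(mu1), 2))   # decoder reacts earlier, more softly

Because CON lowers and DIL raises every membership degree strictly between 0 and 1, composing these operators gives the wide adjustment range of decoder sensitivity that the abstract reports.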