Citations count: 4
Reference:
Mustafaev A.G. —
The use of artificial neural networks for the early diagnosis of diabetes
// Cybernetics and programming.
– 2016. – № 2.
– P. 1 - 7.
DOI: 10.7256/2306-4196.2016.2.17904 URL: https://en.nbpublish.com/library_read_article.php?id=17904
Abstract:
Diabetes is a chronic disease whose pathogenesis involves a lack of insulin in the human body, causing metabolic disorders and pathological changes in various organs and tissues and often leading to a high risk of heart attack and kidney failure. The author attempts to create a system for the early diagnosis of diabetes using artificial neural networks. The article presents a neural network model based on a multilayer perceptron trained by the back-propagation algorithm. The network was designed with the Neural Network Toolbox from MATLAB 8.6 (R2015b), a powerful and flexible tool for working with neural networks. Training and performance tests of the designed network show that it is well suited to the task and able to find patterns and complex relationships between the different characteristics of the object. The sensitivity of the developed neural network model is 89.5% and its specificity is 87.2%. Once trained, the network becomes a reliable and inexpensive diagnostic tool.
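The article's MATLAB setup is not reproduced here; as a rough illustration of the approach it describes (a multilayer perceptron trained by back-propagation and evaluated by sensitivity and specificity), the following minimal sketch trains a one-hidden-layer network on invented feature vectors. The data, network size and learning rate are illustrative assumptions, not the paper's configuration.

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_mlp(X, y, hidden=3, lr=0.5, epochs=2000, seed=0):
    """Back-propagation for a 1-hidden-layer perceptron (illustrative toy)."""
    random.seed(seed)
    n_in = len(X[0])
    # weights with a leading bias term in each row
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(hidden)]
    W2 = [random.uniform(-0.5, 0.5) for _ in range(hidden + 1)]

    def forward(x):
        h = [sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))) for w in W1]
        o = sigmoid(W2[0] + sum(wi * hi for wi, hi in zip(W2[1:], h)))
        return h, o

    for _ in range(epochs):
        for x, t in zip(X, y):
            h, o = forward(x)
            do = (o - t) * o * (1 - o)                   # output-layer delta
            for j in range(hidden):
                dh = do * W2[j + 1] * h[j] * (1 - h[j])  # hidden-layer delta
                W2[j + 1] -= lr * do * h[j]
                W1[j][0] -= lr * dh
                for i in range(n_in):
                    W1[j][i + 1] -= lr * dh * x[i]
            W2[0] -= lr * do
    return lambda x: 1 if forward(x)[1] >= 0.5 else 0

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical normalized (glucose, BMI) pairs -- purely illustrative data
X = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.7), (0.9, 0.6),
     (0.2, 0.3), (0.1, 0.2), (0.3, 0.1), (0.2, 0.2)]
y = [1, 1, 1, 1, 0, 0, 0, 0]
predict = train_mlp(X, y)
preds = [predict(x) for x in X]
```

The same sensitivity/specificity computation applies regardless of the toolbox used to train the network.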
Citations count: 4
Reference:
Mukhametzyanov I.Z. —
Application of fuzzy inference and the fuzzy AHP approach for evaluating the dependability of equipment
// Cybernetics and programming.
– 2017. – № 2.
– P. 59 - 77.
DOI: 10.7256/2306-4196.2017.2.21794 URL: https://en.nbpublish.com/library_read_article.php?id=21794
Abstract:
The object of the study is fuzzy multi-criteria methods and algorithms within decision support systems. The immediate subject is decision support for assessing the dependability of technical equipment when the input information is fuzzy. The purpose of the study is to provide a methodological basis for developing applied fuzzy systems that rank multiple objects over a multi-dimensional set of quantitative and qualitative indicators derived from linguistic expert statements. The article presents a methodology for building a decision support system under imprecise information using fuzzy set theory and fuzzy hierarchy analysis. The author analyzes in detail several multi-criteria decision-making methods, namely fuzzy inference and the analytic hierarchy process over fuzzy proximities, and offers a method for ranking alternatives over multi-dimensional sets of factors and criteria with fuzzy input data. The methodology is based on constructing a model of the decision support system, formalizing the processing of fuzzy data, developing algorithms, and running simulation experiments for various values of the model's control parameters. The methodology is illustrated with a decision support system for the expert evaluation of the overall dependability of chemical technological systems.
The fuzzy-logic procedures for managing a set of dependability indicators rely on expert evaluation of four separate industrial objects within a single complex technical system of oil and gas chemical production, assessed against five dependability criteria. Using the hierarchical dependability structure of oil and gas equipment as an example, the author offers a model and an algorithm for deriving weights from a fuzzy pairwise comparison matrix built on the judgment matrix. Experimental calculations show that the fuzzy pairwise comparison method is effective at higher degrees of priority fuzziness, from 50 to 75 percent. The efficiency of the judgment matrix depends on the closeness of the incoming linguistic values, but above all on the correct formalization of the input data through the construction of membership functions and fuzzy rule bases. Fuzzy-logic decision support algorithms for managing the dependability indicators of oil and gas equipment form the non-formalized part of complex systems for ensuring industrial equipment dependability; such subsystems allow a preliminary assessment of the overall equipment dependability situation based on expert information.
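The paper's own weighting procedure is not reproduced here; as one common way to derive crisp weights from a triangular fuzzy pairwise comparison matrix, the following sketch applies Buckley's geometric-mean method. The 3×3 matrix values are invented for illustration and are not taken from the article.

```python
import math

def fuzzy_ahp_weights(M):
    """Buckley's geometric-mean method: a triangular fuzzy pairwise
    comparison matrix of (l, m, u) triples -> normalized crisp weights."""
    n = len(M)
    # component-wise geometric mean of each row
    r = [tuple(math.prod(M[i][j][k] for j in range(n)) ** (1.0 / n)
               for k in range(3)) for i in range(n)]
    s = [sum(ri[k] for ri in r) for k in range(3)]
    # fuzzy weights: divide (l, m, u) by (sum_u, sum_m, sum_l)
    fw = [(ri[0] / s[2], ri[1] / s[1], ri[2] / s[0]) for ri in r]
    # centroid defuzzification, then renormalize
    w = [(l + m + u) / 3.0 for l, m, u in fw]
    t = sum(w)
    return [wi / t for wi in w]

one = (1.0, 1.0, 1.0)
# illustrative 3-criterion matrix: criterion 0 clearly dominates
M = [
    [one, (2, 3, 4), (3, 4, 5)],
    [(1/4, 1/3, 1/2), one, (1, 2, 3)],
    [(1/5, 1/4, 1/3), (1/3, 1/2, 1), one],
]
weights = fuzzy_ahp_weights(M)
```

The reciprocal entries below the diagonal mirror the expert judgments above it, as in any AHP comparison matrix.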
Citations count: 3
Reference:
Surma I.V. —
Analytical information technologies and human resource management
// Cybernetics and programming.
– 2015. – № 2.
– P. 1 - 45.
DOI: 10.7256/2306-4196.2015.2.15002 URL: https://en.nbpublish.com/library_read_article.php?id=15002
Abstract:
The object of study is intelligent computer-based information systems for management decision support, built on such principles of information technology management as adaptive, integrated, network and real-time control, which can supply leadership and staff with easy-to-use yet powerful means of solving the basic problems of organization management. The article describes different management information systems, such as CRM (Customer Relationship Management) and ERP (Enterprise Resource Planning) systems, business intelligence (BI) systems supporting analytical activities, and various real-time analytical processing technologies, namely OLAP (Online Analytical Processing) systems. The article compares various data warehouses equipped with tools for retrieving data from ERP and other systems and for further analysis of the collected data. Using BI systems, the data resources of any organization can be converted into usable information and serve as a basis for effective management decisions. BI systems are considered as solutions built on OLAP business analysis systems and DSS (Decision Support Systems): an integrated set of tools for processing information and data obtained from other corporate information systems, used for the strategic analysis of the company.
Citations count: 3
Reference:
Astakhova N., Demidova L., Nikulchev E. —
Application of multiobjective optimization for time series groups forecasting
// Cybernetics and programming.
– 2016. – № 5.
– P. 175 - 190.
DOI: 10.7256/2306-4196.2016.5.20414 URL: https://en.nbpublish.com/library_read_article.php?id=20414
Abstract:
The article offers an approach to forecasting groups of time series using cluster analysis techniques and the principles of multiobjective optimization. Forecasting models for the cluster centroids are built on strictly binary trees and a multiobjective modified clonal selection algorithm, in which two quality indicators of the models take part in selecting the best forecasting models: an affinity indicator based on the average relative forecasting error, and a tendency discrepancy indicator. Both quality indicators are accounted for through the Pareto-dominance principles applied when forming new populations of forecasting models in the multiobjective modified clonal selection algorithm. Within the multiobjective optimization problem, it is proposed to consider the crowding distance of the forecasting models when forming a new population, so as to maintain its high diversity. The article shows the prospects of applying general forecasting models built on strictly binary trees to forecast all time series belonging to one cluster. Experimental results confirm the efficiency of the offered approach to short-term and mid-term forecasting of time series groups within the multiobjective optimization setting.
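The clonal selection algorithm itself is not reproduced; the two building blocks the abstract names, Pareto dominance and the crowding distance used to preserve population diversity, can be sketched as follows. Minimization of both indicators is assumed, and the sample points are invented.

```python
def dominates(a, b):
    """a Pareto-dominates b when it is no worse in every objective
    and strictly better in at least one (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def crowding_distance(front):
    """NSGA-II-style crowding distance over a non-dominated front."""
    n, m = len(front), len(front[0])
    d = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        d[order[0]] = d[order[-1]] = float('inf')   # keep boundary points
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for r in range(1, n - 1):
            d[order[r]] += (front[order[r + 1]][k] - front[order[r - 1]][k]) / span
    return d

# invented (affinity error, tendency discrepancy) pairs for five models
points = [(1, 5), (2, 3), (3, 2), (5, 1), (4, 4)]
front = pareto_front(points)
dist = crowding_distance(front)
```

Models with a larger crowding distance sit in sparser regions of the front and are preferred when a new population is formed, which is how diversity is maintained.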
Citations count: 2
Reference:
Borodin A.V. —
Feasibility study of a redundant network component for a special-purpose fault-tolerant scalable computing system
// Cybernetics and programming.
– 2015. – № 6.
– P. 55 - 70.
DOI: 10.7256/2306-4196.2015.6.17523 URL: https://en.nbpublish.com/library_read_article.php?id=17523
Abstract:
The research is devoted to architectural aspects of building special-purpose fault-tolerant scalable computing systems. In particular, the study focuses on the redundancy principles that can be used in the network subsystem of a computing system when the total cost of ownership depends essentially on the level of degradation of its performance metrics. The authors consider such approaches to redundancy as duplication and triple redundancy; for triple redundancy, the research offers the new concept of functional adaptation of the redundant elements. Special attention is paid to how the "Value at Risk" measure, which characterizes the random total cost of ownership of the computing system and defines the greatest possible loss at a given probability level, depends on such system parameters as the number of functional groups of hosts and the influence of single and group faults on the degradation of performance metrics. The risk process in the computing system is described in the notation of ordinary stochastic Petri nets, and the "Value at Risk" measure over a given time interval is computed with methods of the algebraic theory of risk. The main result of the research is a proof of concept for triple redundancy with functional adaptation of the redundant elements in the task of synthesizing the topology of a network subsystem. The novelty of the research consists in using methods of the algebraic theory of risk to synthesize an optimal computing system architecture over given discrete sets of possible decisions.
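The algebraic theory of risk and the Petri-net model are beyond a short sketch, but the "Value at Risk" measure itself is easy to illustrate as a quantile of a simulated total-cost-of-ownership distribution. The failure probabilities and costs below are invented for illustration only and replace the paper's analytic machinery with plain Monte Carlo.

```python
import math
import random

def value_at_risk(samples, alpha=0.95):
    """Smallest loss not exceeded with probability alpha
    (empirical quantile of the cost distribution)."""
    s = sorted(samples)
    idx = min(len(s) - 1, math.ceil(alpha * len(s)) - 1)
    return s[idx]

def simulate_tco(n_groups=3, base_cost=100.0, p_fault=0.05,
                 fault_cost=40.0, trials=10_000, seed=1):
    """Monte Carlo TCO: each functional group of hosts fails
    independently; a group fault adds a degradation cost."""
    random.seed(seed)
    samples = []
    for _ in range(trials):
        cost = base_cost
        for _ in range(n_groups):
            if random.random() < p_fault:
                cost += fault_cost
        samples.append(cost)
    return samples

tco = simulate_tco()
var95 = value_at_risk(tco, 0.95)
```

Varying the number of functional groups or the fault impact in this sketch shows qualitatively how the VaR of the cost reacts to the parameters the paper studies.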
Citations count: 2
Reference:
Alpatov A.N. —
Assessment of the impact of distributed computing complex parameters on the performance of load balancing algorithms
// Cybernetics and programming.
– 2017. – № 1.
– P. 1 - 10.
DOI: 10.7256/2306-4196.2017.1.22021 URL: https://en.nbpublish.com/library_read_article.php?id=22021
Abstract:
The purpose of this article is to examine how the performance of computational load balancing algorithms for globally distributed computing systems implementing the principle of volunteer computing correlates with the basic attributes of a distributed system. The main parameters considered are the file system structure and the type of network protocol. The object of the study is a globally distributed computing system with node load scheduling; the subject is the load balancing methods of the system's units, implementing a dynamic load balancing strategy. The methodological basis comprises methods of fundamental and applied sciences: analysis, mathematical statistics, and simulation modeling. The author suggests a nonlinear piecewise-stationary model of node computational load and shows a computing experiment for determining the effectiveness of the balancing algorithms. A simulation model of the distributed complex was developed that allows setting the basic system parameters and assessing their impact on system response time. It is shown that such parameters as the file system structure and the type of network protocol have a particular impact on the efficiency of load balancing algorithms; hence these parameters must be taken into account to ensure the adequacy of the developed model of a distributed computing system based on volunteer computing.
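The article's simulation model is not public; the following toy discrete simulation only illustrates the general idea of comparing balancing policies on a node-load model. The node count, task sizes and the two policies are assumptions made for this sketch.

```python
import random

def simulate(n_nodes, tasks, policy, seed=1):
    """Assign a stream of task durations to nodes and return the
    makespan (maximum accumulated node load)."""
    random.seed(seed)
    load = [0.0] * n_nodes
    for t in tasks:
        if policy == "random":
            i = random.randrange(n_nodes)
        else:  # "least-loaded": dynamic balancing by current node load
            i = min(range(n_nodes), key=load.__getitem__)
        load[i] += t
    return max(load)

tasks = [1.0] * 12
balanced = simulate(3, tasks, "least-loaded")
unbalanced = simulate(3, tasks, "random")
```

In a fuller experiment one would also model per-assignment overheads that depend on the file system and network protocol, which is exactly where the parameters studied in the article enter.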
Citations count: 2
Reference:
Zhilov R.A. —
Optimization of cognitive maps used in forecasting
// Cybernetics and programming.
– 2015. – № 5.
– P. 128 - 135.
DOI: 10.7256/2306-4196.2015.5.16592 URL: https://en.nbpublish.com/library_read_article.php?id=16592
Abstract:
The subject of the study is decision making in semi-structured dynamic situations, where the parameters, laws and regularities of the situation are described qualitatively. These are unique situations in which the dynamics of the situation's parameters is combined with the difficulty of predicting changes in its structure. The object of research is cognitive maps and computer systems for modeling them. The author considers in detail such aspects as the complexity of decision making when information is insufficient or inaccurate, and methods of optimizing cognitive maps used in forecasting. Special attention is paid to optimizing the data dimension and the structure of the cognitive map. To reduce the data dimension the author uses cluster analysis methods; the structure of the cognitive map is optimized by automatically adjusting the weights of the influence of concepts on other concepts with machine learning methods. The main conclusion of the study is a decrease in the subjectivity of the cognitive map in the forecasting system. The author's contribution is the representation of a cognitive map as a single-layer neural network and the application of the training methods proposed by Rosenblatt. The novelty of the study lies in the new structure of cognitive maps.
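The abstract's idea of treating a cognitive map as a single-layer network whose influence weights are trained in Rosenblatt's spirit can be sketched roughly as follows; the two-concept map and the delta-rule update are illustrative assumptions, not the article's exact procedure.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def step(W, state):
    """One inference step of the cognitive map: concepts influence
    each other through the weight matrix W."""
    n = len(state)
    return [sigmoid(sum(W[i][j] * state[j] for j in range(n))) for i in range(n)]

def train(W, transitions, lr=0.5, epochs=200):
    """Delta-rule adjustment of the influence weights so the map
    reproduces observed state transitions."""
    for _ in range(epochs):
        for s, target in transitions:
            out = step(W, s)
            for i in range(len(W)):
                err = target[i] - out[i]
                for j in range(len(s)):
                    W[i][j] += lr * err * s[j]
    return W

# illustrative 2-concept map and one observed transition
transitions = [([1.0, 0.0], [0.8, 0.2])]
W0 = [[0.0, 0.0], [0.0, 0.0]]
err_before = sum((t - o) ** 2
                 for t, o in zip(transitions[0][1], step(W0, transitions[0][0])))
W = train([row[:] for row in W0], transitions)
err_after = sum((t - o) ** 2
                for t, o in zip(transitions[0][1], step(W, transitions[0][0])))
```

Learning the weights from observed transitions, instead of fixing them by expert opinion, is what reduces the subjectivity the abstract mentions.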
Citations count: 2
Reference:
Pekunov V.V. —
Improved CPU load balancing for the numerical solution of continuum mechanics problems complicated by chemical kinetics
// Cybernetics and programming.
– 2021. – № 1.
– P. 13 - 19.
DOI: 10.25136/2644-5522.2021.1.35101 URL: https://en.nbpublish.com/library_read_article.php?id=35101
Abstract:
This article explores certain aspects of the numerical solution of problems of continuum mechanics complicated by ongoing chemical reactions. Such problems are usually characterized by multiple local areas of elevated temperature whose position in space is relatively unstable. Under these conditions, stiffly stable integration methods with step control require noticeably more computation time in the "elevated temperature" areas than elsewhere. With geometric parallelism this leads to a substantial imbalance of CPU load, which reduces the overall effectiveness of parallelization. The article therefore examines the problem of CPU load balancing in the parallel solution of such problems. The author offers a new modification of the large-block distributed balancing algorithm with improved prediction of the time of numerical integration of the chemical kinetics equations, which is most effective when the "elevated temperature" areas drift. The improvement consists in applying a linear perceptron that analyzes several previous values of the integration time (the basic version of the algorithm uses only one previous point from the integration-time history). This allows the algorithm to work under both fast and slow drift of the "elevated temperature" areas. The effectiveness of the approach is demonstrated on the task of modeling the flow around a building with high-temperature combustion on its roof: the modified algorithm increases the effectiveness of parallelization by 2.1% compared to the initial algorithm.
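The described modification, predicting the next integration time from several previous values rather than one, amounts to a small linear predictor. A least-squares sketch over a sliding window (not the paper's exact perceptron) could look like this:

```python
def predict_next(times, k=4):
    """Least-squares linear extrapolation from the last k integration
    times; with k = 1 this degenerates to repeating the previous
    value, as in the basic algorithm."""
    ys = list(times)[-k:]
    n = len(ys)
    if n == 1:
        return ys[0]
    xs = list(range(n))
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                            # intercept
    return a + b * n                                 # one step ahead

history = [1.0, 2.0, 3.0, 4.0]   # illustrative integration times
forecast = predict_next(history)
```

Using a window of several points lets the forecast follow a trend in the integration time, which is precisely what helps when the hot areas drift.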
Citations count: 2
Reference:
Nekhaev I.N., Zhuykov I.V. —
Decision Making Models in Intelligence Systems of Testing Competency Levels
// Cybernetics and programming.
– 2016. – № 4.
– P. 18 - 34.
DOI: 10.7256/2306-4196.2016.4.19863 URL: https://en.nbpublish.com/library_read_article.php?id=19863
Abstract:
The article is devoted to the question of whether it is possible to model and automatically evaluate students' competency levels based on the analysis of their decisions. The emphasis is on the method of constructing the structure of increasingly difficult routine tasks and the operational space of potential solutions. Based on the preset operational space, the authors suggest creating expert agents whose solutions will serve as the basis for further analysis of the rationality and completeness of students' decisions. As a result of this analysis, the authors build a competency map: the level of competency is defined by the difficulty of the routine tasks solved by the student within the given knowledge module. The research involves modeling particular components of the decision-making process and the interaction methods used when deciding on a student's level of competency. The main conclusions are the following. Firstly, the model of the decision operational space, on the one hand, limits the set of applicable basic operations and enables the testing system to record and log users' decisions and, on the other hand, gives students freedom in constructing potential decisions. Secondly, the hierarchy of routine task difficulty constitutes a good basis for building a student's competency map in the form of an overlay model, allows modeling and evaluating the level of competency in accordance with the difficulty of the tasks being solved, and enables adaptive testing that takes individual features of students into account. Thirdly, the data structures introduced in the operational space of decision making can be adjusted in accordance with incoming information in order to remain adequate. The models reviewed create a good basis for implementing multi-agent and neural network paradigms for programming intelligent testing systems.
Citations count: 1
Reference:
Prokhozhev N.N., Korobeinikov A.G., Bondarenko I.B., Mikhailichenko O.V. —
Stability of the digital watermark embedded in the region of the coefficients of discrete wavelet transform to the changes of the image-container
// Cybernetics and programming.
– 2013. – № 5.
– P. 18 - 28.
DOI: 10.7256/2306-4196.2013.5.9773 URL: https://en.nbpublish.com/library_read_article.php?id=9773
Abstract:
The article deals with the robustness of a digital watermark embedded into an image container by steganographic algorithms based on the discrete wavelet transform (DWT) against external influences such as JPEG lossy compression, filtering, noise and scaling. The authors note that steganographic embedding algorithms can provide good secrecy of the watermark and tend to exploit the correlation between coefficients of different planes of one subband having the same coordinates. An important parameter of DWT-based steganographic algorithms is the choice of the wavelet decomposition level. The authors describe the methodology and experimental conditions used to assess the robustness of the watermark to external influences on the image container, and evaluate its stability under JPEG lossy compression, Gaussian white noise, image scaling and image filtering. In conclusion the authors state that the results confirm the theoretical advantage of using the low-frequency plane of the wavelet decomposition in steganographic systems with high robustness requirements.
Citations count: 1
Reference:
Borovskii A.A. —
Prospects for the use of machine learning techniques in processing large volumes of historical data
// Cybernetics and programming.
– 2015. – № 1.
– P. 77 - 114.
DOI: 10.7256/2306-4196.2015.1.13730 URL: https://en.nbpublish.com/library_read_article.php?id=13730
Abstract:
In relation to the development of the information-analytical platform "The History of Modern Russia", the author examines the analytical capabilities of modern machine learning methods and the prospects of their practical use for processing and analyzing large volumes of historical data. The article reviews different strategies for applying machine learning techniques, taking into account the peculiarities of the studied data. Special attention is given to the interpretability of the different types of results obtained with machine learning algorithms, as well as to the ability to recognize trends and anomalies. As a methodological basis the author uses the theory of information systems, database theory, induction, deduction, and comparative, systematic and formal-logical methods. The author concludes that machine learning algorithms can effectively solve a large class of problems related to the analysis of historical data, including finding hidden dependencies and patterns. The establishment of large-scale digital repositories of evidence of historical events makes it possible to examine the data as a time series and thus to investigate how the state of the social system changes over time.
Citations count: 1
Reference:
Bondarenko M.A. —
Algorithm for combining sensory and synthesized video information in the aviation system of combined vision
// Cybernetics and programming.
– 2016. – № 1.
– P. 236 - 257.
DOI: 10.7256/2306-4196.2016.1.17770 URL: https://en.nbpublish.com/library_read_article.php?id=17770
Abstract:
The subject of the research is techniques for combining sensory and synthesized video information as applied to an aviation combined vision system. Such systems allow controlling manned and unmanned aerial vehicles under low-visibility conditions by combining video from an on-board camera with video synthesized from an a priori given virtual terrain model. The on-board navigation system that measures the position and orientation of the aircraft has accuracy errors, because of which the viewpoint of the image synthesized from the virtual terrain model does not match the actual camera angle; this is why a procedure for combining sensory and synthesized video information is needed. The study was conducted using mathematical and computer modeling of combined vision systems with both synthesized and real images of the underlying terrain. The novelty of the results lies in the universality of the developed algorithm: it allows combining video content with arbitrary data, admits practical implementation, and yields high-quality combination. The developed algorithm, based on typological binding and Kalman filtering, provides sufficiently precise and reliable combination to meet the minimum requirements for aircraft combined vision systems. The algorithm is universal, places low demands on the recorded images of the scene, has low computational complexity, and can be implemented in hardware and software based on modern avionics. When testing the algorithm the authors used accuracy characteristics at the level of consumer navigation devices, which are significantly inferior in accuracy to modern air navigation systems; this indicates that the algorithm can also be used in inexpensive and compact user systems, as well as in mobile robots.
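The combining algorithm itself is not public, but the role Kalman filtering plays in it, smoothing a noisy offset between the camera image and the synthesized view, can be illustrated with a minimal scalar filter; the noise parameters and measurements below are invented.

```python
def kalman_1d(measurements, q=1e-3, r=0.1):
    """Scalar Kalman filter: q is the process-noise variance, r the
    measurement-noise variance; returns the filtered estimates."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p += q                     # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with the measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# noisy observations of a constant 2.0-pixel misalignment (made up)
zs = [2.4, 1.7, 2.2, 1.9, 2.1, 2.05]
est = kalman_1d(zs)
```

In a real combined vision system the filtered state would be a multi-dimensional pose correction rather than a single offset, but the predict/correct cycle is the same.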
Citations count: 1
Reference:
Talipov N.G., Katasev A.S. —
The Decision Support System for Distributing Tasks Related to Maintenance of the Register of Personal Data Processors Based on the Fuzzy-Production Model
// Cybernetics and programming.
– 2016. – № 6.
– P. 96 - 114.
DOI: 10.7256/2306-4196.2016.6.21271 URL: https://en.nbpublish.com/library_read_article.php?id=21271
Abstract:
The subject of the present article is the development and practical implementation of an intelligent decision support system that distributes tasks related to maintaining the register of personal data processors. The object of the research is the rational choice of task performers in the electronic document flow system of a territorial body of the Federal Supervision Agency for Information Technologies and Communications. The authors examine the tasks of defending the rights of personal data owners and the scheme for processing and distributing the tasks of keeping the registry of personal data processors, analyze the problem of manual task distribution, and set the goal of automating an efficient distribution of tasks between performers. To achieve this, the authors offer a fuzzy-production model developed especially for the purpose. The model allows rational distribution of tasks of different difficulty levels based on expert evaluation of the competence, working capacity and workload of performers. The authors also describe the decision support system and the algorithm for using it in task distribution. The scientific novelty of the approach lies in automating the distribution of tasks between performers by developing and practically implementing the fuzzy-production model, and in the original fuzzy-production rules that formalize expert knowledge on the rational selection of task performers with regard to their competence, working capacity and current workload.
Analysis of the adequacy of the fuzzy-production model and of the efficiency of the developed system has shown that it can considerably decrease the intellectual load on an expert during task distribution and increase the speed of decision making by 80.3 percent.
Citations count: 1
Reference:
Dushkin R., Andronov M.G. —
Hybrid design of artificial intelligent systems
// Cybernetics and programming.
– 2019. – № 4.
– P. 51 - 58.
DOI: 10.25136/2644-5522.2019.4.29809 URL: https://en.nbpublish.com/library_read_article.php?id=29809
Abstract:
The subject of the research is the architecture of artificial intelligent systems developed within a hybrid approach to artificial intelligence. The article offers the authors' vision of constructing artificial intelligent agents based on a hybrid approach using organismic principles. An artificial intelligent agent with a hybrid scheme is a "cybernetic machine" operating in a certain environment and functionally interacting with it. Of interest is the way the agent interacts and makes decisions: information from the environment passes through many sensors, is then cleaned and sensorially integrated, and is translated into symbolic form for decision making based on symbolic logic and a universal inference engine. The main research methodology is a systems-engineering approach to the analysis and construction of technical systems, with a functional approach as an additional method. The novelty of the study lies in applying the hybrid paradigm for constructing artificial intelligent systems together with the systems and functional approaches to the design of technical systems, which made it possible to generalize the available data on the interaction of intelligent agents with the environment and to identify patterns useful in the development of artificial intelligence systems. The main conclusion is that the hybrid paradigm makes it possible to obtain artificial intelligent agents that combine important advantages of the bottom-up and top-down artificial intelligence paradigms: the ability to learn and behave appropriately in an unknown environment, and the ability to explain the reasons for their decisions, respectively. This finding will advance research into explainable artificial intelligence.
Citations count: 1
Reference:
Pekunov V.V. —
Elements of XPath-like languages in problems of constructing semantic XML models of natural language texts
// Cybernetics and programming.
– 2020. – № 1.
– P. 29 - 41.
DOI: 10.25136/2644-5522.2020.1.32143 URL: https://en.nbpublish.com/library_read_article.php?id=32143
Abstract:
The subject of the research is the possibility of using XPath-like programming micro-languages in program generation systems of the PGEN++ class to select and complete XML models describing the plan for solving the original problem, from which the solver program is then generated. Such models are to be built from a natural language description of the problem, so elements of artificial intelligence technologies are involved. The XPath-like language works in the layer of regular-logical expressions (extracting elements of the primary XML document), performing primary processing of the data obtained in the grammatical parsing layer of the source text; XPath-like elements are also used to render the final XML models. Standard natural language parsing libraries are used, together with non-standard extensions of the XPath query language. For the first time the idea is proposed of extending the XPath query language to the level of an algorithmic language by introducing the minimum required number of syntactic elements. It is also proposed to use syntactic elements with an XPath-like structure as both generating and controlling weak constraints on the process of direct inference of the final semantic XML models.
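PGEN++ and its XPath extensions are not publicly available; the baseline the article extends, selecting nodes of an XML model with ordinary XPath-subset queries, can be shown with the Python standard library alone. The tiny "plan" document here is invented for illustration.

```python
import xml.etree.ElementTree as ET

# invented fragment of a solution-plan XML model
doc = """
<plan>
  <task id="t1"><op name="read"/></task>
  <task id="t2"><op name="sum"/><op name="print"/></task>
</plan>
"""
root = ET.fromstring(doc)

# ElementTree's findall() supports a useful subset of XPath:
# descendant search and attribute predicates
task_ids = [t.get("id") for t in root.findall(".//task")]
t2_ops = [o.get("name") for o in root.findall(".//task[@id='t2']/op")]
```

Extending such a query language with loops and assignments, as the article proposes, would turn these declarative selections into a small algorithmic language over the XML model.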
Citations count: 1
Reference:
Gorbaneva O.I., Murzin A.D., Ougolnitsky G.A. —
Modeling the coordination of general and private interests of the development of economic entities
// Cybernetics and programming.
– 2020. – № 1.
– P. 1 - 8.
DOI: 10.25136/2644-5522.2020.1.33213 URL: https://en.nbpublish.com/library_read_article.php?id=33213
Abstract:
The article presents a dynamic socio-ecological-economic model of the synergetic development of individual economic entities that makes it possible to reconcile their general and private interests. Maximization of specific (per capita) consumption is proposed as the target parameter: with respect to private interests it is maximized by each subject, and with respect to general interests by the governing center. The model allows each entity to use its available resources both for its own development and for the joint development goals of other entities. The presented model can be used to form an economic development program for a separate territory, an industrial cluster, a municipality, a region, or a macro-region. Modeling yields control parameters for devising an economic development strategy. A model of territorial development is built from two components: a dynamic model combining general and private interests and the neoclassical Solow growth model. The model is proposed for regional systems in which the elements are the regions of a single macro-region; the identification of the model parameters for the macro-region is described. The modification of the Solow model takes into account: a) threshold limits on the volume of production in a region and on emissions and discharges of pollutants from the regions; b) interaction of subjects in solving joint development problems through cross-investment for a synergistic effect; c) the influence of technical progress both on the volume of output and on labor productivity.
As for the model combining general and private interests, the objective function of each agent (region) is taken as the sum of two terms: maximization of the region's own specific consumption represents its private interest, while the specific consumption of the entire macro-region represents the general interest.
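The interplay of private development and cross-investment described above can be sketched as a toy discrete-time model. This is a minimal illustration under assumptions of my own, not the authors' specification: the Cobb-Douglas production function, the parameter names, and the cross-investment matrix are all chosen only for the sketch.

```python
# Illustrative discrete-time sketch: each region accumulates capital
# Solow-style, with an optional production threshold and with
# cross-investment between regions (the source of the synergy).
# All functional forms and parameters are assumptions for illustration.

def step(k, s, cross, alpha=0.3, delta=0.05, a=1.0, y_cap=None):
    """One period for n regions.

    k     : per-capita capital stock of each region
    s     : savings rate of each region
    cross : cross[i][j] = share of region i's savings invested in region j
    y_cap : optional threshold limit on each region's output
    """
    n = len(k)
    y = [a * ki ** alpha for ki in k]            # Cobb-Douglas output
    if y_cap is not None:
        y = [min(yi, y_cap) for yi in y]         # threshold limit
    inv = [0.0] * n
    for i in range(n):
        for j in range(n):
            inv[j] += cross[i][j] * s[i] * y[i]  # cross-investment flows
    return [ki * (1 - delta) + v for ki, v in zip(k, inv)]

# Two regions: each keeps 80% of savings at home, invests 20% abroad.
k = [1.0, 4.0]
cross = [[0.8, 0.2], [0.2, 0.8]]
for _ in range(200):
    k = step(k, s=[0.25, 0.25], cross=cross)
```

With identical parameters and a symmetric cross-investment matrix, both regions converge to the same steady-state capital stock; asymmetric matrices let one experiment with the control parameters the abstract mentions.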
Citations count: 1
Reference:
Selishchev I.A., Oleinikova S.A. —
Design of the database structure for software optimization of operation of the stochastic multiphase systems
// Cybernetics and programming.
– 2020. – № 2.
– P. 42 - 55.
DOI: 10.25136/2644-5522.2020.2.34099 URL: https://en.nbpublish.com/library_read_article.php?id=34099
Abstract:
The object of this research is service systems that receive at their input a stream of requests, each representing a set of mutually dependent operations with "finish-start" dependencies. The duration of each operation is a random variable, and its execution requires one or several types of resources. It is assumed that each request has a processing deadline. The goal of this research is to develop a database structure for storing information on incoming projects, their operations, mutual dependencies, and the resources and specialists involved. The logical structure of the database was designed using the entity-relationship methodology, which defines data in the context of its relationships with other data. An analysis of the specifics of the object of research revealed a set of requirements imposed on the database. Based on these requirements, and taking into account the normalization of relations used in relational database theory, the author designed a structure that is universal from the perspective of its application, supports the analysis of the scheduling process, and covers the full range of peculiarities of the object of research. Such a database structure can be used, without major modification, in any field that allows a project to be decomposed into multiple separate interdependent tasks. The article provides examples of using the database in information systems for the construction sector and for the development of IT projects.
Citations count: 1
Reference:
Davydenko I.T. —
Semantic model of the knowledge base of intellectual help system
// Cybernetics and programming.
– 2013. – № 2.
– P. 1 - 11.
DOI: 10.7256/2306-4196.2013.2.8307 URL: https://en.nbpublish.com/library_read_article.php?id=8307
Abstract:
The work is devoted to the basic principles underlying the design of a complex of semantic models of knowledge bases for intelligent information systems. The knowledge base is one of the key components of intelligent systems for various purposes. When developing a knowledge base, it is essential to ensure not only the ability to store knowledge and navigate through it, but also the ability of a distributed team of developers to create and modify knowledge. The paper proposes a technology for component design of knowledge bases based on unified semantic networks with a basic set-theoretic interpretation. This technology comprises a set of models, tools, and methods for the design of knowledge bases. The semantic model of an intelligent system's knowledge base is a formal interpretation of the semantic space known to the intelligent system at the current time. This model is part of the technology for component design of knowledge bases of intelligent systems based on semantic networks.
Citations count: 1
Reference:
Mironov S.V. —
Game-theoretic approach to testing compilers for the presence of mechanisms introducing undeclared capabilities
// Cybernetics and programming.
– 2017. – № 1.
– P. 119 - 127.
DOI: 10.7256/2306-4196.2017.1.20351 URL: https://en.nbpublish.com/library_read_article.php?id=20351
Abstract:
The subject of the research is the mathematical support of software certification procedures for information security requirements in view of time, regulatory, and design constraints. An essential requirement of such certification is the availability of the source code of the software under test, which is quite critical for developers as a potential channel of intellectual property leakage. To overcome this drawback, a technique is proposed for testing compilers for the absence of mechanisms that introduce undeclared capabilities at the software compilation stage. The research methodology combines methods of software engineering, possibility theory, object-oriented programming, systems analysis, and reliability theory. The main conclusion of the study is that by forming an optimal set of tests using the mathematical apparatus of game theory, compiling them, and comparing the control flow and data flow graphs obtained from the compiler output with those built from the source texts of the tests, one can conclude whether the compiler under test contains mechanisms that introduce undeclared capabilities into the compiled software.
Citations count: 1
Reference:
Kuchinskaya-Parovaya I.I. —
Component design of neural networks to process knowledge bases
// Cybernetics and programming.
– 2013. – № 1.
– P. 9 - 15.
DOI: 10.7256/2306-4196.2013.1.8308 URL: https://en.nbpublish.com/library_read_article.php?id=8308
Abstract:
The article describes the main steps of a methodology for component design of neural networks to process knowledge bases represented by semantic networks. The technique is based on a unified neural network model and a component-based approach to working with neural networks. An important element of component design is a library of compatible neural network components. One possible solution to the problems of designing and using neural networks is a design technique based on the unified neural network model and the component approach; the technique relies on this library of compatible components. It is concluded that the proposed component design methodology will ease the design and development of neural networks, lower the qualification requirements for the developer (the end user), and solve the problem of integrating neural networks with other methods of representing and processing information in the development of intelligent systems.
Citations count: 1
Reference:
Mayer R.V. —
Computational experiments in studying of wave processes in linear and nonlinear media
// Cybernetics and programming.
– 2014. – № 4.
– P. 57 - 65.
DOI: 10.7256/2306-4196.2014.4.12683 URL: https://en.nbpublish.com/library_read_article.php?id=12683
Abstract:
The course of physics in colleges and universities covers various wave processes: reflection and transmission of a pulse at the interface between two media, interference, wave propagation in a dispersive medium, and the formation and interaction of solitons. It is important to combine theoretical and practical approaches to studying these phenomena with computer models that create a visual image of the phenomenon and allow its behavior to be analyzed under different conditions. The subjects of the study are simple computer models and computational experiments that help demonstrate wave processes in one-dimensional linear and nonlinear media. The experiments require mathematical and computer modeling: building a mathematical model and creating software that simulates the studied phenomenon based on the numerical solution of the corresponding system of equations. The novelty of the work is that the author presents three simple computer programs written in Pascal which simulate pulse propagation in a one-dimensional medium, its reflection from the boundary between two media and passage into the second medium, wave propagation in a dispersive medium, and the formation and interaction of solitons. The analysis of the results of computer modeling shows that such computational experiments, based on simulating a one-dimensional medium by a system of coupled spring or simple pendulums or on solving the sine-Gordon equation, really do allow wave processes to be studied at a higher level and foster interest in physics and information technology.
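A computational experiment of the kind the abstract describes can be reproduced in a few lines of numerical code. The sketch below (in Python rather than the author's Pascal, and not a reproduction of his programs) integrates the sine-Gordon equation u_tt = u_xx − sin u with a leapfrog finite-difference scheme and launches a single kink soliton; the grid sizes and kink velocity are illustrative choices.

```python
import numpy as np

def sine_gordon_kink(v=0.5, L=40.0, n=400, t_end=10.0):
    """Leapfrog finite-difference solver for u_tt = u_xx - sin(u),
    started from a single kink soliton moving at velocity v (|v| < 1)."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    dt = 0.5 * dx                        # CFL-stable time step
    g = 1.0 / np.sqrt(1 - v ** 2)        # Lorentz factor of the kink
    kink = lambda t: 4 * np.arctan(np.exp(g * (x - v * t)))
    u_prev, u = kink(-dt), kink(0.0)     # two initial time levels
    steps = int(t_end / dt)
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        u_next = 2 * u - u_prev + dt ** 2 * (lap - np.sin(u))
        u_prev, u = u, u_next
    return x, u, steps * dt

x, u, t = sine_gordon_kink()
center = x[np.argmin(np.abs(u - np.pi))]  # kink center: where u crosses pi
```

After integration the kink center sits near x = v·t, so the soliton's shape-preserving motion can be checked directly, which is exactly the kind of observation such classroom experiments aim at.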
Citations count: 1
Reference:
Oleinikova S.A. —
Approximation of the distribution law of the sum of beta distributed random variables
// Cybernetics and programming.
– 2015. – № 6.
– P. 35 - 54.
DOI: 10.7256/2306-4196.2015.6.17225 URL: https://en.nbpublish.com/library_read_article.php?id=17225
Abstract:
The subject of the research in this paper is the probability density function (PDF) of a random variable that is the sum of a finite number of beta-distributed values. The beta distribution is widespread in probability theory and mathematical statistics, because it can describe a sufficiently large number of random events whenever the values of the corresponding continuous random variable are concentrated in a certain range. Since the required sum of beta values cannot be expressed by any of the known laws, the problem arises of estimating its density. The aim is to find an approximation for the PDF of the sum of beta values that has the least error. To achieve this goal, a computational experiment was conducted in which, for a given number of beta values, the numerical value of the PDF was compared with the proposed approximation of the desired density. The normal and beta distributions were used as approximations. The experimental analysis yielded results indicating that it is appropriate to approximate the desired law with a beta distribution. As one field of application of the results, the project management problem with random durations of works is considered: there, the key issue is estimating the project implementation time, which, owing to the specifics of the subject area, can be described by the sum of beta values.
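The experiment the abstract describes can be sketched as a short Monte Carlo comparison: simulate the sum of m beta values, then fit a beta approximation by matching the first two moments. The moment-matching step is a standard technique assumed here for illustration; the article's own fitting procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum of m independent Beta(a, b) variables, simulated.
m, a, b = 5, 2.0, 3.0
s = rng.beta(a, b, size=(100_000, m)).sum(axis=1)

# Moment-matched beta approximation: rescale S/m back to [0, 1] and
# choose (a*, b*) so that the first two moments agree.
z = s / m
mu, var = z.mean(), z.var()
k = mu * (1 - mu) / var - 1
a_star, b_star = mu * k, (1 - mu) * k

# Samples from the fitted approximation, for comparison with z.
z_approx = rng.beta(a_star, b_star, size=100_000)
```

Comparing histograms (or higher moments) of `z` and `z_approx` against a normal fit with the same mean and variance reproduces, in miniature, the comparison that leads to the article's conclusion in favor of the beta approximation.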
Citations count: 1
Reference:
Korobeinikov A.G., Aleksanin S.A. —
Methods of automated image processing in solving problems of magnetic defectoscopy
// Cybernetics and programming.
– 2015. – № 4.
– P. 49 - 61.
DOI: 10.7256/2306-4196.2015.4.16320 URL: https://en.nbpublish.com/library_read_article.php?id=16320
Abstract:
The subject of study in this paper is an automated method for selecting procedures for processing images gathered for magnetic defectoscopy. Methods based on the analysis of magnetic stray fields near defects after magnetization are used to detect various defects, such as cracks, in the surface layers of steel parts. Where there is a discontinuity, the magnetic flux changes; this effect underlies almost all existing methods of magnetic defectoscopy. One of the best-known methods is the magnetic particle method: the surface of the magnetized part is covered with magnetic powder (dry method) or magnetic slurry (wet method). When fluorescent powders and suspensions are used, defects are significantly more visible in the images of the studied parts, which makes it possible to automate image processing. The paper presents an automated procedure for selecting image processing methods. The authors give an example of processing an image of a steel part to detect defects using the luminous lines that appeared after applying the magnetic slurry. The study uses methods of image processing theory, mainly edge extraction methods for defining the boundaries of objects and morphological image processing. The main result of the automated method is the ability to obtain expert information from which a conclusion can be drawn about the presence of defects in the tested product. In the example given in the article, the authors show that the lines are continuous and have no sharp changes of direction, so a conclusion is drawn about the absence of discontinuities (defects) in the product. In addition, the authors point out that the binary image can be inverted at the request of the researcher.
Citations count: 1
Reference:
Fatkhullin R.R. —
Approach to integrated assessment of the quality of educational institutions on the basis of an adapted methodology for assessing the effectiveness of bodies of executive power of subjects of the Russian Federation
// Cybernetics and programming.
– 2015. – № 5.
– P. 181 - 192.
DOI: 10.7256/2306-4196.2015.5.16941 URL: https://en.nbpublish.com/library_read_article.php?id=16941
Abstract:
One of the leading trends in the development of education worldwide is the creation of comprehensive systems for assessing the quality of education. In assessing the quality of education we are faced with a huge number of different indicators, each of which can consist of a variety of other indicators. Accordingly, the task of assessing the quality of education is, in its mathematical formulation, multiobjective, and with the rapid development of educational standards the number of such problems is growing. Approaches to determining the quality of education, with their various aspects, indicators, and indices, are being actively developed. A comprehensive assessment of the effectiveness of educational institutions is a multi-criteria task for which criteria and an evaluation procedure must be established. For a comprehensive evaluation of educational institutions, the article considers an approach based on an adapted version of the methodology for assessing the effectiveness of bodies of executive power approved by the Russian Government. Implementing these models can improve such important qualitative properties of a comprehensive assessment of the effectiveness of educational institutions as objectivity, scalability, simplicity, and ease of use. The results can be widely used in comprehensive assessment of the quality of education in a region across different levels and types of educational institutions.
Citations count: 1
Reference:
Korobeinikov A.G., Ismagilov V.S., Kopytenko Y.A., Petrishchev M.S. —
The study of the geoelectric structure of the crust on the basis of the analysis of the phase velocities of ultra-low-frequency geomagnetic variations
// Cybernetics and programming.
– 2013. – № 2.
– P. 36 - 43.
DOI: 10.7256/2306-4196.2013.2.8736 URL: https://en.nbpublish.com/library_read_article.php?id=8736
Abstract:
This article presents the results of experimental studies of the geoelectric structure of the Earth's crust carried out in Karelia. For the research, the authors installed five high-sensitivity three-component magnetovariational stations GI-MTS-1, located 5-10 km apart. The data recording frequency was 50 Hz. To analyze changes in apparent resistivity with depth at all five locations, the authors processed the input data by two methods: magnetotelluric and phase-gradient sounding. For each of the magnetic stations, the authors determined the apparent magnetotelluric resistivity of the Earth's crust and the changes in resistivity with depth. Preliminary processing results revealed a number of conductive layers in the Earth's crust at depths of 2-3 and 15-20 km, probably related to shungite-bearing horizons. Comparison of the magnetotelluric and phase-gradient sounding interpretation methods showed good agreement between them.
Citations count: 1
Reference:
Mukhametzyanov I.Z. —
Identification of the Structure in Computer Simulations of Clusters in Oil Disperse Systems
// Cybernetics and programming.
– 2016. – № 3.
– P. 66 - 75.
DOI: 10.7256/2306-4196.2016.3.19244 URL: https://en.nbpublish.com/library_read_article.php?id=19244
Abstract:
The subject of the research is the identification of the cluster system as a whole during computer simulation of the cluster-cluster aggregation process. The object of the research is a computer simulation model of the formation/destruction of macromolecular clusters of heavy oil and residual oil products. The researcher examines such aspects of the topic as the development of meaningful indicators for identifying the cluster system in the cluster-cluster aggregation simulation model. The author analyzes two kinds of indicators: the average statistical dispersion, which characterizes the homogeneity of clusters, and the entropy of the cluster system, which characterizes its order. The research method is based on a numerical experiment over various managing parameters of the model, followed by statistical analysis of the indicators under review. The quality of the integral indicators for identifying the cluster system is evaluated using the minimum coefficient-of-variation criterion and by testing statistical hypotheses about significant differences between the indicators when the managing parameters of the model change. Based on the results of the numerical experiment, a so-called 'divergence ratio' was identified as the best metric for the statistical dispersion indicator according to the minimum criterion. The statistical dispersion indicator and the cluster system entropy indicator make it possible to perform a qualitative analysis of the macroscopic structure of oil systems by simulating the growth of clusters while varying the physical and chemical properties of the disperse system as well as the technological parameters of the industrial process.
For numerical experiments simulating thermal cracking and the subsequent thermal condensation of high-boiling fractions of petroleum hydrocarbons, the author identified the technological parameters that trigger the growth of small clusters with dense structures and, conversely, of large clusters with less dense structures. The patterns described by the author are important for the further use of cracking residues as raw material for manufacturing petroleum coke with a specified structure in the carbon industry.
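The two kinds of indicators mentioned above, dispersion of cluster sizes and entropy of the cluster system, can be illustrated with a short sketch. The formulas below (coefficient of variation and normalized Shannon entropy of the cluster-size distribution) are common choices assumed only for illustration; the author's exact 'divergence ratio' is not reproduced here.

```python
import math
from collections import Counter

def cluster_entropy(cluster_sizes):
    """Shannon entropy of the cluster-size distribution, normalized to [0, 1].

    Treats the fraction of clusters of each size as a probability and
    divides by log(number of distinct sizes): 0 means all clusters share
    one size (perfectly homogeneous), 1 means sizes are uniformly spread.
    """
    counts = Counter(cluster_sizes)
    n = len(cluster_sizes)
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    h_max = math.log(len(counts)) if len(counts) > 1 else 1.0
    return h / h_max

def dispersion_ratio(cluster_sizes):
    """Coefficient of variation of cluster sizes: std / mean."""
    n = len(cluster_sizes)
    mean = sum(cluster_sizes) / n
    var = sum((s - mean) ** 2 for s in cluster_sizes) / n
    return math.sqrt(var) / mean
```

Tracking both quantities over the cluster-size lists produced by an aggregation run gives exactly the kind of homogeneity/order curves the abstract discusses.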
Citations count: 1
Reference:
Atadjanov J.A. —
Models of Morphological Analysis of Uzbek Words
// Cybernetics and programming.
– 2016. – № 6.
– P. 70 - 73.
DOI: 10.7256/2306-4196.2016.6.20945 URL: https://en.nbpublish.com/library_read_article.php?id=20945
Abstract:
The subject of the research is models and algorithms for the morphological analysis of texts, the categories of suffixes, and the rules for using them in the Uzbek language. The object of the research is the process of identifying roots in Uzbek sentences according to the morphological rules of the Uzbek language without using additional dictionaries. The developed methods and models are oriented toward the specific features of the Uzbek language, its structure, and the peculiarities of its word forms, for subsequent comparison, standardization, and search for analogous texts in databases. The research methods used by the author include morphological analysis of texts, abstract programming, the finite state machine method, flow charts, and methods of mathematical modeling. In order to create an anti-plagiarism program, it is important to study the specific characteristics of the language of a text. The article presents an approach to the morphological analysis of Uzbek words based on analyzing words with the finite state machine (FSM) method and on identifying the root of a word according to the word order of the Uzbek language.
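Dictionary-free root extraction by suffix stripping, in the spirit of the FSM approach described above, can be sketched as follows. The suffix list is a small illustrative subset of Uzbek plural, case, and possessive suffixes, not the author's rule set, and real Uzbek morphology involves ordering constraints and alternations this sketch ignores.

```python
# A minimal sketch of dictionary-free root extraction by iterative
# suffix stripping. The suffix inventory below is a small illustrative
# subset, assumed for the example only.
UZBEK_SUFFIXES = [
    "lar",   # plural
    "ning",  # genitive case
    "ni",    # accusative case
    "da",    # locative case
    "dan",   # ablative case
    "ga",    # dative case
    "im",    # 1st person possessive
    "ing",   # 2nd person possessive
]

def strip_root(word, min_root=2):
    """Repeatedly strip the longest matching suffix until none fits."""
    # Try longer suffixes first, so "dan" is tested before "da".
    suffixes = sorted(UZBEK_SUFFIXES, key=len, reverse=True)
    changed = True
    while changed:
        changed = False
        for suf in suffixes:
            if word.endswith(suf) and len(word) - len(suf) >= min_root:
                word = word[: -len(suf)]
                changed = True
                break
    return word

# e.g. "kitoblarda" ("in the books") reduces to the root "kitob" ("book")
```

Each pass of the loop corresponds to one transition of the finite state machine consuming a suffix from the end of the word; the `min_root` guard plays the role of the FSM's accepting condition.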
Citations count: 1
Reference:
Surma I.V. —
The modern information society and topical issues of knowledge
// Cybernetics and programming.
– 2015. – № 3.
– P. 30 - 46.
DOI: 10.7256/2306-4196.2015.3.15001 URL: https://en.nbpublish.com/library_read_article.php?id=15001
Abstract:
The article proceeds from the fact that at present knowledge is, on the one hand, information used for decision making and, on the other, an asset directly involved in the production of tangible or intangible benefits, thus becoming a commodity. The author highlights that in this way a traditionally managed organization, for which the most important resource is capital, turns into a knowledge-oriented organization, so the main factor in a country's competitiveness today is the ability to acquire and use knowledge. The article also reveals a fundamental difference between the information society and the knowledge-based society. The author points out that the shift from the first type of society to the second requires a very responsible approach to the creation of knowledge and its dissemination. The article uses the most comprehensive and integrated approach to measuring "the economy based on knowledge", developed by UN experts; the approach rests on four basic elements, including a dynamic innovation infrastructure and innovation systems. A key conclusion of the article is that the approach to knowledge as an asset is the basis of the modern knowledge-based economy. Asymmetry of knowledge between organizations is a key factor of competitive advantage. The processes of integration and the significant pace of technological change influence the characteristics of globalization so strongly that a global knowledge economy emerges. The author notes that changes take place not only in ways of managing, but also in culture and public relations, including the corporate culture of the organization and human resource management.
Citations count: 1
Reference:
Galanina N.A., Ivanova N.N. —
Analysis of the effectiveness of synthesis of computing devices for non-positional digital signal processing
// Cybernetics and programming.
– 2015. – № 3.
– P. 1 - 6.
DOI: 10.7256/2306-4196.2015.3.15354 URL: https://en.nbpublish.com/library_read_article.php?id=15354
Abstract:
The article investigates methods, algorithms, and computing devices for encoding, digital filtering, and spectral analysis of signals. The subject of the study is methods of synthesis and analysis of devices for digital filtering and spectral analysis of signals in the residue number system. The article presents an efficiency analysis of the synthesis of computing devices for non-positional digital signal processing in the residue number system. The authors report results of a comparative performance evaluation of computing devices for digital filtering and spectral analysis, and propose a method for increasing the speed of digital devices in the residue number system. The research is based on the apparatus of mathematical analysis, mathematical logic, the theory of algorithms, the theory of algebraic integers, automata theory, the theory of the discrete Fourier transform and its fast variants, probability theory, mathematical methods, and simulation. The study presents ways of implementing digital signal processing algorithms in the residue number system on modern signal processors, taking into account the peculiarities of that system. Implementing digital devices on digital signal processors intended for data processing in non-positional number systems, including the residue number system, is a promising line of development for digital signal processing devices.
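The core idea of the residue number system can be illustrated briefly: an integer is represented by its residues modulo pairwise-coprime moduli, arithmetic acts on each residue digit independently without carries (the source of the speedup the abstract mentions), and decoding uses the Chinese Remainder Theorem. The moduli below are an illustrative choice, not taken from the article.

```python
from math import prod

# Pairwise-coprime moduli; their product bounds the dynamic range.
MODULI = (3, 5, 7)  # representable range: 0 .. 104

def to_rns(x, moduli=MODULI):
    """Encode an integer as its residues: a non-positional representation."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Decode via the Chinese Remainder Theorem."""
    m_total = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        m_i = m_total // m
        # Modular inverse of m_i modulo m (three-argument pow, Python 3.8+).
        x += r * m_i * pow(m_i, -1, m)
    return x % m_total

def rns_mul(a, b, moduli=MODULI):
    """Multiplication digit by digit, with no carries between digits."""
    return tuple((x * y) % m for x, y, m in zip(a, b, moduli))
```

For example, 6 × 9 computed residue-wise and decoded gives 54, as long as the result stays within the dynamic range of the chosen moduli.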
Citations count: 1
Reference:
Raeckiy A., Shlyanin S., Ermakova L. —
The development of "Portfolio SibGIU" plug-in for “Moodle” learning management system
// Cybernetics and programming.
– 2016. – № 2.
– P. 52 - 61.
DOI: 10.7256/2306-4196.2016.2.18016 URL: https://en.nbpublish.com/library_read_article.php?id=18016
Abstract:
The object of the research is the electronic portfolio of a student; the subject of the research is the development of an information system for forming an electronic portfolio characterizing a student's individual achievements in six areas of activity: "Educational activities in the primary education program"; "Research activities"; "Other educational achievements"; "Social activity"; "Cultural and creative activity"; "Sports activities". Efficient operation of the information system required differentiating access permissions for different user groups: students, portfolio moderators, and administrators. The information system is implemented as a separate module, a plug-in for the Moodle learning management system. In creating the "Portfolio SibGIU" information system, the authors used the deduction method, in which a conclusion about the common set of studied attributes is drawn from a variety of particular signs; they also analyzed the existing "Exabis EPortfolio" plug-in. The main result is the development and implementation of the "Portfolio SibGIU" software system, which is currently in use as part of the electronic educational environment of Siberian State Industrial University. An important feature of the system is the dynamic formation of the "Educational activities in the primary education program" category of the portfolio by synchronizing the student's works (files) uploaded to Moodle, the results of tests passed in Moodle, and the reviews and final grades given to the student by a teacher on completed assignments in a Moodle e-course. Use of the information system has shown that working with it poses no difficulties for either students or moderators. A student can upload documents confirming personal achievements in various fields into the portfolio; every student action is reviewed by a moderator, which improves the quality of the content and helps avoid errors in the formation of the portfolio.
The "Portfolio SibGIU" information system meets the requirements of the FGOS 3+ standards and provides the accumulation, classification, and registration of a set of electronic documents describing the student's individual achievements in various fields of activity.
Citations count: 1
Reference:
Kolomoitcev V.S. —
Comparative analysis of approaches to the organization of secure connection of the corporate network nodes to public network
// Cybernetics and programming.
– 2015. – № 2.
– P. 46 - 58.
DOI: 10.7256/2306-4196.2015.2.14349 URL: https://en.nbpublish.com/library_read_article.php?id=14349
Abstract:
The purpose of the study is to increase the protection of nodes when they access resources of an external network. The objects of the study are two schemes of secure access from corporate network nodes to information in the external network: "Direct connection" and "Connecting node". The schemes are studied in terms of improving the security of the terminal node of the corporate network, the convenience and quality of its access to external network resources, and the complexity of implementing each scheme. In addition, the paper considers the possibility of protecting corporate network nodes from DDoS attacks. The research is based on the method of comparative analysis, which reveals the advantages and disadvantages of each scheme. Based on the results, it can be concluded that the "Direct connection" scheme should be used in the following cases: first, when it is impossible to make significant changes to the existing network architecture; second, when the organization has limited financial resources; and third, when it is necessary to work with external network resources in real time. The "Connecting node" scheme, by contrast, requires a radical rebuilding of the network architecture and significant financial costs, but protects an organization from threats from the external network to a much greater extent than the "Direct connection" scheme.
Citations count: 1
Reference:
Pesterev E.V., Klyushin Y.G. —
Decision-Making Simulation Based on Multidimensional Data Analysis
// Cybernetics and programming.
– 2016. – № 3.
– P. 54 - 65.
DOI: 10.7256/2306-4196.2016.3.18956 URL: https://en.nbpublish.com/library_read_article.php?id=18956
Abstract:
Methods of multidimensional data processing in decision-making are an essential part of business-process analysis. In this research the authors analyze multidimensional operations when numerous alternatives are presented. Thus, the subject of the research is the decision-making process based on the analysis of multidimensional data obtained from system operation statistics. The authors approach statistical data processing from the point of view of a general framework, in particular within pattern recognition theory and chemometrics, and offer particular methods for generating and processing statistical data. These methods involve building a multidimensional data structure and then processing its production function (certainty function). The original features constituting the sampled data are analyzed from the point of view of the dominating-motivation principle, which is expressed mathematically as the mutual influence of these features on each other and on the decision to be made. To verify the methods, the authors performed a number of numerical experiments, both comparing the developed algorithms reflecting the authors' approach with the Bayesian approach and comparing different production functions. One set of experiments aims to find the arithmetic mean of several generated random numbers. As a result, the authors established the dependence of the number of correct responses on the constitutive parameters (the number of objects, the number of features, the sample size, and the number of possible values of each feature). The results demonstrate better classification of objects with the authors' methods than with classification based on the probabilistic approach.
The results can be used to solve a wide range of tasks that are not directly related to decision making but deal with multidimensional data analysis.
Citations count: 1
Reference:
Golik F.V. —
Pearson distributions of sums of independent identically distributed random variables.
// Cybernetics and programming.
– 2017. – № 2.
– P. 17 - 41.
DOI: 10.7256/2306-4196.2017.2.22583 URL: https://en.nbpublish.com/library_read_article.php?id=22583
Abstract:
The article is devoted to working out the constructive method of approximation the sum of independent random variables with the same distribution by Pearson curves. The summation theory was and still is one of the key parts of the theory of probability. The limiting theorems are proven within this theory, and they allow one to understand which frequencies may be used for the approximation for the sum so random values with large m. At the same time the approximation error is evaluated by the admissible error. However, in most practical cases the number of the summed values is not large, so the admissible error evaluation may not be sufficiently precise. The purpose of the study is to develop a constructive method for the approximation of the frequency function for the spread of the final sum of the independent random values with the same frequency. The Pearson curves are then used as approximative frequencies. Such an approximation lacks the defects related to the application of limiting theorems. It is applicable for any number of summed accidental frequencies m>1. The calculated ratios for the initial moments of the final sum of independent random variables are obtained. It is shown that the parameters of the Pearson curves for the sum m of random variables are related by simple ratios with the corresponding parameters of the summed value. The solution used in order to achieve the goal is based upon the moments method. Thå author offers a recursion formula for calculating the starting moments of for the sum of independent random values, allowing to find the central moments of the sum, as well as the parameters for the Pearson curves. It is proven that there's a dependency between the distance from the point of The exact expression for the distance from the point, corresponding to the distribution of the sum of the random variables in the coordinate system of Pearson parameters to the point (0, 3), corresponding to the normal distribution is found. 
By the value of this distance, one can indirectly assess the applicability of the normal approximation. The author studies the possibility of approximating Pearson curves by the normal distribution. An approximate formula for estimating the error of approximating the sum of random variables by the normal distribution is given. Examples are provided of approximating the distributions of sums of random variables that are often met in statistical radio engineering tasks. The reference materials include complete formulae for the key types of Pearson curves. All the obtained results are applicable to any random variables having finite first four initial moments. The correctness of the conclusions is confirmed by numerical calculations performed in MathCad.
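The moment recurrence described in the abstract can be illustrated with a short sketch. The function names, the Euclidean form of the distance in the (beta1, beta2) plane, and the restriction to moments up to fourth order are assumptions for illustration, not the author's exact formulas.

```python
import math

def sum_initial_moments(mu_x, m):
    """Initial moments [E S, E S^2, E S^3, E S^4] of the sum S of m i.i.d.
    variables, via the recurrence E S_j^k = sum_i C(k,i) E S_{j-1}^i E X^(k-i)."""
    s = [1.0] + list(mu_x)      # moments of S_1 = X; s[0] = E X^0 = 1
    x = s[:]
    for _ in range(m - 1):      # S_j = S_{j-1} + X, X independent of S_{j-1}
        s = [sum(math.comb(k, i) * s[i] * x[k - i] for i in range(k + 1))
             for k in range(5)]
    return s[1:]

def pearson_coordinates(mu_x, m):
    """Pearson plane coordinates (beta1, beta2) of the distribution of the sum."""
    a1, a2, a3, a4 = sum_initial_moments(mu_x, m)
    m2 = a2 - a1 ** 2                               # central moments of the sum
    m3 = a3 - 3 * a1 * a2 + 2 * a1 ** 3
    m4 = a4 - 4 * a1 * a3 + 6 * a1 ** 2 * a2 - 3 * a1 ** 4
    return m3 ** 2 / m2 ** 3, m4 / m2 ** 2          # (beta1, beta2)

def distance_to_normal(mu_x, m):
    """Distance to the normal-law point (0, 3); the Euclidean form in the
    (beta1, beta2) plane is an assumption, not the author's exact expression."""
    b1, b2 = pearson_coordinates(mu_x, m)
    return math.hypot(b1, b2 - 3.0)
```

For exponential summands (initial moments 1, 2, 6, 24) the sum of m variables is Gamma-distributed, and the sketch reproduces the known values beta1 = 4/m and beta2 = 3 + 6/m.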
Citations count: 1
Reference:
Abramova O.F. —
Visualization of the web-system user behavior pattern
// Cybernetics and programming.
– 2019. – ¹ 3.
– P. 43 - 52.
DOI: 10.25136/2644-5522.2019.3.23017 URL: https://en.nbpublish.com/library_read_article.php?id=23017
Read the article
Abstract:
Usability evaluation of web information systems is relevant and useful both for developers and for customers. One of the most important ways of evaluating the performance of a web information system is the assessment of so-called behavioral factors. Almost all currently used methods not only involve significant financial and time investment, but also depend on the number and skill level of participants in specially recruited focus groups. This is a rather laborious process that does not always yield the expected result with a clear evaluation and recommendations for improving the conversion of the web system. Therefore, maximally visualizing the assessment results, for example by visualizing the behavior pattern of a web-system user, significantly increases the informativeness and reduces the complexity of evaluating the system under study. Displaying the list of visited pages and the pages from which the user came to the website makes it possible to assess several important indicators, such as the depth of view and the referral source, and to model and evaluate the pattern of user behavior in the system. Moreover, the clarity and simplicity of the resulting scheme allow it to be used both by professionals and by owners of web-based systems. The article describes methods for identifying usability problems of web systems, as well as the operation of a program that identifies and visualizes the behavior pattern of the user based on a graph.
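The described visualization input, the list of visited pages per user, can be sketched as a transition graph plus a depth-of-view estimate; the session format and function names are assumptions for illustration, not the article's program.

```python
from collections import defaultdict

def behavior_graph(sessions):
    """Build a page-transition graph (edge -> visit count) from user
    sessions, each given as an ordered list of visited pages."""
    edges = defaultdict(int)
    for pages in sessions:
        for src, dst in zip(pages, pages[1:]):
            edges[(src, dst)] += 1
    return dict(edges)

def view_depth(sessions):
    """Average number of pages viewed per session (the 'depth of view')."""
    return sum(len(pages) for pages in sessions) / len(sessions)
```

The edge counts can then be rendered as a weighted graph, which is the kind of behavior-pattern scheme the abstract describes.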
Citations count: 1
Reference:
Polevshchikova Y.A., Akbarov O.M. —
Forest cover assessment of the Volzhsk forestry of Mari El Republic using methods of remote sensing
// Cybernetics and programming.
– 2013. – ¹ 4.
– P. 59 - 65.
DOI: 10.7256/2306-4196.2013.4.9333 URL: https://en.nbpublish.com/library_read_article.php?id=9333
Read the article
Abstract:
The article provides an overview of existing international projects for vegetation assessment using remote sensing data. Particular attention is paid to the development of a methodology for assessing forest cover disturbance using satellite monitoring and GIS technologies in modern software packages. One of the major projects bringing together experts, scientists and researchers from many countries in the field of satellite monitoring is the NELDA project (Northern Eurasia Land Dynamics Analysis). The aim of this study is to evaluate the forest vegetation of the Volzhsk forestry of the Mari El Republic based on the analysis of multi-temporal medium-resolution multispectral Landsat satellite images using GIS technology. As a result of the study, the authors present a review of existing GIS projects for thematic mapping of forest vegetation based on satellite imagery. The authors developed and applied a methodology for monitoring disturbed forest lands detected using remote sensing data in a GIS environment.
Citations count: 1
Reference:
Sokol'nikov A.M. —
Mobile Learning: problems and perspectives
// Cybernetics and programming.
– 2013. – ¹ 6.
– P. 28 - 34.
DOI: 10.7256/2306-4196.2013.6.9668 URL: https://en.nbpublish.com/library_read_article.php?id=9668
Read the article
Abstract:
The article gives a definition of mobile learning and presents statistics on the mobile learning market from research carried out by the Ambient Insight marketing agency in 2010, along with the results of a survey among students of the Volga State University of Technology showing the most popular ways of using mobile phones in the learning process. The author makes assumptions about the reasons for the popularity of this type of education. The article reviews problems impeding the development of mobile learning and the disadvantages of existing courses and presentations in Microsoft PowerPoint format. The author lists the leading companies in the world market of e-learning course development software. As a result, the author proposes a way to simplify the process of building an m-learning course and gives a detailed step-by-step instruction for building a simple e-learning course available on the most popular mobile devices, using software for creating Microsoft PowerPoint presentations and the iSpring presentation converter.
Citations count: 1
Reference:
Ipatov Y.A., Krevetsky A.V. —
Methods of detection and spatial localization of groups of point objects
// Cybernetics and programming.
– 2014. – ¹ 6.
– P. 17 - 25.
DOI: 10.7256/2306-4196.2014.6.13642 URL: https://en.nbpublish.com/library_read_article.php?id=13642
Read the article
Abstract:
Modern computer vision systems use intelligent algorithms that solve a wide class of problems, from simple text recognition to complex systems of spatial orientation. One of the main problems for developers of such systems is the selection of unique attributes which remain invariant to various kinds of transformations. The article presents a comparative analysis of methods for the detection and spatial localization of groups of point objects. The reviewed methods are compared by performance and efficiency at specified dimensions. As of today there are no universal approaches to determining such attributes, and their selection depends on the context of the problem being solved and on the observation conditions. Various kinds of descriptors such as points, lines, angles and geometric primitives can be selected as dominating attributes. The authors study algorithms for the detection of groups of point objects based on the minimum spanning tree (MST) and on a model of an associated continuous image (ACI).
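The MST-based detection mentioned above can be illustrated by a common clustering sketch: build the minimum spanning tree over the point field, then cut edges longer than a threshold so that the remaining components form the groups. This is a generic illustration with assumed function names and threshold rule, not the authors' exact algorithm.

```python
import math

def mst_edges(points):
    """Prim's algorithm: returns the MST of 2-D points as (i, j, dist) edges."""
    n = len(points)
    in_tree = [False] * n
    best = [(math.inf, -1)] * n          # (distance to tree, parent index)
    best[0] = (0.0, -1)
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i][0])
        in_tree[u] = True
        if best[u][1] >= 0:
            edges.append((best[u][1], u, best[u][0]))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v][0]:
                    best[v] = (d, u)
    return edges

def group_points(points, max_edge):
    """Cut MST edges longer than max_edge; connected components = groups."""
    parent = list(range(len(points)))
    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j, d in mst_edges(points):
        if d <= max_edge:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Two spatially separated clusters of points are then returned as two index groups, which is the localization step the comparison in the article starts from.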
Citations count: 1
Reference:
Koronchik D.N. —
User interfaces of intelligent systems
// Cybernetics and programming.
– 2012. – ¹ 1.
– P. 16 - 22.
DOI: 10.7256/2306-4196.2012.1.13861 URL: https://en.nbpublish.com/library_read_article.php?id=13861
Read the article
Abstract:
The user interface is the only way the user interacts with the system software. Therefore, it should be fairly simple, intuitive and easy to learn. User interfaces of modern products are, for the most part, fairly complex systems. The main problem with these interfaces is that they are difficult to work with for a user with a low level of skills. This in turn reduces the number of customers and lowers operating efficiency. Designing user interfaces of intelligent systems is more difficult for a number of reasons, which makes the development of technologies for their design all the more relevant. The article describes principles and techniques that help design user interfaces for intelligent systems that can be easily integrated and are based on existing components. On the basis of the proposed technology, user interfaces of some application systems have already been designed. That makes it possible for the author to draw conclusions about the performance of the proposed approach.
Citations count: 1
Reference:
Smirnov V.I. —
Evaluation of the security of voice data in a dedicated room using instrumental calculation method
// Cybernetics and programming.
– 2012. – ¹ 2.
– P. 18 - 24.
DOI: 10.7256/2306-4196.2012.2.13869 URL: https://en.nbpublish.com/library_read_article.php?id=13869
Read the article
Abstract:
Preventing the interception of confidential conversations in a dedicated room by technical reconnaissance means is one of the main directions in the field of technical protection of information. The need for measures preventing the interception of voice information through technical channels is high for a number of reasons. First, speech information has specific features (confidentiality, timeliness, documentary character and virtuality). Second, the means used to pick up speech information through acoustic leakage channels are relatively simple and cheap. Third, technical reconnaissance means are constantly being improved. Methods of reducing the possibility of interception of voice information in the dedicated room are traditionally divided into passive and active. To assess speech intelligibility, subjective and objective methods are used. The most convenient and reliable is considered to be the subjective articulation method discussed in this article. The paper describes an instrumental calculation method used at present for the evaluation and monitoring of voice data security.
Citations count: 1
Reference:
Borodin A.V., Varlamov A.S., Korablev D.V. —
Educational proving ground for elaboration of technologies of exact time distribution
// Cybernetics and programming.
– 2015. – ¹ 3.
– P. 11 - 23.
DOI: 10.7256/2306-4196.2015.3.15438 URL: https://en.nbpublish.com/library_read_article.php?id=15438
Read the article
Abstract:
The paper deals with technologies for the distribution of exact time in data communication networks. In particular, technologies based on the Network Time Protocol (NTP) are considered. It is important to point out that this article has an especially practical character: it reviews an implementation of a proving ground for testing the corresponding technologies. By a proving ground the authors mean a set of software and hardware solutions which can be used in implementing the technology and which can be integrated, by any principle, into rather independent stands. The composition of the stands which are part of the proving ground is considered. Examples of the organization of subnets that act as clients of the exact time distribution system are given. An optimal version of the logical organization of the time synchronization subsystem is offered. Ways of further development of the proving ground are outlined. The methodological basis of this research is experiment: the proving ground allows simulating a huge number of configurations of the exact time distribution subsystem and measuring the different parameters of this subsystem. An auxiliary methodology of this research is simulation modeling, which allows creating optimal configurations for the purpose of practical confirmation of their relative efficiency. The proposed technical solution of the educational proving ground has no domestic analogs: it is unique both as a set of technical means for supporting the educational process and from the point of view of creating complexes for supporting scientific research in the field of exact time distribution. The authors also present original solutions used in the proving ground for monitoring the environment and individual components of the equipment.
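The NTP technology deployed on such stands rests on the protocol's standard clock-offset and round-trip-delay calculation (RFC 5905), computed from four timestamps: client send t0, server receive t1, server send t2, client receive t3. A minimal sketch:

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP (RFC 5905) clock offset and round-trip delay from the
    four timestamps, all in seconds: client send t0, server receive t1,
    server send t2, client receive t3."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # estimated client clock error
    delay = (t3 - t0) - (t2 - t1)            # time spent on the network path
    return offset, delay
```

With a client clock 5 s behind the server and a symmetric 0.1 s path each way, the formula recovers an offset of 5 s and a delay of 0.2 s; asymmetric paths are exactly what a proving ground with many configurations lets one measure.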
Citations count: 1
Reference:
Pavlov A.V. —
The Method of Defining the SDN Network Configuration Change
// Cybernetics and programming.
– 2016. – ¹ 4.
– P. 73 - 80.
DOI: 10.7256/2306-4196.2016.4.19516 URL: https://en.nbpublish.com/library_read_article.php?id=19516
Read the article
Abstract:
The subject of the research is the analysis of SDN network safety methods. One such safety method is the analysis of the current network configuration for fast determination of changes and upkeep of the authorized status. Today SDN networks are gaining popularity, therefore the development of protection algorithms for such networks is a necessary step. The SDN approach to data transfer differs from that of traditional networks. Based on that fact, an important research goal is either to identify the drawbacks of existing algorithms as applied to such networks or to develop new ones. Research goals include the analysis of existing algorithms, the search for solutions and the adaptation of these solutions to the initial tasks, or the development of a new solution. As a result of the research, the author describes a device that would ensure the security of an SDN network at the data transfer level disregarding external factors. This would allow an independent evaluation of network security. When the network is being re-configured, all changes will be automatic or semi-automatic, thus they will not distort the authorized status of the network.
Citations count: 1
Reference:
Teplovodskii A.V. —
Motion simulator algorithm for aircraft guided missiles
// Cybernetics and programming.
– 2017. – ¹ 1.
– P. 48 - 60.
DOI: 10.7256/2306-4196.2017.1.20850 URL: https://en.nbpublish.com/library_read_article.php?id=20850
Read the article
Abstract:
The subject of the study is the development of algorithmic support for test methods for aircraft guided missiles, based on the integration of differential equations in the Cauchy form adopted for the mathematical description of aircraft movement, which is the basis for modeling the motion of aircraft guided missiles. The author suggests an approach to the algorithmization of the flight path of aircraft guided missiles that should be used in the development of methods of research and testing of aircraft guided missiles using modeling systems, allowing one to reliably determine and assess the compliance of the motion characteristics of aircraft guided missiles with the specified tactical and technical requirements. The research methodology is based on the methods of mathematical modeling, optimal control, computational mathematics, and differential and integral calculus. The main result of the study is the basic movement algorithm for a simple model of an aircraft guided missile taking into account the impact of wind disturbances. The algorithm allows increasing the complexity by including missile guidance algorithms and control algorithms that take into account the angular movement about the missile's center of mass, the dynamics of the data sensors and the dynamics of the missile's steering drives.
Citations count: 1
Reference:
Gutkovskaya O.L., Ponomarev D.Y. —
Using the orthogonal model of the telecommunications network for solving the problem of optimal traffic distribution
// Cybernetics and programming.
– 2017. – ¹ 1.
– P. 11 - 29.
DOI: 10.7256/2306-4196.2017.1.21810 URL: https://en.nbpublish.com/library_read_article.php?id=21810
Read the article
Abstract:
The subject of the study is a telecommunications network presented as a set of queuing systems. As a result of the study, the authors present a method for obtaining a mathematical model of the optimal distribution of traffic in a telecommunications network using the criterion of the minimum number of packets in service throughout the network. Traffic optimization is performed in two stages. At the first stage, a general optimal solution is sought. At the second stage, the routes between each source-receiver pair within the optimal solution of the first stage are determined. Two-step optimization reduces the number of independent variables in the objective function found at the first stage. To obtain a mathematical model of the network, tensor analysis of complex systems is applied, which simultaneously allows finding linearly independent (phase) variables. This approach made it possible to minimize the dimension and complexity of the problem being solved. The scientific novelty of this article is an algorithm for obtaining a mathematical model of a telecommunications network that allows finding the optimal distribution of information flows through communication channels. The peculiarity of this method is that as independent variables in the objective function the authors use not all possible traffic routes between each source-destination pair but the phase variables of contour and node intensities, which in general are fewer than the routes. This reduces the dimension of the objective function and, consequently, accelerates the search for the optimal solution.
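As a toy illustration of the minimum-packets criterion (not the authors' tensor method), consider splitting one flow between two parallel channels, each modeled as an M/M/1 queue; the queuing model, function names and the use of ternary search on the convex objective are all assumptions for illustration.

```python
def total_packets(x, total, mu1, mu2):
    """Total mean number of packets on two parallel M/M/1 links when a
    share x of the flow 'total' goes to link 1 (assumed service model).
    Requires total < mu1 and total < mu2 so any split is stable."""
    l1, l2 = x * total, (1 - x) * total
    return l1 / (mu1 - l1) + l2 / (mu2 - l2)

def optimal_split(total, mu1, mu2, iters=100):
    """Ternary search for the share of traffic on link 1 minimizing the
    total mean number of packets (the objective is convex in x)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if total_packets(m1, total, mu1, mu2) < total_packets(m2, total, mu1, mu2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

For equal-capacity links the optimum is the even split; with unequal capacities more traffic is routed to the faster link, which is the qualitative behavior the network-wide criterion captures.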
Citations count: 1
Reference:
Stepanov P.P. —
Solving Topical Issues of Oil Companies Using the Game Theory Method
// Cybernetics and programming.
– 2016. – ¹ 4.
– P. 11 - 17.
DOI: 10.7256/2306-4196.2016.4.20162 URL: https://en.nbpublish.com/library_read_article.php?id=20162
Read the article
Abstract:
The subject of the research is the problems of information security as well as the development of a competitive strategy on the market of oil products for oil companies, and the application of the game theory method to solving the aforesaid issues. Special attention is paid to questions of providing information security. The author explains the role of strategic interaction in the process of planning and developing a company's competitive strategy. After that, the author introduces the framework of game theory in the form of a target function with a given game matrix, which allows carrying out simulation modeling once the actual basis is accumulated. The research methodology is based on the framework of game theory, in particular constant-sum games, cooperative games and games with imperfect information. The main conclusion of the research is the proof that it is possible to apply the game theory method to solving specific problems of the oil industry. The author also offers to use the target function to fill in the game matrix for each problem and defines directions for further research in this field using game theory and simulation modeling methods.
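The game-matrix formalism mentioned above can be illustrated with the textbook closed-form solution of a 2x2 constant-sum game (here written as zero-sum) without a saddle point; the function name is an assumption for illustration.

```python
def solve_2x2_zero_sum(a):
    """Optimal mixed strategy of the row player and the game value for a
    2x2 zero-sum game with payoff matrix a, assuming no saddle point
    exists (the classical closed-form solution)."""
    denom = a[0][0] - a[0][1] - a[1][0] + a[1][1]
    p = (a[1][1] - a[1][0]) / denom                      # probability of row 1
    v = (a[0][0] * a[1][1] - a[0][1] * a[1][0]) / denom  # game value
    return p, v
```

For the matching-pennies matrix [[1, -1], [-1, 1]] this yields the familiar half-and-half mixed strategy with game value zero; a filled-in game matrix of the kind the article proposes would be solved the same way, or numerically for larger matrices.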
Citations count: 1
Reference:
Deryugina O., Nikulchev E. —
Software tool for automated UML class diagram refactoring using given quality criteria
// Cybernetics and programming.
– 2017. – ¹ 1.
– P. 107 - 118.
DOI: 10.7256/2306-4196.2017.1.21934 URL: https://en.nbpublish.com/library_read_article.php?id=21934
Read the article
Abstract:
The article is devoted to the task of automated UML class diagram refactoring, which is important for the development of tools for transforming UML models in the framework of the MDA approach. The authors formulate the problem of automated UML class diagram refactoring and introduce the abstract UML Map data structure, which stores hash maps of the UML class diagram elements. This data structure allows analyzing and transforming UML class diagrams in a convenient way. The paper presents algorithms for UML class diagram analysis aimed at applying the Strategy and Interface Insertion transformations. A computational experiment demonstrated that the computational complexity of these algorithms is O(n). The proposed algorithms became part of the UML Refactoring Tool, which allows the user to import UML class diagrams from the XMI format, to analyze and transform them (calculate metrics, receive transformation recommendations) and to export them back to the XMI format.
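A minimal sketch of the idea behind a hash-map-based structure like UML Map, assuming simplified element types: hash maps give O(1) lookup of diagram elements, and one O(n) pass can collect candidates for an Interface Insertion transformation. The grouping rule used here (classes sharing an identical method set) is an assumption for illustration; the tool's exact criteria are not given in the abstract.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class UmlClass:
    name: str
    methods: frozenset       # method signatures, e.g. frozenset({"draw()"})

class UmlMap:
    """Hash maps over diagram elements: O(1) lookup, O(n) full scans."""
    def __init__(self, classes):
        self.by_name = {c.name: c for c in classes}

    def interface_insertion_candidates(self):
        """One O(n) pass: classes with identical method sets are grouped
        as candidates for extracting a common interface (assumed rule)."""
        buckets = defaultdict(list)
        for c in self.by_name.values():
            buckets[c.methods].append(c.name)
        return [sorted(names) for names in buckets.values() if len(names) > 1]
```

Because every class is visited exactly once and each hash-map operation is constant time, the scan stays linear in the number of diagram elements, matching the O(n) complexity reported in the abstract.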
Citations count: 1
Reference:
Le V.N. —
Mechanism of inference of diagnostic solutions in a remote medical expert system of preliminary diagnosis
// Cybernetics and programming.
– 2015. – ¹ 1.
– P. 16 - 26.
DOI: 10.7256/2306-4196.2015.1.13722 URL: https://en.nbpublish.com/library_read_article.php?id=13722
Read the article
Abstract:
The article describes an approach to managing the inference of diagnostic solutions in a medical diagnosis expert system. The study reviews the features of the software implementation of the presented models and algorithms. The author proposes a combination of backward and forward inference of diagnostic solutions. In backward inference, requesting new information from the patient requires generating additional questions on the leading symptom. At the same time, forward inference allows calculating output integral estimates of the detection of possible diseases as symptoms appear. To infer decisions under uncertainty, the Mamdani fuzzy model is applied. The author proposes a RETE network for generating a diagnostic solution in a reasonable time. The described mechanism of inference of diagnostic solutions has the following advantages: it allows directing the reasoning along the most promising paths to a diagnosis and determines the correct order of questions on leading symptoms; it allows diagnosis under uncertainty in information on symptoms and does not stop even though the diagnostic decision may not be accurate; and it makes it possible to establish a preliminary diagnosis in real time and to reach a conclusion in a reasonable time after the symptoms have been entered.
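The combination of fuzzy symptom memberships, a fuzzy AND across symptoms, and aggregation of rule outputs can be sketched as a strongly simplified Mamdani-style step. The membership shapes, the rule format and the max-aggregation shortcut (no defuzzification step) are assumptions for illustration, not the article's model.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def integral_estimate(symptoms, rules):
    """Integral estimate of one disease from the fired rules.

    symptoms: dict symptom -> measured value
    rules:    list of (conditions, weight); conditions maps a symptom to
              its triangular membership parameters (a, b, c)
    Firing strength is the min of memberships (fuzzy AND); rule outputs
    are aggregated with max, a simplified Mamdani-style step."""
    score = 0.0
    for conditions, weight in rules:
        strength = min(tri(symptoms[s], *abc) for s, abc in conditions.items())
        score = max(score, strength * weight)
    return score
```

In a forward-inference loop of the kind described, such an estimate would be recomputed each time a new symptom value arrives, so a preliminary ranking of diseases is always available.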
Citations count: 1
Reference:
Suchkova E.A., Nikolaeva Y.V. —
Developing the Best Possible Data Storage Structure for Decision Support Systems
// Cybernetics and programming.
– 2016. – ¹ 4.
– P. 58 - 64.
DOI: 10.7256/2306-4196.2016.4.18281 URL: https://en.nbpublish.com/library_read_article.php?id=18281
Read the article
Abstract:
The article presents the results of the development and experimental comparison of data structures and data storage methods. The basis for building the models included the financial market decision support system and expert evaluations of the electronic tendering system. In both cases the authors built conceptual data models, stored data in text files, relational and non-relational databases and evaluated efficiency of an organized structure from the point of view of efficient storage and access, automatic integrity control and data consistency. By using theoretical methods (abstraction, analysis, synthesis, and idealization) the authors developed conceptual database models. In its turn, by using empirical methods (experiment and comparison) they checked the efficiency of data storage with the use of text files, relational and non-relational databases. As the main conclusion of the research, the authors provide recommendations on how to select the best data storage structures for electronic decision support systems. The experimental comparison allowed to discover that for a developed expert evaluation storage structure the relational database control system is the most effective method while in case of storing information about financial markets, it is better to use text files for a developed decision support system.