Reference:
Lizneva Y.S., Kostyukovich A.E., Kokoreva E.V.
Analysis of the possibilities of determining location in a Wi-Fi network using neural network algorithms
// Software systems and computational methods.
2024. № 4.
P. 1-12.
DOI: 10.7256/2454-0714.2024.4.72107 EDN: CSDXDU URL: https://en.nbpublish.com/library_read_article.php?id=72107
Abstract:
Indoor positioning in a Wi-Fi network belongs to a class of problems in which the dependence of output characteristics on input variables is affected by many parameters and external factors. When solving such problems, it must be taken into account that not only the static coordinates of an object are of interest, but also the prediction of its movement vector. When the location of an object is determined only from the signal power received from several Wi-Fi access points, the use of signal attenuation models that account for indoor radio propagation is difficult, since it requires reliable information about the material of walls, floors and ceilings, the presence of fixed and mobile shading objects, and so on. Because the electromagnetic environment inside a room varies with many factors, such models have to be continually adjusted to these changes. Since finding patterns in a large amount of data requires non-standard algorithms, artificial neural networks can be used to solve the positioning problem; it is important to choose a neural network architecture that can account for changes in the signal strength a mobile device receives from Wi-Fi access points. Before training the neural network, the statistical data are preprocessed: for example, anomalous cases in which the device detects a signal from fewer than three access points at a measuring point are excluded from the machine learning dataset. Analysis of the statistical data showed that a uniform distance between measuring points causes the neural network to determine the object's location incorrectly. The paper shows that, to increase positioning accuracy in a complex radio environment, optimal varying distances between measuring points must be determined when compiling radio maps. Experimental studies conducted with the proposed approach to optimizing the distances between measuring points show that location determination accuracy reaches 100% at the vast majority of measuring points.
Keywords:
machine learning, hidden layer, signal strength, neural network, RSSI, measuring point, positioning, Wi-Fi, training sample, training set
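As an illustration of the preprocessing step described in the abstract above (excluding measuring points at which fewer than three access points are detected), here is a minimal sketch; the RSSI matrix, the reference coordinates and the -100 dBm imputation floor are hypothetical choices, not the authors' data:

```python
import numpy as np

# Hypothetical layout: each row is one measuring point, columns are RSSI
# values (dBm) from known access points; np.nan marks an AP that was not heard.
rssi = np.array([
    [-41.0, -57.5, -63.2, np.nan],
    [-44.5, np.nan, np.nan, np.nan],   # only one AP heard -> anomalous case
    [-39.8, -55.0, np.nan, -71.3],
])
coords = np.array([[1.0, 2.0], [4.0, 2.0], [1.0, 5.0]])  # reference (x, y)

# Exclude anomalous cases: fewer than three access points detected.
heard = np.sum(~np.isnan(rssi), axis=1)
mask = heard >= 3
X, y = np.nan_to_num(rssi[mask], nan=-100.0), coords[mask]  # impute a floor value

print(X.shape, y.shape)  # (2, 4) (2, 2): one measuring point was filtered out
```

The filtered matrix X and targets y would then feed the neural network training stage.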
Reference:
Alpatov A.N., Bogatireva A.A.
Data storage format for analytical systems based on metadata and dependency graphs between CSV and JSON
// Software systems and computational methods.
2024. № 2.
P. 1-14.
DOI: 10.7256/2454-0714.2024.2.70229 EDN: TVEPRE URL: https://en.nbpublish.com/library_read_article.php?id=70229
Abstract:
In the modern information society the volume of data grows constantly, and its effective processing becomes key for enterprises; the transmission and storage of these data likewise play a critical role. Big data used in analytics systems is most often transmitted in one of two popular formats: CSV for structured data and JSON for semi-structured data. However, existing file formats may not be effective or flexible enough for certain data analysis tasks: they may not support complex data structures or give sufficient control over metadata, while analytical tasks may require additional information about the data, such as metadata or a data schema. The subject of this study is therefore a data format based on the combined use of CSV and JSON for processing and analyzing large amounts of information. A way of jointly using the two designated data types to implement a new data format is proposed: notation is introduced for a data structure that comprises CSV files, JSON files, metadata and a dependency graph; various types of functions (aggregating, transforming, filtering, etc.) are described; and examples of applying these functions to data are given. The proposed approach is a technique that can significantly simplify information analysis and processing; it rests on a formalized basis that establishes clear rules and procedures for working with data, which contributes to more efficient processing. Another aspect of the approach is a criterion for choosing the most appropriate storage format, grounded in the mathematical principles of information theory and entropy. Introducing an entropy-based criterion for choosing a data format makes it possible to evaluate the informativeness and compactness of the data: entropy is computed for the candidate formats together with weights reflecting the importance of each data value, and by comparing the entropies the required data transmission format can be determined. This approach takes into account not only the compactness of the data but also the context of their use, as well as the possibility of including additional meta-information in the files themselves and supporting analysis-ready data.
Keywords:
Apache Parquet, Integration of data formats, Data analysis, Analysis Ready Data, Metadata, Data processing, CSV, JSON, Data storage formats, Big Data
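The entropy-based selection criterion described above can be illustrated with a minimal sketch: the same records are serialized as CSV and as JSON, the Shannon entropy of each byte stream is computed, and the format with the smaller total information volume is preferred. The comparison rule and the omission of the paper's per-value weights are assumptions of this sketch:

```python
import csv, io, json, math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte stream, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

records = [{"id": 1, "value": 3.14}, {"id": 2, "value": 2.71}]

# Serialize the same records in both candidate formats.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "value"])
writer.writeheader()
writer.writerows(records)
as_csv = buf.getvalue().encode()
as_json = json.dumps(records).encode()

h_csv, h_json = shannon_entropy(as_csv), shannon_entropy(as_json)
# Entropy times length estimates the total information volume in bits;
# prefer the serialization that carries the payload in fewer total bits.
chosen = "CSV" if h_csv * len(as_csv) <= h_json * len(as_json) else "JSON"
print(round(h_csv, 3), round(h_json, 3), chosen)
```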
Reference:
Gusenko M.Y.
Creating a common notation of the x86 processor software interface for automated disassembler construction
// Software systems and computational methods.
2024. № 2.
P. 119-146.
DOI: 10.7256/2454-0714.2024.2.70951 EDN: EJJSYT URL: https://en.nbpublish.com/library_read_article.php?id=70951
Abstract:
The subject of the study is the process of reverse engineering programs in order to obtain their source code in low- or high-level languages for processors with the x86 architecture, whose software interface is defined by Intel and AMD. The object of the study is the technical specifications in the documentation produced by these companies. The intensity with which the processor documentation is updated is investigated, and the need for technological approaches to automated disassembler construction that keep pace with the regularly released and frequent updates of the processor software interface is justified. The article presents a method of processing the documentation to obtain a generalized, formalized and uniform specification of processor commands for subsequent automated translation into disassembler program code. Two main results are presented: the first is an analysis of the various ways commands are described in the Intel and AMD documentation and a concise reduction of these descriptions to a uniform representation; the second is a comprehensive syntactic analysis of the machine code description notations and of the assembly language representation of each command. Together with some additional details of the command descriptions, such as the permissible processor operating mode for executing a command, this made it possible to create a generalized command description for translation into disassembler code. The results of the study also include the identification of a number of errors both in the documentation texts and in the operation of existing industrial disassemblers which, as the analysis of their implementation showed, were built using manual coding. Uncovering such errors in existing reverse engineering tools is an indirect result of the author's research.
Keywords:
CPU mode, x86 AMD documentation, x86 Intel documentation, assembler, microprocessor, assembler syntax, machine code syntax, command specification, software reengineering, disassembler
Reference:
Tikhanychev O.V.
On the use of modern technologies of information survey
// Software systems and computational methods.
2021. № 1.
P. 63-76.
DOI: 10.7256/2454-0714.2021.1.31229 URL: https://en.nbpublish.com/library_read_article.php?id=31229
Abstract:
The subject of this research is the process of software development for automated control systems; the object is the initial stage of this process, the information survey of the object of automation. At present, despite the availability of quite an extensive list of automation tools for building information models of automated objects, this process is not always structured rationally. One of the reasons is the absence of a methodology for using modern technologies to build information models. To solve this problem, the author formulates a scientific and practical task of improving the information survey of objects of automation on the basis of modern approaches and specialized tools for their implementation. The article employs the methods of analysis and synthesis. Based on an analysis of the peculiarities of building information models of automated control systems, in addition to the "classical" iterative approach the author synthesizes algorithms based on the principles of building the model from the program's output documents and of sequentially describing the operator's actions in the software when entering information while performing functional tasks. Implementing the proposed algorithms in practice can make the use of modern automation tools for information survey effective by embedding them into the process of creating software for automated control systems.
Keywords:
survey information algorithms, modern survey principles, normative documents, survey automation, information survey, development stages, software development, application software, management automation, software development support
Reference:
Dushkin R.
Intellectualization of the management of technical systems as part of a functional approach
// Software systems and computational methods.
2019. № 2.
P. 43-57.
DOI: 10.7256/2454-0714.2019.2.29192 URL: https://en.nbpublish.com/library_read_article.php?id=29192
Abstract:
The article discusses issues of intellectualizing the control of technical systems within a functional approach to building intelligent control systems for various objects and processes, based on systems engineering and integrated engineering. Intellectualization of control makes it possible to combine the benefits of different paradigms for considering processes of various kinds and, emergently, to exhibit new properties of the general approach that increase the controllability and operating efficiency of technical control objects (and, in general, of practically arbitrary control objects of a technical nature). Applying the functional approach, combined with integrated engineering, to the intelligent control of such objects as transport, buildings and energy makes it possible to bring their operation to a higher level of service availability, sustainability, environmental friendliness and comprehensive development, not only of the control object itself but also of the hierarchy of its super-systems: municipality, region, state. The scientific novelty of the proposed approach lies in a new application of the mathematical apparatus of general set theory and category theory for organizing a distributed computing system in the field of intelligent buildings and the management of their internal environment. The article can become the basis for novelty of a higher order in the transition from a systemic to an integrated approach. In addition, a systems approach is used together with a simplified cybernetic "streaming" scheme of the functioning of an intelligent building.
Keywords:
edge computations, intellectualization, artificial intelligence, decentralization, internet of things, functional approach, smart building, management, control system, system approach
Reference:
Zheltov V.P., Zheltov P.V.
Development tools for the Internet portal of the national corpus of the Chuvash language
// Software systems and computational methods.
2019. № 1.
P. 42-50.
DOI: 10.7256/2454-0714.2019.1.28131 URL: https://en.nbpublish.com/library_read_article.php?id=28131
Abstract:
The subject of the research is the substantiation of the technical and operational characteristics and the development tools of the Internet portal of the national corpus of the Chuvash language. The research methodology combines theoretical and practical approaches using the methods of analysis, comparison, synthesis and programming. The relevance of the study stems from the importance of studying and preserving the diversity of languages and cultures in the modern world and, accordingly, from the need to develop tools, including computer-based ones, for storing and processing texts in natural languages (including Chuvash). The novelty of developing the Internet portal of the national corpus of the Chuvash language is determined by certain features of the language. The scientific novelty is associated with the development of the Chuvash language Internet portal, which includes a search engine, a morphological analyzer, a syntax analyzer, a semantic analyzer and a thesaurus. The structure of the portal comprises the main page and the tabs "Search", "Morphological Analyzer", "Syntax Analyzer", "Semantic Analyzer" and "Thesaurus". Development tools: Microsoft .NET Framework; ASP.NET MVC technology; the C# language; the Visual Studio development environment; an SQL database; ADO.NET; Entity Framework; JavaScript; the jQuery library and the AngularJS framework; HTML/CSS. The portal provides researchers with a toolkit previously unavailable for the Chuvash language: it allows hypotheses to be tested using both feedback and modern formalized, quantitative methods. This makes possible a transition to a new qualitative level in lexicology and lexicography: the compilation of dictionaries and thesauri of the Chuvash language, which must take into account both practical and theoretical components, will be greatly facilitated.
Keywords:
semantic analyzer, syntax analyzer, morphological analyzer, search engine, development tools, Internet portal, Chuvash language, Russian language, national corpus, thesaurus
Reference:
Guzii A.G., Kukushkin Y.A., Lushkin A.M.
Computer technology for prognostic evaluation of pilot functional reliability
// Software systems and computational methods.
2018. № 2.
P. 84-93.
DOI: 10.7256/2454-0714.2018.2.22425 URL: https://en.nbpublish.com/library_read_article.php?id=22425
Abstract:
The subject of the research is the mathematical software for prognostic evaluation of a pilot's functional reliability; the object is the functional reliability of the pilot's professional activity. The authors consider in detail the automated assessment of the risk of an aviation event caused by flight parameters exceeding operational limitations, understanding by risk assessment a probabilistic measure of the occurrence of an aviation event of a fixed degree of severity due to exceeding the operational limitations of the aircraft; in flight, depending on the severity of the consequences, such an event is classified as an aviation event subject to investigation. The research methodology is based on the systems approach and unites methods of probability theory, mathematical statistics, aviation cybernetics and the psychophysiology of flight work. The main result of the study is a software-implemented technology for predicting the pilot's functional reliability that provides an individual a priori risk assessment of an aviation event (incident) for the "crew" group of causative factors at the most critical stages of flight (takeoff and landing), drawing on accumulated statistics of aviation events caused by flight parameters exceeding operational limitations; this is important for proactive management of the safety level in an airline. The novelty of the research is that the technology of predictive evaluation of the pilot's functional reliability is developed on the basis of the concept of acceptable accident risk.
Keywords:
aviation risk management, flight safety, aeronautical cybernetics, pilot condition monitoring, pilot reliability monitoring, pilot reliability prediction, reliability of the pilot, predictive assessments, psychophysiology of flight work, probabilistic modeling
Reference:
Sinitsyn P.E.
Verification of Russian Internet sites' compliance with WCAG standards through usability testing of the interface for visually impaired users
// Software systems and computational methods.
2018. № 1.
P. 95-99.
DOI: 10.7256/2454-0714.2018.1.25346 URL: https://en.nbpublish.com/library_read_article.php?id=25346
Abstract:
The article examines electronic resources that support an interface for the visually impaired and analyzes their compliance with the Web Content Accessibility Guidelines (WCAG) 2.0 standard and the suitability of the user interface for visually impaired users of different levels. The purpose of the article is to check electronic resources for compliance with the standard for adapting information for visually impaired users and with the requirements of the Resolution of the Government of the Russian Federation "On Approval of the State Program of the Russian Federation 'Accessible Environment' for 2011-2020". The article considers models and methods for assessing the quality characteristics of the user interface, criteria for assigning visually impaired users to user interface groups, and recommendations for adapting electronic resources based on the WCAG 2.0 standard. Usability testing against the criteria for assessing the user interface for visually impaired Internet users yielded the following results: without assistive (typhlotechnical) means, a disabled person with impaired color perception and a visually impaired user have limited access to information, and the sites of the Russian Internet space correspond only to the minimum level of the WCAG 2.0 standard.
Keywords:
interface evaluation, user interface, WCAG, visually impaired user, usability, special interface, WCAG levels, electronic resources adaptation, web accessibility, web content
Reference:
Glushenko S.A..
Analysis of software for implementing fuzzy expert systems
// Software systems and computational methods.
2017. № 4.
P. 77-88.
DOI: 10.7256/2454-0714.2017.4.24251 URL: https://en.nbpublish.com/library_read_article.php?id=24251
Abstract:
The research focuses on enterprises and organizations of various industries that run project-oriented business. The subject of the study is the decision-making processes present in the implementation of various projects. The effectiveness of decisions can be increased through the use of expert systems; at the same time, an expert system should be based on modern methods of processing information under uncertainty, so the author suggests using expert systems based on the methods and models of fuzzy logic. The author pays particular attention to the functional requirements that a fuzzy expert system must meet. The article examines in detail the existing software for implementing fuzzy expert systems and, to identify the optimal software meeting the requirements, applies the method of analyzing complex systems by the criterion of functional completeness proposed by Professor G.N. Khubaev. The analysis established that existing software solutions fall short of the functional requirements in many respects, so the development of a new, effective tool is a topical task. The analysis also made it possible to identify software tools with similar sets of functions and to estimate the degree of similarity of the systems and their degree of correspondence to a "reference" model of the information system that takes the user's requirements into account.
Keywords:
absorption matrix, similarity matrix, matrix of superiority, quantitative estimation, functional completeness, fuzzy logic, risk, expert system, graphs, reference model
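A minimal sketch of the kind of set-based comparison that underlies a functional completeness analysis: each system is represented by the set of functions it supports, and pairwise overlap counts yield similarity and coverage measures. The feature sets and the specific measures shown are illustrative; the exact similarity, superiority and absorption matrices of Khubaev's method are not reproduced here:

```python
# Each system is described by the set of functions it implements (made up).
systems = {
    "A": {1, 2, 3, 5, 8},
    "B": {1, 2, 3, 4},
    "reference": {1, 2, 3, 4, 5, 6},
}

def compare(fi: set, fj: set):
    p11 = len(fi & fj)           # functions present in both systems
    p10 = len(fi - fj)           # present in i but absent in j
    p01 = len(fj - fi)           # present in j but absent in i
    g = p11 / (p11 + p10 + p01)  # overlap (Jaccard-style) similarity
    h = p11 / len(fi)            # share of i's functions covered by j
    return g, h

for name, funcs in systems.items():
    if name != "reference":
        g, h = compare(funcs, systems["reference"])
        print(f"{name}: similarity={g:.2f}, coverage by reference={h:.2f}")
```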
Reference:
Arzumanyan R.V., Sukhinov A.I.
Studying the feasibility of a high-performance software Google VP9 decoder
// Software systems and computational methods.
2016. № 2.
P. 184-200.
DOI: 10.7256/2454-0714.2016.2.67838 URL: https://en.nbpublish.com/library_read_article.php?id=67838
Abstract:
The article is devoted to the optimization and parallel execution of the decoding stages of a video signal compressed in accordance with the Google VP9 specification. The authors discuss in detail the most time-consuming stages of decoding and restoring compressed video and study the possible optimization and parallel execution of the algorithms underlying these stages using both CPUs and graphics cards that support general-purpose computing. The article gives a comprehensive assessment of the characteristics of the decoding stages, including the requirements for the processor and the memory subsystem. The main research method is a numerical experiment with the collection of the information of interest and subsequent analysis of the results; the information is gathered by modifying the source code of the reference codec and building it into a software codec application. The novelty of the work lies in the comprehensive analysis of the computational methods underlying the codec. The research evaluates the feasibility of parallel computations, taking into account the peculiarities of the target hardware (multi-core CPUs and GPGPU). The authors optimized the arithmetic decoding stage taking into account the statistical characteristics of the distribution of the lengths of literals decoded from the compressed bit stream. The authors draw conclusions about the most computationally complex decoding stages and the possibility of their optimization and parallel implementation, and analyze the differences between the described codec and the competing H.265 codec.
Keywords:
inter-frame prediction, optimization, memory access pattern, arithmetic coding, performance, Google VP9, codec, algorithm analysis, parallel programming, GPGPU
Reference:
Aref'ev R.A., Zudilova T.V.
An SOA pattern for designing user interfaces of multiplatform applications
// Software systems and computational methods.
2016. № 2.
P. 201-209.
DOI: 10.7256/2454-0714.2016.2.67839 URL: https://en.nbpublish.com/library_read_article.php?id=67839
Abstract:
The paper presents a new pattern for designing a service-oriented architecture (SOA) for multiplatform applications, applied in creating the user interfaces of distributed applications. The research aims at: (1) analyzing existing approaches to the development of multiplatform user interfaces; (2) developing a new SOA pattern, based on existing patterns, for use in the development of multiplatform interfaces; (3) a case study consisting of the implementation of the proposed SOA pattern in a distributed application and its validation. The study uses the methodology for the development and optimization of information systems proposed by J.F. Nunamaker. This approach is iterative and involves three main stages: (1) gathering information about current approaches to the architecture of Multiple User Interfaces (MUI); (2) an experimental phase comprising the synthesis of possible architectural solutions; (3) development of a system prototype. In the developed pattern, the layout of visualization services containing different variants of output and markup is performed within a single application, using a monitoring and dynamic reconfiguration mechanism driven by the characteristics of the client device; searching for relevant services over the Internet and installing them is also possible. The practical significance of this work lies in reducing the cost of software development and improving the quality of user interfaces through the new SOA pattern.
Keywords:
dynamic configuration pattern, adaptive design, cloud information system, design pattern, Service-Oriented Architecture, distributed applications, user interface, SOA patterns, multi-platform development, human-machine interaction
Reference:
Kurakin P.V.
New-Generation Specialized Systems of Mathematical Calculations
// Software systems and computational methods.
2016. № 1.
P. 80-94.
DOI: 10.7256/2454-0714.2016.1.67600 URL: https://en.nbpublish.com/library_read_article.php?id=67600
Abstract:
Many industries and state administration agencies need software for specialized calculations similar to the popular MATLAB system combined with the Simulink graphical programming environment, but working with their own databases and task descriptions and not requiring special payment. The main disadvantage of the MATLAB+Simulink package (apart from its price) is that the Simulink library of graphical primitives is in fact limited to popular, typical engineering calculation tasks. The author emphasizes the need to develop such a programming environment on the basis of freely distributed software. The environment described by the author follows the client-server concept and is based on the Java platform and web technologies. The client uses a visual graphics editor implemented as a browser application; the server passes data (the task configuration) to the Octave calculation package and, in the reverse direction, returns calculation results to the browser. Data is transmitted online as JSON strings. The author creates an original software architecture for specialized mathematical calculation systems based on freely distributed software. Together with a subsystem for storing task configurations (which needs further development), this architecture becomes a basis for creating specialized decision support systems in many spheres and provides a wide range of opportunities for further development.
Keywords:
mathematical, calculations, Java, JavaScript, Octave, Python, systems, support, making, decision
Reference:
Mal'shakov G.V., Mal'shakov V.D.
A technique for normalizing the search alphabet to improve the quality of entity identification based on data frequency characteristics
// Software systems and computational methods.
2015. № 4.
P. 407-413.
DOI: 10.7256/2454-0714.2015.4.67457 URL: https://en.nbpublish.com/library_read_article.php?id=67457
Abstract:
Using frequency distributions of data as an identifier, it is possible to find the data of one system in other systems intended for interaction and to coordinate their work. In this case, the entities of a subject domain are identified using a search alphabet: a set of lexemes with the frequencies of their use in the data, stored as records of a relational database. The object of the research is a technique for normalizing the search alphabet to improve the quality of identifying the entities of a subject domain by the frequency characteristics of their data. The technique consists in deleting lexemes of the alphabet that are found inside other lexemes of the alphabet with a similar frequency of repetition in an entity. The research methods include systems analysis, information theory, the theory of algorithms, Boolean algebra, set theory, comparative analysis, data mining methods, and methods of software and database development. The authors show experimentally (on an example of 178 entities) that the technique reduces the volume of the search alphabet by a factor of five on average, which considerably increases the speed of identifying entities by the frequency characteristics of their data. By reducing the number of shorter lexemes, the normalization technique reduces the recognition error by 0.02036 per identification on average, as the experiments show.
Keywords:
correlation, frequency analysis of data, entity, search, the alphabet, normalization, database, software, identification, method
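The core normalization rule described in the abstract, deleting a lexeme that occurs inside another lexeme of the alphabet with a similar repetition frequency, can be sketched as follows; the sample alphabet and the similarity tolerance are hypothetical:

```python
# A search alphabet maps lexemes to their usage frequencies in the data.
# Drop a lexeme if it is a substring of a longer lexeme whose frequency is
# close: the shorter token then carries little extra identifying power.
alphabet = {"invoice": 120, "invoices": 118, "total": 40, "tot": 39, "date": 75}
TOLERANCE = 0.05  # relative frequency difference treated as "similar"

def normalize(alphabet: dict) -> dict:
    kept = dict(alphabet)
    for short, f_short in alphabet.items():
        for long_, f_long in alphabet.items():
            if short != long_ and short in long_:
                if abs(f_short - f_long) / max(f_short, f_long) <= TOLERANCE:
                    kept.pop(short, None)  # redundant shorter lexeme
    return kept

print(normalize(alphabet))  # {'invoices': 118, 'total': 40, 'date': 75}
```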
Reference:
Efimov N.A., Zolotov O.K.
Methods of determining the position of the optical axis from three or more landmarks
// Software systems and computational methods.
2015. № 3.
P. 323-329.
DOI: 10.7256/2454-0714.2015.3.67275 URL: https://en.nbpublish.com/library_read_article.php?id=67275
Abstract:
The subject of the research is increasing the accuracy of determining the position of the optical axis of a movie (video) camera, which is required for its use in experimental studies and tests of complex systems. The optical axis can be calculated from predetermined reference points (benchmarks); this approach is effective when stationary optical measuring stations are used. For movie (video) cameras, the authors developed an original method of obtaining the angular coordinates from three reference points, which allows the optical axis of a movie (video) camera to be calculated from a priori given reference points. The research methodology combines methods of computational mathematics, physical optics, analytic geometry, reliability theory, metrology and aircraft testing. The novelty of the research is in the development of a method for calculating the camera angle only from the values of the angles between the axes and the projection of the optical axis onto the coordinate plane. The calculation errors in the reference examples do not exceed three arc seconds, which satisfies the needs of many experimental studies and tests of complex systems.
Keywords:
conversion of coordinates, spatial position of the camera, angular coordinates, camera angle, reference points, space coordinates, definition optical axis, optical axis of the cameras, binding to the optical axis, positioning your camcorder
Reference:
Milovanov M.M.
Development and implementation of a software system for trading algorithm testing
// Software systems and computational methods.
2015. № 2.
P. 217-224.
DOI: 10.7256/2454-0714.2015.2.67103 URL: https://en.nbpublish.com/library_read_article.php?id=67103
Abstract:
The current development of the stock market requires developing its infrastructure, upgrading software and introducing new tools; accordingly, trading on the exchange must also adopt new methods. Modern information technology makes it possible to use computing resources to study the behavior of the stock market and to analyze and optimize rule-based transaction algorithms. The article describes the development of a tool for testing trading algorithms. The methodology is based on observation; the objects of observation are financial assets such as stocks, futures and indexes. The article describes the construction of a software product that combines the features of a trading-algorithm development system with the benefits of prototype software for creating, testing and optimizing trading algorithms, simplifying the end-user experience. The developed information system for testing trading algorithms achieves high performance through the use of the Lua programming language.
Keywords:
stock market, C#, Lua, .NET, programming, software, data analysis, testing, optimization, development
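The core of such a tester can be sketched as a bar-by-bar replay loop; the moving-average crossover rule and the synthetic price series below are hypothetical, and the paper's own system is built on C#/.NET with Lua scripting rather than Python:

```python
# Minimal sketch of a trading-algorithm tester: replay a price series bar by
# bar, let a rule emit a position, and accumulate the profit and loss.
prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]

def sma(series, n):
    """Simple moving average over the last n values."""
    return sum(series[-n:]) / n

position, pnl, entry = 0, 0.0, 0.0
for t in range(5, len(prices)):
    fast, slow = sma(prices[:t], 3), sma(prices[:t], 5)
    if fast > slow and position == 0:    # crossover up: open a long position
        position, entry = 1, prices[t]
    elif fast < slow and position == 1:  # crossover down: close the position
        pnl += prices[t] - entry
        position = 0
if position == 1:                        # mark to market at the end
    pnl += prices[-1] - entry
print(f"total P&L: {pnl:.2f}")
```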
Reference:
Lushkin A.M.
Mathematical software of automated predictive flight safety monitoring
// Software systems and computational methods.
2015. № 1.
P. 108-117.
DOI: 10.7256/2454-0714.2015.1.66225 URL: https://en.nbpublish.com/library_read_article.php?id=66225
Abstract:
The paper studies procedures for automated monitoring and regulation of flight safety, as well as procedures for automated control over changes in the aviation transport system. Analyzing the dynamics of flight safety is required to obtain an objective assessment of the effectiveness of a corporate safety management system and its planned development, which is a requirement of the standards of the International Air Transport Association and the recommended practices of ICAO. Detecting unacceptable values in the current or forecast safety level reveals a state of the aviation transport system characterized by potentially high risk, which requires a rapid response: from the analysis of aviation incidents and their precursors to determining the cause of the unacceptable changes, with subsequent regulation in safety management. To solve this problem, the author uses the results of regular monthly monitoring of the airline's current safety level and the statistics of aviation events registered in the company's safety management system, processed using probability theory and mathematical statistics. The novelty of the research is the software created for automated predictive safety monitoring in airlines, which statistically detects potentially dangerous changes in the safety level based on the analysis of retrospective information obtained by monitoring flight safety, taking into account the requirements of international standards in civil aviation.
Keywords:
monitoring of aircraft incidents, flight safety management, flight safety risks, statistical criterion, monitoring flight safety, statistical analysis, aviation avariology, management of risks, retrospective analysis, statistical forecasting
Reference:
Burakov S.V., Zaloga A.N., Pan’kin S.I., Semenkin E.S., Yakimov I.S.
Applying a self-configuring genetic algorithm for modeling the atomic crystal structure of chemical compounds using X-ray diffraction data
// Software systems and computational methods.
2014. № 4.
P. 500-512.
DOI: 10.7256/2454-0714.2014.4.65867 URL: https://en.nbpublish.com/library_read_article.php?id=65867
Abstract:
The article is devoted to evaluating the possibility and effectiveness of using a self-configuring genetic algorithm of global optimization to automate the determination of the atomic crystal structure of new substances from their powder X-ray diffraction data. The suggested version of the self-configuring genetic algorithm was studied on the problem of determining the known crystal structure of the chemical compound Ba2CrO4, which required finding the locations of 7 independent atoms in the unit cell of the crystal. To analyze the effectiveness and determine the rate of convergence of structural models to the true structure of the substance during the evolutionary search, the authors performed several dozen runs of the self-configuring genetic algorithm with different population sizes of structural models and different types of genetic operators. The essence of the self-configuration method is that the choice of the optimal genetic operators of selection, crossover and mutation from the suggested set of possible variants is performed by the self-configuring genetic algorithm itself while solving the problem. The probability of an operator being selected to generate the next generation of the population of structural models adapts according to the success of the evolution achieved by using that operator in the previous generation. This leads to the automatic selection of the best operators, providing convergence of the structural models to the true crystal structure. One of the main problems preventing the use of stochastic evolutionary genetic algorithms for structure analysis is the need for a non-trivial empirical selection of genetic operators. Applying a self-configuring genetic algorithm to automate the selection of optimal genetic operators in the task of modeling the atomic crystal structure of chemical compounds from X-ray diffraction data is suggested for the first time. In determining the crystal structure of Ba2CrO4 with the self-configuring genetic algorithm, the rate of convergence to the true crystal structure reached 80%. This opens the possibility of developing an automated evolutionary genetic algorithm for structural analysis based on X-ray diffraction data.
Keywords:
evolutionary algorithms, genetic algorithms, self-configuration of genetic algorithms, crystal structure, X-ray powder diffraction, full-profile analysis, determination of crystal structure, self-configuration, diffraction pattern, genetic operators
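The adaptation mechanism described in the abstract, shifting operator selection probabilities toward the operators that were successful in the previous generation, can be sketched as follows; the update rule, the probability floor and the success scores are illustrative, not the authors' exact scheme:

```python
import random

# Each genetic operator variant keeps a selection probability that adapts
# according to the success (e.g., fitness improvement) it produced in the
# previous generation of structural models.
operators = {"tournament": 1 / 3, "roulette": 1 / 3, "rank": 1 / 3}
MIN_P = 0.05  # floor so that no operator is ever excluded entirely

def adapt(probs: dict, success: dict) -> dict:
    """Redistribute probability mass toward more successful operators."""
    total = sum(success.values()) or 1.0
    raw = {k: max(MIN_P, success[k] / total) for k in probs}
    norm = sum(raw.values())
    return {k: v / norm for k, v in raw.items()}

# Imagined per-operator average fitness improvement in the last generation:
success = {"tournament": 0.9, "roulette": 0.1, "rank": 0.4}
operators = adapt(operators, success)

# Pick the selection operator to use when generating the next generation:
choice = random.choices(list(operators), weights=list(operators.values()))[0]
print(operators, choice)
```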
Reference:
Malykhin A.Yu., Slyusar’ V.V.
Implementation of a mobile application for measuring electric transport characteristics
// Software systems and computational methods.
2014. № 3.
P. 387-392.
DOI: 10.7256/2454-0714.2014.3.65652 URL: https://en.nbpublish.com/library_read_article.php?id=65652
Abstract:
The authors present a brief analysis of the electric transport market and on this basis conclude that software applications for mobile devices that measure the static and dynamic characteristics of electric vehicles are highly relevant and in demand. The article reviews modern software and hardware for providing users of electric transport with relevant information about the state of electric vehicles. The authors select a mobile operating system for realizing this idea. The algorithm of the mobile application is designed around the use of hardware facilities to obtain the raw data. The application architecture is built on the main concepts and conventions of the selected mobile OS, Android, and on a study of the technical documentation, review articles, books and publications on the topic. The article also describes experiments on the compatibility of the OS with hardware devices. The paper shows the relevance of developing a mobile application for measuring the static and dynamic characteristics of individual vehicles equipped with electric motors. The authors give a brief analysis of competing hardware and software for the same purpose and create the algorithm of the software application, whose architecture is based on the Model-View-Controller methodology.
Keywords:
OS Android, electric transport characteristics, electric transport, mobile applications, devices compatibility, software application algorithm, MVC architectural pattern, connecting USB-devices, USB-OTG, PowerWatcher
Reference:
Bakhrushin V.E.
Software implementation of methods for analyzing non-linear statistical relationships in the R system
// Software systems and computational methods.
2014. № 2.
P. 228-238.
DOI: 10.7256/2454-0714.2014.2.65265 URL: https://en.nbpublish.com/library_read_article.php?id=65265
Abstract:
Existing software for statistical data analysis (SPSS, Statistica, etc.) usually offers, for detecting correlations, only methods suited to finding linear relationships in numerical data, along with some relation indicators for rank, qualitative and mixed data. However, the actual relationship between quantitative data is often nonlinear. As a result, the available tools cannot identify such relationships, which can lead to false conclusions about the absence of correlation. A universal indicator of the presence of a statistical relationship between two series of numerical data is the sample coefficient of determination. There are two approaches to calculating this coefficient: the first is based on approximating the unknown function with a piecewise constant function; the second is based on smoothing the available data. The article proposes software implementations of both methods in the R system. The advantage of this system is the availability of a large number of specialized library functions for statistical analysis, as well as the ability to write programs for non-standard tasks. Testing the developed applications on model examples confirmed their correctness, allowing their use for solving practical problems of nonlinear correlation analysis.
Keywords:
nonlinear relationship, coefficient of determination, software, R programming Language, data smoothing, correlation ratio, Pearson correlation coefficient, testing, data grouping, piecewise constant function
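The first of the two approaches, approximating the unknown dependence with a piecewise constant function over x-bins, can be sketched in Python (the paper's implementations are in R); the function name, the bin count and the test data are illustrative:

```python
import numpy as np

def determination_coefficient(x, y, bins=10):
    """Sample coefficient of determination via a piecewise-constant fit:
    group y by x-bins and compare within-group scatter to total scatter."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    ss_within = sum(np.sum((y[idx == k] - y[idx == k].mean()) ** 2)
                    for k in range(bins) if np.any(idx == k))
    return 1.0 - ss_within / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 500)
y = x**2 + rng.normal(0, 0.5, 500)  # nonlinear dependence, Pearson r near 0
print(round(determination_coefficient(x, y), 3))  # close to 1
```

On this example the Pearson coefficient is near zero by symmetry, while the grouped coefficient of determination correctly reports a strong (nonlinear) relationship.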
Reference:
Giniyatullin V.M., Arslanov I.G., Bogdanova P.D., Gabitov R.N., Salikhova M.A.
Methods of implementation of ternary logic functions
// Software systems and computational methods.
2014. № 2.
P. 239-254.
DOI: 10.7256/2454-0714.2014.2.65266 URL: https://en.nbpublish.com/library_read_article.php?id=65266
Abstract:
The study uses initial data in the form of truth tables of three-dimensional functions of binary, ternary and mixed logics. The values of the functions are calculated via their geometric interpretations, disjunctive/conjunctive normal forms, incompletely connected artificial neural networks, and perceptrons with a hidden layer. The article reviews in detail the intermediate computation results for all the methods mentioned above. The authors study the properties of functions of mixed logics, binary-ternary and 3-2 logics, in one, two and three dimensions. The article presents mutually equivalent implementations of logic functions in the form of a disjunctive normal form and in the form of an incompletely connected artificial neural network. The authors replaced the continuous activation function with a ternary threshold function. The study includes the construction of disjunctive normal forms and the direct synthesis of the weight matrix of an incompletely connected artificial neural network; the perceptron is trained with the backpropagation algorithm. Some conclusions are drawn using mathematical induction. The article shows that: 1) minimizing the number of neurons in the perceptron's hidden layer implicitly leads to the use of many-valued logics; 2) some functions of binary-ternary logic may be used to build disjunctive forms; 3) there is a bijective way to convert a disjunctive normal form into an incompletely connected artificial neural network and vice versa; 4) in one-dimensional 3-2 logic there are only eight functions, and all of them are listed; 5) the proposed structure of an incompletely connected artificial neural network can implement any function of ternary logic of any dimensionality.
Keywords:
XOR problem, perceptron, separating hyperplane, activation function, perfect disjunctive form, binary-ternary logic, 3-2 logic, ternary logic, neural network training, Back Propagation algorithm
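The replacement of a continuous activation with a ternary threshold function can be sketched as follows; the threshold value and the encoding of the three truth values as {-1, 0, +1} are illustrative choices:

```python
def ternary_threshold(s: float, theta: float = 0.5) -> int:
    """Map a neuron's weighted sum onto the three truth values {-1, 0, +1}."""
    if s > theta:
        return 1
    if s < -theta:
        return -1
    return 0

def neuron(inputs, weights, bias=0.0):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return ternary_threshold(s)

# One-dimensional check: with weight +1 the neuron reproduces its ternary
# input (identity), with weight -1 it computes ternary negation.
for x in (-1, 0, 1):
    print(x, neuron([x], [1.0]), neuron([x], [-1.0]))
```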
Reference:
Vinokurova S.E.
A modification of the navigation graph method for path finding in 3D space
// Software systems and computational methods.
2014. № 1.
P. 109-124.
DOI: 10.7256/2454-0714.2014.1.64048 URL: https://en.nbpublish.com/library_read_article.php?id=64048
Abstract:
Path finding is the task of determining an optimal route between two points in space. Path finding algorithms make it possible to control the movement of a character in 3D space with obstacles avoided automatically, letting the user fully immerse in the simulated 3D reality, since the problems of moving the avatar are solved by the software environment itself. The article presents a modification of the navigation graph method of path finding in 3D space in which a separate navigation graph is set for each 3D object: each navigation graph then specifies the paths inside and around a complex 3D object, or a path around a simple 3D object. The modified algorithm has a significantly lower computational cost of setting up the navigation graph, produces a more natural path, and allows finding a route that bypasses moving objects. The proposed method is suitable for real-time applications, in particular thanks to the optimizations suggested in the article.
Keywords:
path finding, navigation graph, algorithm, optimization, advanced method, A* algorithm, navigation mesh, obstacles, dynamic objects, 3d space
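A navigation graph reduces path finding to graph search, typically with the A* algorithm mentioned in the keywords; here is a minimal sketch over a hypothetical graph of 3D nodes (the paper's per-object graph construction is not reproduced):

```python
import heapq, math

# Nodes are 3D points, edges are traversable links; straight-line distance
# serves as the admissible A* heuristic.
nodes = {"a": (0, 0, 0), "b": (2, 0, 0), "c": (2, 2, 0), "d": (4, 2, 1)}
edges = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

def dist(u, v):
    return math.dist(nodes[u], nodes[v])

def a_star(start, goal):
    # Each frontier entry: (f = g + heuristic, g = cost so far, node, path).
    frontier = [(dist(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in edges[node]:
            ng = g + dist(node, nxt)
            if ng < best.get(nxt, math.inf):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + dist(nxt, goal), ng, nxt, path + [nxt]))
    return None  # goal unreachable

print(a_star("a", "d"))  # ['a', 'b', 'c', 'd']
```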
Reference:
Parfenov Yu.P., Devyaterikov D.A.
Scalable and fail-safe DBaaS storage based on the PostgreSQL product line
// Software systems and computational methods.
2014. № 1.
P. 125-130.
DOI: 10.7256/2454-0714.2014.1.64049 URL: https://en.nbpublish.com/library_read_article.php?id=64049
Abstract:
The global trend in software development is the focus on cloud-based SaaS projects. Traditional database servers are being replaced by Database as a Service (DBaaS) for storing the data of cloud-based applications. The data storage service provides transparent access to the database management system (DBMS), relieving the user of many data administration tasks. The market offers many solutions for cloud file storage of corporate data; however, relational databases are still in demand for business automation tasks, and the use of SaaS applications in small and medium business puts a premium on lowering product cost, which leads to the choice of an open-source DBMS. Among such products, the RDBMS PostgreSQL offers enhanced functionality. The article is devoted to ways of building scalable and fail-safe DBaaS storage based on the PostgreSQL product line.
Keywords:
databases, file storage, corporate data, streaming database replication, operating data reserve, cluster, user, reliability, load testing, transaction
Reference:
Moskvichev A.M., Ipatov Yu.A.
Visualization of statistics with elements of Geographic Information System technology based on GeoFlow
// Software systems and computational methods.
2013. № 4.
P. 409-421.
DOI: 10.7256/2454-0714.2013.4.63916 URL: https://en.nbpublish.com/library_read_article.php?id=63916
Abstract:
The task of information visualization leads to the problem of an acceptable and illustrative representation of research results. Traditional tools for this area are not fully capable of solving the visualization problem: visualizing data with a geographic binding requires time, resources and, in most cases, special staff training. The GeoFlow instrument, by contrast, allows data to be visualized quickly and effectively, analyzed, and placed as layers above a map. The aim of this research is to identify the potential features and benefits of the studied software module, as well as the possibility of using it in practice for analyzing multiparameter geospatial data. The authors study modular multiparameter statistical data with a geospatial binding, as well as modern methods of visualizing it. The scientific novelty of this study lies in the formulation and solution of the problem of visualizing data with a geospatial binding in a way that visually displays changes, their trends and regularities. Data visualized with the presented program allows building dynamic scenes that increase informativeness and the level of understanding, giving it an advantage over other types of visualization. The described method also allows adding 3D visualizations over the map (such as animated diagrams) that are lacking in modern GIS such as ArcGIS, MapInfo, etc.
Keywords:
data visualization, GIS technologies, geotagging, GeoFlow, cartographic base, geostatistics, data analysis, visualization techniques, multi-parameter data, visualization software
Reference:
Emaletdinova L.Yu., Novikova S.V.
Automated generation of a Mamdani-type fuzzy inference system on the basis of an existing Takagi-Sugeno-type system
// Software systems and computational methods.
2013. № 2.
P. 151-159.
DOI: 10.7256/2454-0714.2013.2.63019 URL: https://en.nbpublish.com/library_read_article.php?id=63019
Abstract:
The article describes a method of determining the parameters of a Mamdani fuzzy inference system based on its identity with a Takagi-Sugeno system. The author substantiates the properties that allow both systems to be used as universal approximators. The article introduces an algorithm for forming the membership functions of the right-hand parts of the Mamdani rules and a method for forming the system as a whole.
Keywords:
Software, fuzzy, logic, Mamdani, Takagi, approximator, Sugeno, automatization, identity, output.
Reference:
Galanina N.A., Dmitriev D.D.
Synthesis of FFT on FPGA using the system of residual classes
// Software systems and computational methods.
2013. № 1.
P. 129-133.
DOI: 10.7256/2454-0714.2013.1.62455 URL: https://en.nbpublish.com/library_read_article.php?id=62455
Abstract:
The presence of DSP blocks and a large number of I/O ports in modern FPGAs makes it possible to use them for the successful synthesis of digital signal processing algorithms. The system of residual classes implies multithreaded computation, so FPGAs are very well suited to implementing these algorithms, since computations in the channels of the system of residual classes are carried out in parallel and independently of each other. An FPGA is a microchip whose logic is not fixed at manufacture but is set up by programming in specialized software such as Quartus II. Developers of special DSP processors are highly interested in implementing the FFT using the system of residual classes on the FPGA Altera Cyclone II. The article presents the results of developing a configuration file for implementing the FFT using the system of residual classes on the FPGA Altera Cyclone II in the Altera Quartus II development environment using the Verilog hardware description language. The author describes the operation of the FPGA under the developed configuration file. The article presents timing characteristics and estimated calculation errors.
Keywords:
Software, FPGA, residual classes system, fast Fourier transformation, discrete Fourier transformation, Verilog, Quartus II, configuration file, residual classes system module, residual classes system deduction
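The channel independence that makes the system of residual classes attractive for FPGA implementation can be sketched numerically: a value is held as residues modulo pairwise-coprime moduli, channel operations proceed independently, and the result is recovered via the Chinese remainder theorem. The moduli below are illustrative, not the paper's:

```python
from math import prod

MODULI = (7, 11, 13)  # pairwise coprime; dynamic range 7 * 11 * 13 = 1001

def to_rns(x):
    """Represent x as a tuple of residues, one per independent channel."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Chinese remainder theorem reconstruction of the original value."""
    M = prod(MODULI)
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return total % M

a, b = to_rns(123), to_rns(45)
# Channel-wise multiply-accumulate, exactly as each independent residual
# classes channel would perform it in hardware:
c = tuple((x * y + x) % m for x, y, m in zip(a, b, MODULI))
print(from_rns(c), (123 * 45 + 123) % 1001)  # both equal 653
```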
Reference:
Korobeinikov A.G., Sidorkina I.G., Blinov S.Yu., Leiman A.V.
An algorithm of information classification for solving the problem of filtering unwanted messages
// Software systems and computational methods.
2012. № 1.
P. 89-95.
DOI: 10.7256/2454-0714.2012.1.61566 URL: https://en.nbpublish.com/library_read_article.php?id=61566
Keywords:
Software, information classification, spam, support vector machine, Fejér mappings