Reference:
Alekseev K.
Relational database problems
// Cybernetics and programming.
2020. № 2.
P. 7-18.
DOI: 10.25136/2644-5522.2020.2.34076 URL: https://en.nbpublish.com/library_read_article.php?id=34076
Abstract:
The relevance of this article lies in the fact that databases today form the basis of numerous information systems. The information accumulated in them is extremely valuable material, and database processing methods are now widely used to extract additional knowledge from them, which is connected with generalization and various additional methods of information processing. The object of research in this work is relational databases and DBMSs; the subject of research is the features of their use in applied programming. In accordance with the stated goal, the following tasks must be solved: 1) to consider the concept and essence of a relational database; 2) to analyze the problematic aspects of relational databases in modern conditions. Relational databases are among the most widespread due to their simplicity and clarity both at the creation stage and at the user level. It should also be noted that the main advantage of an RDB is its compatibility with SQL, the principal query language, which is intuitive for users. Nevertheless, with all the variety of approaches, there are still certain canons whose violation greatly affects both the design of a database and its operation. For example, the problem of database normalization is highly relevant: neglecting normalization makes the database structure confusing and the database itself unreliable. Promising directions include the construction of queries to a relational database using heuristic methods, as well as the accumulation of previously optimized queries with subsequent verification of whether the current query is derivable from the accumulated ones. Finally, a very slow decline of relational databases is probably under way. While they are still the primary storage medium, especially in large enterprise projects, they are gradually being displaced by non-relational solutions, which may become the majority over time.
Keywords:
undefined values, identifying relationship, data integrity constraints, tuples, DBMS, relational databases, normalization, denormalization, foreign key, primary key
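The normalization problem the abstract highlights can be illustrated with a minimal sketch (tables and data are hypothetical, not from the article): a denormalized table repeats a customer's city in every order row, so an address change must touch many rows, while the normalized schema stores it once.

```python
import sqlite3

# Hypothetical example: orders_flat repeats the city per order (update anomaly);
# the normalized pair customers/orders stores it once, referenced by key.
con = sqlite3.connect(":memory:")
con.executescript("""
    -- denormalized: city repeated in every order row
    CREATE TABLE orders_flat (order_id INTEGER, customer TEXT, city TEXT);
    INSERT INTO orders_flat VALUES (1, 'Ivanov', 'Kazan'), (2, 'Ivanov', 'Kazan');

    -- normalized (toward 3NF): city stored once, orders reference the customer
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
    INSERT INTO customers VALUES (1, 'Ivanov', 'Kazan');
    INSERT INTO orders VALUES (1, 1), (2, 1);
""")
# Updating the city now touches exactly one row instead of every order.
con.execute("UPDATE customers SET city = 'Moscow' WHERE id = 1")
rows = con.execute("""
    SELECT o.order_id, c.city
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()
print(rows)  # → [(1, 'Moscow'), (2, 'Moscow')]
```

In the flat table, the same update would have to modify every order row for that customer, and a missed row would leave the data inconsistent, which is exactly the unreliability the abstract warns about.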
Reference:
Selishchev I.A., Oleinikova S.A.
Design of the database structure for software optimization of operation of the stochastic multiphase systems
// Cybernetics and programming.
2020. № 2.
P. 42-55.
DOI: 10.25136/2644-5522.2020.2.34099 URL: https://en.nbpublish.com/library_read_article.php?id=34099
Abstract:
The object of this research is service systems that receive at their input a stream of requests representing a set of mutually dependent operations of the "finish"–"start" type. The duration of each operation is a random variable, and its execution requires one or several types of resources; it is also assumed that there are time constraints on processing a request. The goal of this research is to develop a database structure for storing information on incoming projects, operations, their interdependencies, and the resources and specialists involved. The logical structure of the database was designed using the entity-relationship methodology, which defines data values in the context of their correlation with other data. An analysis of the object of research revealed a set of requirements imposed on the database. Based on these requirements, and taking into account the normalization of relations used in relational database theory, the authors designed a structure that is universal from the perspective of its application, supports analysis of the scheduling process, and reflects the peculiarities of the object of research. This database structure can be used, without major modification, in any field that allows a project to be decomposed into many separate interdependent tasks. The article provides examples of using the database in information systems for the construction sector as well as for the development of IT projects.
Keywords:
Interdependence of works, Stochastic parameters, Multiphase systems, Normal forms, Entity-relationship, Logical data model, Database, Temporary restrictions, Schedule, Project management
Reference:
Levina T.M., Popov A.S., Filippov V.N.
Control system of the database of service of cars at the industrial enterprise
// Cybernetics and programming.
2019. № 3.
P. 29-37.
DOI: 10.25136/2644-5522.2019.3.19352 URL: https://en.nbpublish.com/library_read_article.php?id=19352
Abstract:
Any oil and gas enterprise operates a large number of motor vehicles. Each vehicle undergoes maintenance that includes a wide range of works, both large and small. To control the maintenance of a large fleet, enterprises need a software solution that ensures both control and fast data processing. Ad hoc solutions to this problem are less effective than standardized solutions developed to interoperate with corporate information systems. The authors propose to develop a database management system for keeping records of the work performed and the materials and spare parts consumed. It can also be used when determining additional bonuses for an employee, since the database shows which work each employee performed. The article presents a conceptual model for building a complex automated information system and the algorithms for interaction between the various modules of the system (the database and MS Excel); a model of permanent remote user access to the resources of the vehicle maintenance management system; and a model of differentiated user access (an authentication mechanism) to the system.
Keywords:
relational database, service, automation, database control system, conceptual model, accounting, database, car park, oil and gas company, Microsoft SQL Server
Reference:
Gribanova-Podkina M.
Database Connection Technologies from JSP Pages and Java Web Application Servlets
// Cybernetics and programming.
2019. № 2.
P. 73-85.
DOI: 10.25136/2644-5522.2019.2.19589 URL: https://en.nbpublish.com/library_read_article.php?id=19589
Abstract:
The purpose of the study is to demonstrate the diversity of solutions to the problem of connecting to a database, including a description of the developed connection-controller class as well as various ways to create connection pools on a web server and on application servers. The article discusses practical issues of using JDBC technology when building a Java web application. In the examples, the presentation and business layers of the application are developed using JSP pages and servlets, and the database runs on the MySQL platform. The described methods for creating and configuring a connection pool are illustrated on the Apache Tomcat web server and the GlassFish application server. The question of optimizing database connections in Java applications remains open despite the diversity of solutions. The study examines and proposes methods for constructing connector classes and various methods for creating connection pools, and describes the results of solving problems that arise when implementing the described techniques. A detailed classification of ways to connect to the database is given.
Keywords:
GlassFish, MySQL, application server, web server, connection pool, JDBC, database, web application, connection controller, data source
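The article's examples are Java/JDBC on Tomcat and GlassFish; the core pooling idea they rely on can be sketched in a few lines of Python (class name and sizes are hypothetical, and sqlite3 stands in for MySQL): connections are created up front and then borrowed and returned through a thread-safe queue, which is roughly what a JDBC DataSource pool does internally.

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal pool sketch (hypothetical, not the article's Java class):
    a fixed set of connections is created once and reused, so request
    handlers avoid the cost of opening a connection per request."""

    def __init__(self, db_path, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(db_path, check_same_thread=False))

    def acquire(self):
        return self._pool.get()   # blocks if all connections are in use

    def release(self, conn):
        self._pool.put(conn)      # return to the pool, do not close

pool = ConnectionPool(":memory:", size=2)
conn = pool.acquire()
print(conn.execute("SELECT 1").fetchone())  # → (1,)
pool.release(conn)
```

In a real deployment the pool is usually configured declaratively (e.g. a JNDI DataSource in Tomcat's context.xml), and the application only ever sees `acquire`/`release` semantics, as above.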
Reference:
Bodrina N., Sidorov K., Filatova N., Shemaev P.
Software complex for the formation of situationally conditioned patterns of physical signals
// Cybernetics and programming.
2018. № 6.
P. 87-97.
DOI: 10.25136/2644-5522.2018.6.28151 URL: https://en.nbpublish.com/library_read_article.php?id=28151
Abstract:
The subject of research is the task of creating tools for forming information resources with samples of physical signals recorded from a person experiencing an emotional reaction caused by a certain informational stimulus. The results are presented of an analysis of the best-known national databases containing examples of emotional reactions in patterns of English and French speech, photographs of people, and samples of cardiograms, galvanic skin responses, heart rate, and other physical signals. The structure of a new hardware-software complex is considered for the formation and maintenance of an open information resource that integrates recordings of Russian speech with recordings of other physical signals captured while a person experiences emotional reactions of different signs. Field experiments with the hardware and software were conducted. Methods of spectral analysis and nonlinear dynamics were used to form vector patterns of the physical signals, and the database was developed using systems analysis methods. The new results include the structure of the software and information support; the features of the methodological support, which make it possible to register objectively confirmed changes in a person's emotional state; the features of the technical support, which records biomedical signals through five channels (video, audio, electroencephalogram, electrocardiogram, electromyogram); and the structure and features of an open online version of the multimodal emotion base. The creation and periodic updating of the database of situational-response patterns makes complete information on each experiment available to all interested users, including recordings of speech and physical signals as well as data on the experimental methodology and observation protocols.
Keywords:
base of emotional reactions, stimulated emotion, electroencephalogram, Russian speech, emotional reaction, software complex, database, attractor, base of situational responses, toolkit for filling the database
Reference:
Raikhlin V.A., Minyazev R.S., Klassen R.K.
The efficiency of a large conservative type DBMS on a cluster platform
// Cybernetics and programming.
2018. № 5.
P. 44-62.
DOI: 10.25136/2644-5522.2018.5.22301 URL: https://en.nbpublish.com/library_read_article.php?id=22301
Abstract:
The results of original research on the principles of organization and the operational features of conservative cluster-type DBMSs are discussed. The relevance of the adopted orientation toward large-scale databases is determined by modern trends in the intelligent processing of large information arrays. Growing database volumes require hashing them over cluster nodes, which necessitates a regular query processing plan with dynamic segmentation of intermediate and temporary relations. A comparative evaluation of the results is given against the alternative "core-to-query" approach, in which the database is replicated across cluster nodes. A significant place in the article is occupied by a theoretical analysis of GPU acceleration for conservative DBMSs with a regular query processing plan. Experimental studies were carried out on specially developed full-scale models (Clusterix, Clusterix-M, PerformSys) with MySQL at the executive level. The theoretical analysis of GPU acceleration uses the proposed Clusterix-G project as an example. The following are shown: the behavior of the Clusterix DBMS in dynamics and the optimal architectural variant of the system; the many-fold increase in scalability and performance in the transition to multiclustering (the Clusterix-M DBMS) or to the advanced "core-to-query" technology (PerformSys); and the non-competitiveness of GPU acceleration in comparison with the "core-to-query" approach for medium-sized databases that do not exceed the cluster's memory but do not fit into the GPU's global memory. For large-scale databases, a hybrid technology (the Clusterix-G project) is proposed, with the cluster divided into two parts: one performs selection and projection over a database that is hashed over the nodes and compressed, while the other is a "core-to-query" connection; the functions of the GPU accelerators differ between the two parts.
Theoretical analysis showed this technology to be more effective than Clusterix-M, but the advisability of using graphics accelerators within this architecture requires further experimental research. It is noted that the Clusterix-M project remains viable in the Big Data field; the same holds for the "core-to-query" approach, given the availability of modern, expensive information technology.
Keywords:
Performance, Scalability, Dynamic relationship segmentation, Host hashing, Regular processing plan, Big Data, Conservative DBMS, Multiclusterisation, Advanced technology, GPU-accelerators efficiency
Reference:
Lobanov A.A., Filgus D.I.
The method of searching for the shortest Hamiltonian path in an arbitrary graph based on the rank approach, which provides high efficiency and small error in solving the problem of organizing the process of managing multiple transactions and queries when they are implemented in network databases
// Cybernetics and programming.
2018. № 5.
P. 63-75.
DOI: 10.25136/2644-5522.2018.5.26513 URL: https://en.nbpublish.com/library_read_article.php?id=26513
Abstract:
The object of research is the workload-management subsystem in a network database. The subject of research is the management of the scheduling of subscriber requests and transactions in a network database. In many cases, existing solutions do not provide the required access time or accuracy of the solution found, so a method for scheduling user requests and transactions is needed. Particular attention is paid to algorithms for sampling queries in network databases, as well as to a conceptual model of the process of managing transactions and queries. Methods of graph theory are used; the effectiveness of the solution was evaluated using a systems approach, systems analysis, and operations research theory, and the experimental data obtained were processed in accordance with the provisions of mathematical statistics. A method has been developed for finding the shortest Hamiltonian path in an arbitrary graph based on a rank approach, which provides high efficiency and small error in organizing the management of multiple transactions and queries implemented in network databases. The developed method minimizes the idle time of computing devices, reduces the volume and time of data transfer from one device to another, improves overall scalability, minimizes data access time, and so on. An important advantage of the proposed method is the reduction in the number of elementary operations and in the number of vectors processed in a request's operation queue, which significantly reduces the time needed to form the operation queues of requests.
Keywords:
transaction, stub tree, network database, rank approach, query, Hamiltonian path, graph, rank, optimization, deviation in measurements
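The abstract does not spell out the rank approach itself, so as a point of reference here is only a simple nearest-neighbor baseline for the same problem, finding a short Hamiltonian path over a weighted graph given as a distance matrix (the matrix below is hypothetical):

```python
def nearest_neighbor_path(dist, start=0):
    """Greedy Hamiltonian-path baseline: always move to the closest
    unvisited vertex. Illustrates the problem setting only; the
    article's rank approach is a more elaborate method."""
    n = len(dist)
    path, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = path[-1]
        nxt = min(unvisited, key=lambda v: dist[last][v])
        path.append(nxt)
        unvisited.remove(nxt)
    return path

def path_length(dist, path):
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

# Hypothetical 4-vertex graph as a symmetric distance matrix.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
path = nearest_neighbor_path(dist)
print(path, path_length(dist, path))  # → [0, 1, 3, 2] 14
```

Exact search over all vertex orderings is factorial in the number of vertices, which is why heuristic methods with a small, bounded error, such as the rank approach the article develops, matter for scheduling transactions in practice.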
Reference:
Belikova M.Y., Karanina S.Y., Karanin A.V., Glebova A.V.
Visualization and analysis of WWLLN network data on the territory of the Altai-Sayan region using Web-GIS
// Cybernetics and programming.
2018. № 2.
P. 1-8.
DOI: 10.25136/2644-5522.2018.2.25405 URL: https://en.nbpublish.com/library_read_article.php?id=25405
Abstract:
At present, the technology for creating information-analytical systems in the field of climate-ecological monitoring is quite well developed. The construction of such systems is based on GIS and Internet technologies and includes both data from monitoring stations and remote sensing data. The article describes the architecture of a web application that implements elements of GIS technology and was developed to collect, store, visualize, search, and analyze information on lightning discharges recorded by the World Wide Lightning Location Network (WWLLN). The software and technology platform of the system is based on freely distributed technologies and software, including the Ubuntu operating system, the NGINX web server, Python as the main development language together with the Django framework, the PostgreSQL/PostGIS database, the GDAL library, and OpenLayers. The WWLLN archive data and the results of clustering are included in the web-GIS database. The system allows selecting information about lightning discharges as well as performing cluster analysis on the resulting sample. The developed web GIS can provide specialists with convenient web-based tools for using WWLLN data to study the regional climatology of lightning activity.
Keywords:
web application, service-oriented architecture, geospatial data, web-service, software, information system, web-mapping, database, lightning, WWLLN
Reference:
Suchkova E.A., Nikolaeva Y.V.
Developing the Best Possible Data Storage Structure for Decision Support Systems
// Cybernetics and programming.
2016. № 4.
P. 58-64.
DOI: 10.7256/2306-4196.2016.4.18281 URL: https://en.nbpublish.com/library_read_article.php?id=18281
Abstract:
The article presents the results of the development and experimental comparison of data structures and data storage methods. The models were based on a financial market decision support system and on expert evaluations from an electronic tendering system. In both cases the authors built conceptual data models, stored the data in text files and in relational and non-relational databases, and evaluated the efficiency of the resulting structure from the point of view of storage and access efficiency, automatic integrity control, and data consistency. Using theoretical methods (abstraction, analysis, synthesis, and idealization), the authors developed conceptual database models; using empirical methods (experiment and comparison), they checked the efficiency of data storage in text files and in relational and non-relational databases. As the main conclusion of the research, the authors provide recommendations on selecting the best data storage structures for electronic decision support systems. The experimental comparison showed that for the developed expert-evaluation storage structure a relational database management system is the most effective option, whereas for storing information about financial markets in the developed decision support system it is better to use text files.
Keywords:
information system, development, decision support, non-relational database, relational database, database, data, structure, expert evaluation, financial market
Reference:
Sokol'nikov A.M.
Comparative analysis of approaches to the development of database management systems and their architecture for highly loaded web services
// Cybernetics and programming.
2014. № 4.
P. 1-13.
DOI: 10.7256/2306-4196.2014.4.12800 URL: https://en.nbpublish.com/library_read_article.php?id=12800
Abstract:
In today's world, the problem of processing and storing huge amounts of data is becoming increasingly pressing. Messages in social networks, photos, streaming video: together they create a high load on server-side software. For this reason, common approaches used in desktop software design may be ineffective, since they do not take into account the huge load placed on the application by a vast number of users. Currently there is no clear definition of a highly loaded system. In most cases the term is used when software fails to operate under some momentary load. There is no specific threshold at which a system can be considered highly loaded, since every piece of software is different and the same number of requests can lead to completely different loads on resources. The present study of database management systems consisted of several experiments measuring the speed of common database operations such as adding, selecting, and deleting. Based on the results of these experiments, the author draws conclusions and gives recommendations on choosing a database management system. The article reviews approaches to developing highly loaded systems, highlights their features and disadvantages, and shows examples of the use of these approaches in popular web services such as ВКонтакте, Facebook, Google, and Яндекс. The article provides a comparative analysis of the MySQL and MongoDB database management systems. In conclusion, the author gives recommendations on selecting a database management system depending on the approach taken to designing the architecture of a highly loaded project.
Keywords:
highly loaded software systems, application architecture, data storage, database, MySQL, MongoDB, DBMS, software, large amounts of data, relational databases
Reference:
Melikov A.V.
Multidimensional data organization in surveys information systems
// Cybernetics and programming.
2014. № 1.
P. 1-16.
DOI: 10.7256/2306-4196.2014.1.10331 URL: https://en.nbpublish.com/library_read_article.php?id=10331
Abstract:
The author analyzes the advantages of multidimensional logical data organization in survey information systems and proposes a conceptual model of the questioning process in the information system, characterized by the processing of expert-evaluation results during their multivariate analysis for decision-making. The author developed a mathematical model of data transformation from the source to relational database storage. Such a representation of the multidimensional data model provides reliable and compact storage in complex information structures and the ability to highlight important information during data processing, all of which increases the efficiency of processing expert information and facilitates the design, on its basis, of an adaptive, integrable, and dynamic survey information system. To achieve these objectives the author uses the theory of information processes and systems, database theory, set theory, and graph theory. The author developed a new data-warehouse structure based on the algebra of tuples, which increases the reliability and informativeness of conclusions derived from expert interviews by excluding the processing of semantically equivalent information and reducing the number of empty values in hypercube tables. The proposed data-warehouse structure also makes it possible to analyze unexpected data not covered by the survey plan, thereby increasing the informational value of the processed expert data.
Keywords:
information system survey, database theory, multidimensional data model, data storage, set theory, algebra of tuples, graph theory, hypercube, measurement, attribute
Reference:
Luchinin Z.S.
Method of referential integrity of the document-oriented database
// Cybernetics and programming.
2014. № 1.
P. 17-22.
DOI: 10.7256/2306-4196.2014.1.11081 URL: https://en.nbpublish.com/library_read_article.php?id=11081
Abstract:
The subject of this study is document-oriented databases, which can store semistructured information and belong to the non-relational approach to data storage. The study is based on the notion of referential integrity, which is explained not only for relational databases. Based on an analysis of the relational approach to maintaining referential integrity, the author poses the task of handling large amounts of data in a distributed environment using document-oriented database technology, taking the structure of the data into account. The article proposes a method for maintaining the referential integrity of a distributed document-oriented database based on evaluating the links between documents. The method introduces two types of links between documents, strong and weak, which mimic the strategies for dealing with foreign keys in relational databases. The author substantiates implementing this method as a unit separate from the database management system.
Keywords:
database integrity, referential integrity, relational databases, foreign key, document-oriented model, semistructured data, big data, replication, partitioning
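The strong/weak link idea can be sketched as follows (document names, field layout, and the repair policy are hypothetical illustrations of the abstract, not the article's implementation): a dangling strong link is reported as a violation, much like a RESTRICT foreign key, while a dangling weak link is simply cleared, much like SET NULL.

```python
# Documents keyed by id; each field holds a (link_kind, target_id) pair.
docs = {
    "order:1": {"customer": ("strong", "customer:7"),
                "coupon": ("weak", "coupon:3")},
    "customer:7": {},
    # "coupon:3" does not exist, so the weak link dangles
}

def check_integrity(docs):
    """Report dangling strong links; silently clear dangling weak links."""
    violations = []
    for doc_id, fields in docs.items():
        for field, (kind, target) in list(fields.items()):
            if target is not None and target not in docs:
                if kind == "strong":
                    violations.append((doc_id, field, target))
                else:
                    fields[field] = (kind, None)  # weak link: just clear it
    return violations

print(check_integrity(docs))       # → []  (only the weak link dangled)
print(docs["order:1"]["coupon"])   # → ('weak', None)
```

Running such a checker outside the DBMS matches the article's argument that the integrity mechanism should live in a separate unit, so it can span a distributed, partitioned document store.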
Reference:
Milushkov V.I., Gatchin Y.A.
Using a binary search to optimize the query to retrieve data
// Cybernetics and programming.
2012. № 2.
P. 1-9.
DOI: 10.7256/2306-4196.2012.2.13867 URL: https://en.nbpublish.com/library_read_article.php?id=13867
Abstract:
With the increasing popularity of DBMSs, their use inevitably begins to demand more and more resources. At first it is possible (and, of course, necessary) to lower the load by optimizing the algorithms and/or architecture of the application. But what if everything that can be optimized has already been optimized, and the application still cannot cope with the load? This article reviews methods and ways of using binary search to optimize data-retrieval queries. The authors give an overview of a PHP + MySQL setup and solve the problem of transferring the queue from unindexed fields to tables with primary keys, which significantly speeds up both the query and the database itself. The proposed solution greatly accelerates the search for the desired item by reducing the search range, while sacrificing some computational accuracy. For statistical purposes it is not critical if a few elements out of millions are not taken into account; otherwise, the search must be completed with zero epsilon after reaching the last level of the tree.
Keywords:
bisection method, scaling, search range, query optimization, binary search, data structure, index, primary key, database, DBMS
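The key move described above, answering a range condition by binary search over sorted primary keys instead of scanning an unindexed column, can be shown with the standard library's `bisect` module (the key list below is hypothetical sample data standing in for an indexed column):

```python
from bisect import bisect_left, bisect_right

keys = [3, 8, 15, 23, 42, 57, 91]  # sorted primary keys (hypothetical)

def id_range(keys, lo, hi):
    """Return all keys with lo <= key <= hi using two O(log n)
    binary searches, instead of an O(n) scan of every row."""
    return keys[bisect_left(keys, lo):bisect_right(keys, hi)]

print(id_range(keys, 10, 60))  # → [15, 23, 42, 57]
```

This is the same trade the abstract describes: once the data is keyed monotonically, each comparison halves the remaining search range, and an epsilon-terminated search can stop early at the cost of possibly missing a few boundary elements.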
Reference:
Ivanov A.Y., Sinitsyn A.P., Nesvyazin I.A.
Pathfinding System for 3d Space
// Cybernetics and programming.
2012. № 1.
P. 6-11.
DOI: 10.25136/2306-4196.2012.1.6 URL: https://en.nbpublish.com/library_read_article.php?id=6
Abstract:
The article is devoted to extending the navigation graph (NG) method for 3D-space pathfinding systems by using multiple NGs, one per object, instead of a single graph. This method significantly reduces the volume of manual work needed to set up the NGs, as well as the overall running time of the algorithm, without distorting the adequacy of the path found.
Keywords:
3D space, navigation, graph, pathfinding
Reference:
Malashkevich V.B., Malashkevich I.A.
Efficient data structure
// Cybernetics and programming.
2012. № 1.
P. 1-6.
DOI: 10.7256/2306-4196.2012.1.13863 URL: https://en.nbpublish.com/library_read_article.php?id=13863
Abstract:
The efficiency of information retrieval systems depends significantly on the data structure used. The selected data structure determines the speed of data operations (search, insertion, deletion) and the required memory cost. Owing to the importance of optimizing data structures, implementations of a variety of data structures and analyses of their effectiveness are well represented in the modern scientific and technical literature. A wide range of known efficient data structures exploits the properties of linear arrays and binary trees. The article deals with a special data structure known as a digital trie (trie, as opposed to tree). The search speed in the proposed structure is a statistical value: the worst case requires O(log(N/2)) operations and the average case O(log(N/2)/2). It also has better memory costs than a traditional digital tree. Thus the authors propose and implement an efficient data structure, a "vertical" digital tree, characterized by fast data retrieval and low memory consumption.
Keywords:
key, feathered trees, red-black trees, array of pointers, digital tree, tree structure, data structure, node, memory costs, search
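For orientation, a plain bitwise digital trie, the structure the article's "vertical" layout optimizes, can be sketched as follows (the fixed key width and dict-based nodes are illustrative choices, not the authors' implementation): keys are stored bit by bit, so lookup cost depends on the key length rather than on the number of stored keys.

```python
BITS = 8  # hypothetical fixed key width in bits

class BitTrie:
    """Minimal digital trie over integer keys: each level branches
    on one bit of the key, from the most significant bit down."""

    def __init__(self):
        self.root = {}

    def insert(self, key):
        node = self.root
        for i in range(BITS - 1, -1, -1):
            bit = (key >> i) & 1
            node = node.setdefault(bit, {})
        node["leaf"] = key  # mark the end of a complete key

    def contains(self, key):
        node = self.root
        for i in range(BITS - 1, -1, -1):
            node = node.get((key >> i) & 1)
            if node is None:
                return False
        return "leaf" in node

t = BitTrie()
for k in (5, 42, 200):
    t.insert(k)
print(t.contains(42), t.contains(7))  # → True False
```

Each lookup here walks at most BITS levels regardless of how many keys are stored; the memory overhead of the per-level nodes is exactly what compact layouts such as the article's vertical digital tree aim to reduce.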