Reference:
Lobanov A.A..
Conceptual optical schemes of direction finding computational devices for guiding the space probe to a landing point on small bodies of the solar system
// Cybernetics and programming.
2019. № 1.
P. 83-89.
DOI: 10.25136/2644-5522.2019.1.28720 URL: https://en.nbpublish.com/library_read_article.php?id=28720
Abstract:
The subject of the research is an optical direction finder for navigating and guiding a space probe. The use of an optical correlation computing device is motivated by the need to reduce the load on the space probe's on-board computer system. The paper focuses on optical correlation computing devices for building correlation-extremal direction finders. The principal optical schemes of this class of optical devices are described. The method of mathematical modeling is used for building and analyzing the optical schemes of the proposed optical correlation computing devices. Both advantages and disadvantages of the existing optical schemes are noted. Requirements for the parameters of a special on-board optical correlation computing device for direction finding are proposed. It is shown that the optical device can potentially be used for navigation and guidance of a space probe during landing on a small body of the solar system.
Keywords:
spatial information, optical computer, autonomous guidance, on-board computing system, image recognition, optical processing, optical scheme, direction finding, autonomous navigation, spatial data
Reference:
Lobanov A.A., Filonov A.S..
The Method of Optical Processing of Spatial Information for the Purpose of Guidance and Landing Space Probes on Solar System Small Bodies
// Cybernetics and programming.
2018. № 2.
P. 94-102.
DOI: 10.25136/2644-5522.2018.2.25971 URL: https://en.nbpublish.com/library_read_article.php?id=25971
Abstract:
In this article the authors focus on issues related to using the promising optical correlation technique for on-board guidance and landing of a space probe on the surface of a solar system small body. The authors use the ability of optical systems to perform the Fourier transform, which makes it possible to construct optical-electronic devices for high-speed, high-efficiency computing systems. The distribution of illumination (energy) in the plane of the entrance pupil or of the lens serves as the input data flow for an optical computing system constructed on this principle. In the general case, this information acts as a reference transmission function of the two coordinates (x, y). To analyze these aspects, the authors use mathematical analysis methods, in particular integral calculus, to create a mathematical model of an optical correlation bearing finder with a rotating reference standard. The authors suggest modulating the optical signal passing through the transparent reference standard in both amplitude and phase. The advantage of the method is the very high speed of information processing, which is limited only by the velocity of light and, given the small size of the devices, is almost instantaneous. The authors give a theoretical justification of the efficiency of the optical polar correlation method. They demonstrate that, in order to neutralize the reference standard when guiding and landing a space probe, it is better to use rotation of the reference image. The authors also analyze the method of rotating the reference image to find the extremum of the correlation signal and to neutralize the reference standard.
Keywords:
neutralizing the reference standard, optical computer, computer vision, computational complexes, pattern recognition, small bodies, landing a space probe, guidance a space probe, spatial information, optical processing
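Since a coherent optical system performs a Fourier transform of the input field, the correlator described in the abstract above effectively computes a cross-correlation between the observed scene and a reference. The following Python sketch is only a digital analogue of that operation (FFT-based cross-correlation); the scene, reference cut-out, and sizes are assumptions, not data from the paper.

```python
# Digital analogue of the optical correlator: cross-correlation of an input
# image with a reference computed via the Fourier transform, the operation a
# coherent optical system performs with lenses. Illustrative sketch only.
import numpy as np

def cross_correlate(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Return the 2D cross-correlation surface of `image` with `reference`."""
    F_img = np.fft.fft2(image)
    F_ref = np.fft.fft2(reference, s=image.shape)     # zero-pad the reference
    corr = np.fft.ifft2(F_img * np.conj(F_ref))       # correlation theorem
    return np.abs(np.fft.fftshift(corr))              # correlation intensity

rng = np.random.default_rng(0)
scene = rng.random((256, 256))
template = scene[100:132, 80:112]                     # reference cut from the scene
surface = cross_correlate(scene, template)
peak = np.unravel_index(np.argmax(surface), surface.shape)
lag = tuple(int(p) - s // 2 for p, s in zip(peak, scene.shape))
print("reference found at offset:", lag)              # ~ (100, 80), where the template was cut
```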
Reference:
Belikova M.Y., Karanina S.Y., Glebova A.V..
Experimental comparison of clustering algorithms in the problem of lightning data grouping
// Cybernetics and programming.
2018. № 1.
P. 15-26.
DOI: 10.25136/2644-5522.2018.1.25261 URL: https://en.nbpublish.com/library_read_article.php?id=25261
Abstract:
The authors present the results of an experimental comparison of cluster analysis of thunderstorm data using the k-means and dbscan algorithms and hierarchical agglomerative algorithms, in which the nearest-neighbor (single linkage), complete linkage, and average linkage methods and the Ward method are used to calculate the inter-cluster distance. The influence of the normalization parameters on the number of clusters determined by the considered algorithms on the test sample is estimated. Data on the registration time and coordinates of lightning discharges recorded by the World Wide Lightning Location Network (WWLLN) were used for testing. Grouping solutions for the chosen clustering algorithms were constructed with the help of the NbClust, dbscan, and fpc cluster analysis packages written in the R language. The article shows that the choice of the normalization parameter values has a significant effect on the number of clusters extracted from the considered sample by the hierarchical clustering algorithms (especially by the nearest-neighbor method). The choice of the normalization parameters has practically no effect, or a negligible effect, on the results of lightning clustering using the k-means and dbscan algorithms. The best agreement with expert judgment was obtained for the dbscan algorithm with normalization parameters corresponding to linear dimensions of a thunderstorm convective cell of 100 km and a time period of 30 minutes to an hour.
Keywords:
cluster validity, hierarchical algorithms, dbscan, k-means, clustering algorithms, data mining, average silhouette width (asw), data normalization, lightning, WWLLN
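A minimal sketch of the grouping step described in the abstract above, using scikit-learn's DBSCAN rather than the R packages (NbClust, dbscan, fpc) used by the authors. The normalization divides each coordinate by a characteristic scale (roughly 100 km and 30 minutes, as in the abstract); the stroke list and the eps/min_samples values are assumptions.

```python
# Group lightning strokes into thunderstorm cells after scale normalization.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: time [s], x [km], y [km] of a lightning stroke (synthetic example).
strokes = np.array([
    [0,     10.0, 12.0],
    [120,   14.0, 15.0],
    [300,   11.0, 13.5],
    [5400, 310.0, 290.0],
    [5600, 305.0, 295.0],
])

T_SCALE_S = 30 * 60      # temporal normalization: 30 minutes
L_SCALE_KM = 100.0       # spatial normalization: 100 km

normalized = strokes / np.array([T_SCALE_S, L_SCALE_KM, L_SCALE_KM])
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(normalized)
print(labels)            # strokes with the same label belong to one cell
```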
Reference:
Krevetsky A.V., Urzhumov D.V..
Identification of the chain structure images within the groups of point objects via correlation of code elements of their contours.
// Cybernetics and programming.
2017. № 6.
P. 19-27.
DOI: 10.25136/2644-5522.2017.6.25091 URL: https://en.nbpublish.com/library_read_article.php?id=25091
Abstract:
Recognizing the shape of images of groups of point and/or small objects (TRP) is a non-trivial task due to the incoherence and degeneracy of their elements. The task becomes even more complicated for TRPs with a non-stationary configuration, such as "chains" and "clusters". Distinguishing these two types of images has independent value, and it can also be used to branch the algorithm for more detailed recognition of the TRP. To synthesize effective discriminators of chains and clusters, it is important to choose a principle for describing the TRP shape and the discriminating features, and to determine the statistics and decision-making characteristics in the presence of disrupting factors. The problem is solved by methods of digital image and signal processing, the theory of contour analysis for synthesizing algorithms that describe and analyze the shape of images, and methods of probability theory and mathematical statistics for synthesizing decision rules. The procedure of constructing a minimal spanning tree is used to link the isolated TRP elements into a single object, whose shape is described by a complex-valued chain code of its contour. The dependence of the width of the energy spectrum of such a contour, or of the correlation interval of its samples, on the complexity of the shape allowed the authors to choose characteristics of the contour's autocorrelation function (ACF) as the discriminating feature for distinguishing chains from clusters. The width of the ACF (correlation interval) and the correlation of neighboring elements of the TRP contours are studied as such characteristics. The corresponding algorithms for distinguishing TRPs of these classes, acting as chain identifiers, are synthesized. The characteristics of the decision algorithms are found for various observation conditions, and a comparative analysis of their effectiveness and applicability limits is performed.
Keywords:
chain recognition, group point object, point group configuration, contour shape, contour complexity, contour autocorrelation function, minimal tree shape, contour correlation interval, correlation of contour samples, cluster identification
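A hedged sketch of the discriminating feature from the abstract above: an ordered point sequence (the contour, here given directly rather than obtained from a minimum spanning tree) is encoded as a complex-valued chain code, and the width of its autocorrelation function separates smooth "chains" from erratic "clusters". The point sequences and the 0.5 threshold are assumptions.

```python
# Correlation interval of a contour's complex-valued chain code.
import numpy as np

def correlation_interval(points: np.ndarray, threshold: float = 0.5) -> int:
    """Number of lags over which the normalized contour ACF stays above `threshold`."""
    z = points[:, 0] + 1j * points[:, 1]
    code = np.diff(z)                             # complex-valued chain code
    code = code - code.mean()
    acf = np.correlate(code, code, mode="full")[len(code) - 1:]
    acf = np.abs(acf) / np.abs(acf[0])            # normalized ACF magnitude
    below = np.flatnonzero(acf < threshold)
    return int(below[0]) if below.size else len(acf)

t = np.linspace(0, 1, 30)
chain = np.c_[t * 10, 0.2 * np.sin(6 * t)]                 # elongated "chain"
cluster = np.random.default_rng(1).normal(size=(30, 2))    # compact "cluster"
print(correlation_interval(chain), correlation_interval(cluster))   # wide vs narrow ACF
```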
Reference:
Moshkova T.V., Romenskii S.A., Rotkov S.I., Tyurina V.A..
Algorithmic and software implementation of the electronic designer of the graph of the 3D object assembly
// Cybernetics and programming.
2017. № 5.
P. 1-13.
DOI: 10.25136/2644-5522.2017.5.24342 URL: https://en.nbpublish.com/library_read_article.php?id=24342
Abstract:
The subject of this article is the application of set-theoretic operations for forming the assembly graph of a 3D object's construction. The article also considers the algorithmic and software organization of an electronic constructor of the 3D object assembly graph as one of the ways of transforming archives of design documentation from paper form into an electronic model of the object. Automation of this process is of great scientific interest and is being pursued by several geometric modeling schools, including the Nizhny Novgorod school. In the research the authors used theoretical and empirical methods: analytical generalization and systematization of information from literary and other sources, formulation of the problem, identification and resolution of contradictions, use of a heuristic approach in constructing working hypotheses, and application of geometric modeling methods. Automating the conversion of 2D drawn images into an electronic 3D product model, known as the "solution of the inverse problem of descriptive geometry", has long attracted researchers but has not yet been implemented in any of the known geometric modeling systems. Studying and analyzing the user's actions when selecting elementary bodies in the drawing makes it possible to refine the algorithm of "automatic reading of the drawing" with maximum consideration of the heuristic component of this process. The program complex developed by the authors is one of the preparatory stages for converting archives of drawn design documentation from paper into electronic form and for transforming a 2D drawing into an electronic three-dimensional model of an object. It also allows conducting practical research in the poorly studied area of human perception of drafting documentation, which is necessary for training software developers of CAD systems.
Keywords:
Continuous Acquisition and Lifecycle Support, electronic model of product, education, 3D geometric modeling, archives, drawings, boolean operations, graph assembly, Building Information Modeling, computer-aided design
Reference:
Urzhumov D.V., Krevetsky A.V..
The Architecture of the 3D Scene Generator with Point and Small-Sized Object Groups
// Cybernetics and programming.
2016. № 6.
P. 20-29.
DOI: 10.7256/2306-4196.2016.6.21007 URL: https://en.nbpublish.com/library_read_article.php?id=21007
Abstract:
The subject of the research is the architecture of a generator of 3D scenes containing groups of point and small-sized objects with coordinate and impulse noise. The authors examine methods of constructing software class hierarchies based on the stated requirements for integrating scene-construction algorithms and noise contamination. The authors analyze the construction of a universal interface for scene-construction algorithms that avoids redundant container classes for parameters and repeated implementation of identical algorithms for different data types, while preserving the ability to vary the input parameters of the generation method. The universality of the interface for sample-generation procedures and its integration with noise-contamination procedures and data-integrity checks are ensured by organizing a weakly coupled hierarchy based on generalized functors using type lists. The authors define the main abstraction classes needed to model the main types of objects, with parameterized observation conditions, so that the accuracy of subsequent recognition can be analyzed. Distinctive features of the generator are support for point primitives and their groups, stochastic models of group objects and distortions, extensibility of object and noise model types, and the possibility of integration with user programs for analyzing the efficiency of recognition methods.
Keywords:
class hierarchies, software package architecture, scene analysis, small-sized object groups, group point objects, image recognition, pattern recognition, 3D scene modeling, model visualization, recognition efficiency analysis
Reference:
Krevetsky A.V., Chesnokov S.E..
Recognition of Partially Masked Group Point Objects by Most Similar Local Description of Their Form
// Cybernetics and programming.
2016. № 6.
P. 30-37.
DOI: 10.7256/2306-4196.2016.6.21445 URL: https://en.nbpublish.com/library_read_article.php?id=21445
Abstract:
Group point objects (GPO) are sets of isolated, background-contrasting points united by a common feature. In many applications, recognition is based on the mutual arrangement of the points making up a GPO. Implementation of the well-known methods for recognizing GPOs becomes difficult when only part of a GPO belonging to one of the known classes is within the observer's field of view. Possible deviations of the point objects from their standard positions further complicate the task of recognizing partially masked GPOs. In this research the authors recognize GPOs based on the most similar local description of the configuration of adjacent GPO elements. Cylindrical sections of an abstract vector field with sources at the GPO elements and a limited range of long-range interaction are used as local descriptions. Local descriptions of the GPO configuration are treated as discrete complex-valued codes: the modulus and argument of each code sample correspond to the strength and direction of the vector field. Measuring the similarity of such shape descriptions by the modulus of the dot product ensures invariance to the GPO observation angle and independence from GPO shifts in the image. The obtained recognition characteristics confirm the efficiency of the considered method for recognizing partially masked GPOs over a practically significant range of random fluctuations in the coordinates of GPO elements.
Keywords:
vector field, analysis of point scenes, spatial compactness, recognition of group objects, point field, associate solid image, cylindrical section of the field, group point object, invariance to rotation, complex coding
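A hedged illustration of the rotation-invariant similarity measure mentioned in the abstract above: the local description of a GPO element is a complex-valued code of its neighbours (modulus decaying with distance as a stand-in for the field strength, argument equal to the neighbour direction), and similarity is the modulus of the normalized dot product. The 1/r field law, the distance-based ordering, and the point sets are assumptions, not the authors' exact model.

```python
# Rotation-invariant comparison of local complex-valued GPO descriptions.
import numpy as np

def local_code(points: np.ndarray, idx: int) -> np.ndarray:
    """Complex code of point `idx`: one sample per neighbour, ordered by distance
    (an ordering that does not change when the whole group is rotated)."""
    d = np.delete(points, idx, axis=0) - points[idx]
    z = d[:, 0] + 1j * d[:, 1]
    code = np.exp(1j * np.angle(z)) / np.abs(z)        # "field" of each neighbour
    return code[np.argsort(np.abs(z))]

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """|<a, b>| / (|a||b|): equals 1 when b is a rotated copy of a."""
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(2)
pts = rng.normal(size=(8, 2))
phi = 0.7                                              # rotate the whole group
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
print(similarity(local_code(pts, 0), local_code(pts @ R.T, 0)))   # ~1.0
```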
Reference:
Rodzin S.I., El'-Khatib S.A..
Optimization of parameters of bio-inspired hyper heuristics in the problem of image segmentation
// Cybernetics and programming.
2016. № 5.
P. 228-242.
DOI: 10.7256/2306-4196.2016.5.18507 URL: https://en.nbpublish.com/library_read_article.php?id=18507
Abstract:
The subject of study is a new segmentation algorithm that improves the quality and speed of image processing compared with known algorithms. The authors consider the problem of segmentation of medical images and existing approaches to its solution. It is noted that segmentation is the most difficult part of processing and analyzing medical images of biological tissue, since it is necessary to select areas that correspond to different objects or structures on histological specimens: cells, organelles, and artifacts. Particular attention is paid to the particle swarm and k-means algorithms. In solving the problem, the authors use the methodology of swarm intelligence, cluster analysis, the theory of evolutionary computation, mathematical statistics, computer modeling and programming. The article proposes a new hyper-heuristic algorithm and its modification for solving the problem of medical image segmentation in order to improve image quality and processing speed. The authors present experimental results obtained on test data from a known set of medical MRI images using software developed by the authors. The optimal values of the coefficients that determine the behavior and efficiency of the hyper-heuristic and reduce the number of iterations of the algorithm are determined. The results demonstrate the advantage and confirm the efficiency of hyper-heuristic algorithms for solving the medical image segmentation problem in digital medical imaging systems.
Keywords:
cluster, pixel, k-means algorithm, particle swarm algorithm, hyper-heuristics, image segmentation, optimization, distance, experiment, modeling
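A minimal sketch of the k-means building block on pixel values, using scikit-learn; the paper's hyper-heuristic additionally uses a particle swarm to tune the coefficients that govern the search, which is not shown here. The synthetic image, the number of clusters, and the intensity-only feature are assumptions.

```python
# Pixel-level k-means segmentation (the clustering component of the approach).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
image = rng.random((64, 64))                     # stand-in for an MRI slice
pixels = image.reshape(-1, 1)                    # one feature per pixel: intensity

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
segmented = labels.reshape(image.shape)          # each pixel assigned to a class
print(np.bincount(labels))                       # pixels per segment
```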
Reference:
Bagutdinov R..
The task of optical flow simulation based on the dynamics of particle motion
// Cybernetics and programming.
2016. № 5.
P. 10-15.
DOI: 10.7256/2306-4196.2016.5.18981 URL: https://en.nbpublish.com/library_read_article.php?id=18981
Abstract:
In modern robotics the development of systems, algorithms and methods for the spatial orientation and navigation of robots remains one of the most urgent tasks. The article suggests an algorithm for simulating optical flow based on the dynamics of particle motion. Unlike conventional methods of calculation, the author focuses on solving the problem of determining the optical flow by using Fourier series computation methods and taking into account elements of the laws of hydrodynamics. This allows the problem of determining optical flow to be considered from a different, newer perspective. The theoretical research methods are based on digital image processing, pattern recognition, digital transformation and system analysis. In this problem, the optical flow calculation is reduced to determining the displacement of each point of the frame, which makes it possible to construct the velocity field of each light particle enveloping the selected object. The research results are applicable to the modernization of control, monitoring and image/video processing systems, enhancing the effectiveness of the work performed by providing more accurate vision. As a result, the portability and autonomy of robots is increased, which in turn can affect the economics of deploying and using robotic systems.
Keywords:
pattern recognition, computer vision, technical vision, velocity field, the dynamics of particle motion, optical flow, simulation, robotics, vector field, image processing
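The abstract above reduces optical flow to finding the displacement of each point between frames. As a hedged illustration of a Fourier-domain displacement estimate, the sketch below uses phase correlation on a pair of synthetic frames; this is a standard technique and not the author's particle-dynamics formulation, and the frame sizes and shift are assumptions.

```python
# Phase correlation: estimate the integer translation between two frames.
import numpy as np

def phase_correlation_shift(frame1: np.ndarray, frame2: np.ndarray):
    """Estimate the integer (dy, dx) translation taking frame1 into frame2."""
    F1, F2 = np.fft.fft2(frame1), np.fft.fft2(frame2)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12               # keep the phase only
    peak = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
    h, w = frame1.shape                          # map wrapped indices to signed shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(4)
f1 = rng.random((128, 128))
f2 = np.roll(f1, shift=(5, -3), axis=(0, 1))     # second frame shifted by (5, -3)
print(phase_correlation_shift(f1, f2))           # ~ (5, -3)
```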
Reference:
Bondarenko M.A..
Algorithm for combining sensory and synthesized video information in the aviation system of combined vision
// Cybernetics and programming.
2016. № 1.
P. 236-257.
DOI: 10.7256/2306-4196.2016.1.17770 URL: https://en.nbpublish.com/library_read_article.php?id=17770
Abstract:
The subject of the research is techniques for combining sensory and synthesized video information as applied to an aviation combined vision system. The use of such systems makes it possible to control manned and unmanned aerial vehicles under conditions of low visibility by combining video information from an on-board camera with video synthesized from an a priori given virtual terrain model. The on-board navigation system measuring the position and orientation of the aircraft has measurement errors, because of which the viewpoint of the image synthesized from the virtual terrain model does not match the viewpoint of the on-board camera. This is why a procedure for combining sensory and synthesized video information in the aviation combined vision system is needed. The study was conducted using mathematical and computer modeling of combined vision systems with both synthesized and real images of the underlying terrain. The novelty of the results lies in the universality of the developed algorithm, which allows video content to be combined with arbitrary data, along with the possibility of its practical implementation and high-quality combining. The developed algorithm for combining sensory and synthesized video information on the basis of typological binding and Kalman filtering provides sufficiently precise and reliable combining that meets the minimum requirements for aviation combined vision systems. The algorithm is universal, places low demands on the recorded scene images, has low computational complexity, and can be implemented in hardware and software based on modern avionics. When testing the algorithm the authors used accuracy characteristics at the level of consumer navigation devices, which are significantly inferior in accuracy to modern air navigation systems. This indicates the possibility of using the algorithm in inexpensive and compact user systems, as well as in mobile robots.
Keywords:
algorithms, vision systems, virtual reality, combining images, virtual model of the terrain, man-machine interfaces, aviation systems, airborne avionics, combined vision systems, synthesized vision system
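A hedged sketch of the Kalman filtering step mentioned in the abstract above: the per-frame misalignment between the on-board camera image and the synthesized view (as measured by the image-binding step) is smoothed by a constant-velocity Kalman filter. The state model, noise levels, and the simulated measurements are assumptions, not the paper's exact filter.

```python
# Constant-velocity Kalman filter smoothing a per-frame alignment offset.
import numpy as np

dt = 1.0 / 25.0                                  # frame period
F = np.array([[1.0, dt], [0.0, 1.0]])            # state: offset and its rate of change
H = np.array([[1.0, 0.0]])                       # only the offset is measured
Q = np.diag([1e-4, 1e-3])                        # process noise
R = np.array([[4.0]])                            # measurement noise (pixels^2)

x = np.zeros(2)                                  # state estimate
P = np.eye(2)                                    # estimate covariance

rng = np.random.default_rng(5)
true_offset = 10.0
for _ in range(50):
    z = true_offset + rng.normal(scale=2.0)      # noisy per-frame alignment measurement
    x = F @ x                                    # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                          # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
print("smoothed offset:", x[0])
```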
Reference:
Aleksanin S.A., Fedosovskii M.E..
Development of an automated procedure of digital image improvement using Laplace mask
// Cybernetics and programming.
2016. № 1.
P. 258-269.
DOI: 10.7256/2306-4196.2016.1.17851 URL: https://en.nbpublish.com/library_read_article.php?id=17851
Abstract:
The study is devoted to automated procedures for selecting a method and the relevant parameters for digital image improvement. In this article the authors select a filtering method based on the Laplacian mask to improve images. Since the image is represented by a discrete function, the authors use various discrete representations as approximations of the continuous two-dimensional Laplace operator. Additionally, the paper reviews high-frequency Laplacian filters (masks) that are frequently used in digital image processing. The research methodology is based on computational experiments; the software for these experiments was written in MATLAB. The novelty of the research lies in indicating a direction that will help reduce the time spent on image optimization while increasing the efficiency and reliability of software for digital image processing. This is confirmed by the results of numerical experiments carried out using the developed automated procedures for selecting and tuning the parameters of Laplacian masks for digital image processing.
Keywords:
Image processing, Image Enhancement, Image Smoothing, Filtering of images, Laplace mask, CAD, The automated procedure, MATLAB, The selection procedure, Laplace operator
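A short sketch of sharpening with a Laplacian mask, as discussed in the abstract above: one common discrete approximation of the Laplace operator is convolved with the image and subtracted from it. The test image, the 4-neighbour mask (the paper compares several such masks), and the coefficient c are assumptions; the authors work in MATLAB, this sketch uses Python.

```python
# Image sharpening with a discrete Laplacian mask: g = f - c * Laplacian(f).
import numpy as np
from scipy.ndimage import convolve

laplace_mask = np.array([[0,  1, 0],
                         [1, -4, 1],
                         [0,  1, 0]], dtype=float)   # 4-neighbour discrete Laplacian

def sharpen(image: np.ndarray, c: float = 1.0) -> np.ndarray:
    """Subtract the Laplacian response to enhance edges and fine detail."""
    lap = convolve(image, laplace_mask, mode="nearest")
    return np.clip(image - c * lap, 0.0, 1.0)

rng = np.random.default_rng(6)
img = rng.random((32, 32))
print(sharpen(img).shape)
```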
Reference:
Korobeinikov A.G., Fedosovskii M.E., Aleksanin S.A..
Development of an automated procedure for solving the problem of reconstructing blurry digital images
// Cybernetics and programming.
2016. № 1.
P. 270-291.
DOI: 10.7256/2306-4196.2016.1.17867 URL: https://en.nbpublish.com/library_read_article.php?id=17867
Abstract:
The study is devoted to methods for solving the problem of reconstructing blurred digital images. The authors give a mathematical formulation of the problem of removing blur from an image and present a Volterra integral equation of the first kind. Based on a set of methods for solving this integral equation, the authors propose an automated procedure for reconstructing blurred digital images. The Tikhonov regularization method is discussed in detail. Numerical experiments for different types of digital images are carried out, and the authors give recommendations for choosing the regularization parameter. The research methodology is based on methods for solving ill-posed problems, such as the task of removing blur from a digital image. The novelty of the research lies in the uniform approach to solving the problem of removing blur from a digital image; this approach has been applied to various kinds of digital images. As expected, the values of the regularization parameter selected in the numerical experiments differ for different types of images.
Keywords:
blur images, convolution, blind deconvolution, eliminate blurring, image enhancement, image processing, picture, discrepancy, Tikhonov regularization method, The automated procedure
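A hedged sketch of Tikhonov-regularized deconvolution in the frequency domain, a simplified form of the approach discussed in the abstract above and not necessarily the authors' exact numerical scheme. The blur kernel, the test image, and the candidate values of the regularization parameter alpha are assumptions.

```python
# Tikhonov-regularized deblurring: minimize ||h * f - g||^2 + alpha * ||f||^2.
import numpy as np

def tikhonov_deblur(blurred: np.ndarray, kernel: np.ndarray, alpha: float) -> np.ndarray:
    """Closed-form frequency-domain solution of the regularized deconvolution."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + alpha)
    return np.real(np.fft.ifft2(F))

# Synthetic test: blur a random image with a box kernel, then restore it.
rng = np.random.default_rng(7)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))

for alpha in (1e-1, 1e-2, 1e-3):                 # the paper selects alpha experimentally
    restored = tikhonov_deblur(blurred, kernel, alpha)
    print(alpha, float(np.mean((restored - sharp) ** 2)))   # restoration error
```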
Reference:
Aleksanin S.A..
Development of procedures of the automated choice of methods of the analysis and digital processing of images at the solution of problems of defectoscopy
// Cybernetics and programming.
2015. № 4.
P. 62-71.
DOI: 10.7256/2306-4196.2015.4.16331 URL: https://en.nbpublish.com/library_read_article.php?id=16331
Abstract:
In the article the author presents developed procedures for the automated selection of methods of analysis and digital processing of images, used when creating domain-specific subsystems, in particular for defectoscopy. The relevance of the problem is caused by the ever-increasing requirements for the technical characteristics of modern equipment used in medicine, space exploration, information technology, etc., which implies the urgency of creating functions for the automated selection of methods of digital image processing and analysis in defectoscopy. The described automated procedures for choosing methods of digital image processing and analysis were developed on the basis of modern digital image processing methods. The main result of the present research is that the use of these procedures, depending on the user's qualification, can significantly improve the quality of digital photographs. This leads to better identification of defects, which in turn makes it possible to meet all specified technical requirements for manufactured products.
Keywords:
morphological filtering, image processing, defectoscopy, Wiener filtering, Laplace mask, convolution, blur image, edge detection, smoothed image, deblurring
Reference:
Ipatov Y.A., Totskii A.A..
The study of images of dynamically changing scenes in the colorimetric space
// Cybernetics and programming.
2015. № 4.
P. 36-48.
DOI: 10.7256/2306-4196.2015.4.16158 URL: https://en.nbpublish.com/library_read_article.php?id=16158
Abstract:
The object of research is images of a dynamically changing scene of artificial origin against a complex, statistically inhomogeneous background. The subject of research is transformation methods and standard approaches for representing color digital images in three dimensions. The study covers almost all basic colorimetric spaces used when building object/background clusters. The samples are formed by the method of supervised learning. Calculation of objective indicators and comparison of subjective characteristics make it possible to determine the optimal color space for the subsequent synthesis of an effective segmentation algorithm for this class of images. In solving the task the authors used image processing techniques, probability theory, mathematical logic, mathematical statistics, mathematical analysis, linear algebra, mathematical modeling methods, the theory of algorithms, and object-oriented programming methods. The novelty of the study lies in determining the color space that provides optimal separation of the object/background clusters for images of the given class. The visual characteristics of the considered colorimetric space representations confirmed the computed objective indicators. The main conclusion of the study is that the RGB color space is the best choice for the synthesized color segmentation algorithm, since the representations of the objects and the background form weakly overlapping clusters.
Keywords:
dynamically changing scene, color space, color images, clustering, colorimetric model, convex hull, device-dependent color models, spot description of the scene, point description methods, recognition by points
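A hedged sketch of one possible objective indicator of object/background separability in different colorimetric spaces: labelled pixel samples are compared with the silhouette score in RGB and in HSV. The synthetic samples, the two spaces chosen here, and the use of the silhouette criterion are assumptions; the abstract does not name the authors' exact indicators.

```python
# Compare object/background cluster separability in two color spaces.
import numpy as np
from skimage.color import rgb2hsv
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(8)
object_px = np.clip(rng.normal([0.8, 0.2, 0.2], 0.05, size=(200, 3)), 0, 1)   # reddish object
backgr_px = np.clip(rng.normal([0.3, 0.5, 0.3], 0.15, size=(200, 3)), 0, 1)   # greenish background
rgb = np.vstack([object_px, backgr_px])
labels = np.r_[np.zeros(200, int), np.ones(200, int)]                          # supervised samples

hsv = rgb2hsv(rgb.reshape(-1, 1, 3)).reshape(-1, 3)
print("RGB separability:", silhouette_score(rgb, labels))
print("HSV separability:", silhouette_score(hsv, labels))
```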
Reference:
Korobeinikov A.G., Aleksanin S.A..
Methods of automated image processing in solving problems of magnetic defectoscopy
// Cybernetics and programming.
2015. № 4.
P. 49-61.
DOI: 10.7256/2306-4196.2015.4.16320 URL: https://en.nbpublish.com/library_read_article.php?id=16320
Abstract:
The subject of study in this paper is the developed automated method for selecting procedures for processing images gathered for magnetic defectoscopy. Methods based on the analysis of magnetic stray fields near defects after magnetization of the product are used to detect various defects, such as cracks, in the surface layers of steel parts. Where there is a discontinuity, a change in the magnetic flux is present; this effect is the basis of almost all existing methods of magnetic defectoscopy. One of the best-known methods of magnetic defectoscopy is the magnetic powder method: the surface of the magnetized part is covered with magnetic powder (dry method) or magnetic slurry (wet method). When fluorescent powders and suspensions are used, defects are significantly more visible in images of the inspected parts, which makes it possible to automate image processing. The paper presents an automated procedure for selecting image processing methods. The authors give an example of processing an image of a steel part to detect defects using the luminous lines that appear after applying the magnetic slurry. The study uses methods of image processing theory, mainly methods for extracting object boundaries and morphological image processing. The main result of the automated method is the opportunity to obtain expert information on the basis of which a conclusion can be made about the presence of defects in the tested product. In the example given in the article, the authors show that the lines are continuous and have no sharp changes of direction; therefore, the conclusion is made that there are no discontinuities (defects) in the product. In addition, the authors point out that the binary image can be inverted at the request of the researcher.
Keywords:
smoothed image, contrast adjustment, edge detection, morphological filtering, image enhancement, image processing, magnetic particle inspection, deblurring, blind deconvolution, convolution
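A hedged sketch of the kind of processing chain described in the abstract above for fluorescent magnetic particle indications: threshold the bright lines, clean them with a morphological closing, skeletonize, and check whether each indication is a single continuous component. The synthetic image, threshold, and footprint size are assumptions, not the authors' exact pipeline.

```python
# Extract luminous indication lines and check their continuity.
import numpy as np
from skimage.morphology import closing, skeletonize
from skimage.measure import label

# Synthetic "fluorescent line" image: a bright horizontal stripe plus noise.
rng = np.random.default_rng(9)
img = rng.normal(0.1, 0.05, size=(64, 64))
img[30:33, 5:60] = 0.9

binary = img > 0.5                                      # extract bright indications
cleaned = closing(binary, np.ones((3, 3), dtype=bool))  # fill small gaps in the lines
skeleton = skeletonize(cleaned)                         # one-pixel-wide centre lines
n_lines = label(skeleton).max()                         # continuous lines = connected components
print("continuous indication lines found:", n_lines)
```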
Reference:
Ipatov Y.A., Krevetsky A.V..
Methods of detection and spatial localization of groups of point objects
// Cybernetics and programming.
2014. № 6.
P. 17-25.
DOI: 10.7256/2306-4196.2014.6.13642 URL: https://en.nbpublish.com/library_read_article.php?id=13642
Abstract:
Modern computer vision systems use intelligent algorithms that solve a wide class of problems, from simple text recognition to complex spatial orientation systems. One of the main problems for developers of such systems is the selection of unique features that remain invariant under various kinds of transformations. The article presents a comparative analysis of methods for the detection and spatial localization of groups of point objects. The reviewed methods are compared by performance and efficiency at specified dimensions. As of today there are no universal approaches to determining such features, and their selection depends on the context of the problem being solved and on the registered observation conditions. Various kinds of descriptors, such as points, lines, corners and geometric primitives, can be selected as dominant features. The authors study algorithms for detecting groups of point objects based on the minimum spanning tree (MST) and on a model of the associated continuous image (ACI).
Keywords:
computer vision, object recognition, speed of the algorithm, modeling of point objects, associated continuous image, minimum spanning tree, group of point objects, point objects, intelligent algorithms, localization of groups
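A minimal sketch of the MST-based grouping mentioned in the abstract above: build a minimum spanning tree over the detected points, cut edges longer than a threshold, and treat the remaining connected components as groups of point objects. The point coordinates and the cut threshold are assumptions; the ACI-based detector is not shown.

```python
# Group point objects via a minimum spanning tree with long edges removed.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

rng = np.random.default_rng(10)
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(10, 2))
group_b = rng.normal(loc=[8, 8], scale=0.5, size=(12, 2))
points = np.vstack([group_a, group_b])

dist = squareform(pdist(points))
mst = minimum_spanning_tree(dist).toarray()
mst[mst > 2.0] = 0.0                                    # cut long inter-group edges
n_groups, labels = connected_components(mst, directed=False)
print("groups found:", n_groups)                         # expected: 2
```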
Reference:
Magomedov A.M..
Viewing a map with zoom and navigation elements
// Cybernetics and programming.
2013. № 5.
P. 37-41.
DOI: 10.7256/2306-4196.2013.5.9696 URL: https://en.nbpublish.com/library_read_article.php?id=9696
Abstract:
The article considers a schematic map of a region that can be displayed at various scales selected from a list of maps of the region. The article discusses two problems: map zooming and computation of shortest paths. The author considers the following problems: viewing any raster image of the map with minimal changes to existing software, and finding the shortest path between two inhabited localities specified interactively. To solve the first problem, the coordinates are recalculated ("zoom"). To solve the second problem, the article considers a "preparatory graph" with vertices of two types: temporary vertices at sequential points along each selected highway, and permanent vertices at the points where highways intersect. The edges of the graph are formed by the segments that connect adjacent points along each highway. Finally, one of the known algorithms for finding the shortest path in a graph is applied. The stored arrays of temporary intermediate vertices are used to visualize the shortest path that is found.
Keywords:
map of the region, scaling, shortcut, coordinates, canonical map, preparatory graph, Dijkstra's algorithm, temporary vertices, permanent vertices, graph of roads
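A compact sketch of the two operations described in the abstract above: recalculating point coordinates for zooming, and running Dijkstra's algorithm on a road graph whose vertices are highway points and intersections. The toy graph, scale factor, and coordinates are assumptions, not data from the article.

```python
# Coordinate rescaling ("zoom") and Dijkstra shortest paths on a road graph.
import heapq

def zoom(x: float, y: float, scale: float, cx: float = 0.0, cy: float = 0.0):
    """Recalculate map coordinates around centre (cx, cy) for a new scale."""
    return cx + (x - cx) * scale, cy + (y - cy) * scale

def dijkstra(graph: dict, start: str) -> dict:
    """Shortest distances from `start` in a weighted adjacency-dict graph."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return dist

# Tiny road graph: localities A, C and intersection B (edge weights in km).
roads = {"A": {"B": 12.0}, "B": {"A": 12.0, "C": 7.0}, "C": {"B": 7.0}}
print(dijkstra(roads, "A")["C"])        # 19.0 km
print(zoom(10.0, 20.0, scale=2.0))      # (20.0, 40.0)
```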
Reference:
Mezhenin A.V., Izvozchikova V.V..
Methods of the normal vectors construction in tasks of objects identification
// Cybernetics and programming.
2013. № 4.
P. 51-58.
DOI: 10.7256/2306-4196.2013.4.9358 URL: https://en.nbpublish.com/library_read_article.php?id=9358
Abstract:
The article deals with methods for calculating normal vectors in the similarity analysis of polygonal models of arbitrary topological type. Such studies can be used in evaluating the quality of simplification of a polygonal mesh and the accuracy of reconstruction of three-dimensional models in photogrammetry. To determine the similarity of three-dimensional (polygonal) objects, the author proposes to use approaches of general topology and the Hausdorff metric. The author points out that the most important step in calculating this metric is constructing the normal vectors to the surface. To evaluate the given methods and to clearly visualize the normal vectors, the author developed m-functions in the MATLAB Image Processing Toolbox (IPT) environment. The study confirms the correctness of the chosen direction for analyzing the similarity of polygonal models of arbitrary topological type. The proposed approach can be applied to evaluating the quality of algorithms for recognition and reconstruction of 3D models and to evaluating the quality of simplified polygonal models.
Keywords:
design, normal vector, identification of objects, algorithm, sensor, polygon model, topological space, polygon mesh, computer graphics, calculation
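A hedged illustration of the basic step discussed in the abstract above: constructing normals for a triangle mesh, with face normals from cross products and vertex normals as the normalized average of adjacent face normals. The authors work with MATLAB m-functions; this numpy sketch and the toy mesh are assumptions.

```python
# Face and vertex normals of a triangle mesh.
import numpy as np

def face_and_vertex_normals(vertices: np.ndarray, faces: np.ndarray):
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    fn = np.cross(v1 - v0, v2 - v0)                       # unnormalized face normals
    fn /= np.linalg.norm(fn, axis=1, keepdims=True)
    vn = np.zeros_like(vertices)
    for f, n in zip(faces, fn):
        vn[f] += n                                        # accumulate on each vertex
    vn /= np.linalg.norm(vn, axis=1, keepdims=True)
    return fn, vn

# Toy mesh: two triangles forming a unit square in the z = 0 plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
fn, vn = face_and_vertex_normals(verts, tris)
print(fn)       # both faces: (0, 0, 1)
print(vn)       # all vertices: (0, 0, 1)
```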
Reference:
Kharitonov A.V..
Overview of biometric identification methods
// Cybernetics and programming.
2013. № 2.
P. 12-19.
DOI: 10.7256/2306-4196.2013.2.8300 URL: https://en.nbpublish.com/library_read_article.php?id=8300
Abstract:
The article lists the main biometric parameters and reviews the identification methods that are widely used in Russia. Biometric identification helps to solve the problem of unifying all of a user's existing passwords into one and applying it everywhere. The process of extracting fingerprint features begins with an assessment of image quality, after which the orientation field of the ridges is calculated, in which each pixel represents the local ridge direction. Facial recognition is the most socially acceptable method of biometric identification. Iris identification consists of image acquisition with localization of the iris, followed by formation of the iris code. Type I and Type II error rates can be used as the two main characteristics of any biometric system. Identification based on the iris pattern of the eye is one of the most reliable biometric methods, and the contactless method of data acquisition makes it easy to use in various areas.
Keywords:
biometric identification, iris, face recognition, fingerprints, personal identification, biometrics, algorithm, database, biometric methods, password
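A hedged sketch of the two error rates mentioned in the abstract above: for a chosen score threshold, the false rejection rate (genuine users rejected) and false acceptance rate (impostors accepted) are estimated from samples of match scores. The score distributions, the threshold, and the mapping of the rates to Type I/Type II errors are assumptions for illustration only.

```python
# Estimate false rejection and false acceptance rates from match scores.
import numpy as np

rng = np.random.default_rng(11)
genuine_scores = rng.normal(0.8, 0.08, size=1000)    # user matched against own template
impostor_scores = rng.normal(0.4, 0.10, size=1000)   # user matched against others' templates

threshold = 0.6
frr = np.mean(genuine_scores < threshold)            # false rejection rate
far = np.mean(impostor_scores >= threshold)          # false acceptance rate
print(f"FRR = {frr:.3f}, FAR = {far:.3f}")           # trade-off is set by the threshold
```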