
Software systems and computational methods

On clarifying the quality control of software products

Tikhanychev Oleg Vasilyevich

ORCID: 0000-0003-4759-2931

PhD in Technical Sciences

Deputy Head of Department in the Office of Advanced Development, Technoserv Group 

111395, Russia, Moscow, Yunosti str., 13

tow65@yandex.ru

DOI: 10.7256/2454-0714.2022.2.37985

EDN: ZXYEKP

Received: 03-05-2022

Published: 05-07-2022


Abstract: Despite extensive experience in control automation, the development of automated systems, including their application software, still faces many problems. The process of developing software for automated control systems is therefore chosen as the subject of research, and a model of quality control for this process as its object. At present, the legal regulation of software quality control rests on a paradigm under which program quality is checked exclusively for compliance with the requirements of the contract. As practice has shown, this paradigm no longer fully meets modern conditions, because it provides incomplete quality control: programs also need to be verified against the customer expectations formulated at the system design stage. General scientific methods of analysis are used to find ways of solving the problem. Based on an analysis of the testing methods and models currently in use, proposals for refining the paradigm of software evaluation and control are synthesized. The article formulates a scientific and practical problem and suggests a possible approach to its solution: refining the currently used quality assessment paradigm by moving from a "rigid", preset model to an extended quality assessment model that takes into account not only the requirements of the contract but also the conditions for their implementation. The novelty of the proposed approach is that solving the formulated task will improve the overall quality of control, and hence the safety and effectiveness of programs, through the transition to an extended dynamic testing model for the software under development, implemented within the refined quality assessment paradigm.


Keywords: automated control system, decision support, software, software quality, program quality assessment, quality assessment paradigm, quality assessment model, quality management methodology, quality assessment principle, program testing


Introduction

The most important issues arising in the development of application programs include quality control, which underpins the effectiveness of decisions and the safe functioning of the systems controlled by the software. The need for quality control of developed programs stems from the objective fact that software errors occur, for the following reasons:

· incomplete description of the customer's requirements in the terms of reference (TOR) for the work and in the documents detailing it;

· errors made by analysts and algorithm designers when interpreting the requirements of the TOR and other documents;

· system, technical and logical errors made while developing program code, database structures and the queries to them.

While software was used mainly in highly specialized systems, these factors were not critical: errors were, as a rule, detected and eliminated at the testing stage or during operation without causing significant problems [1,2,3].

In recent years, however, the relevance of the quality control problem has grown significantly: errors are becoming more frequent and their consequences more noticeable. The reason is that software of all kinds is being introduced into every area of activity: from the control of simple household devices within the Internet of Things (IoT) to distributed decision support systems such as ERP (Enterprise Resource Planning) and control software for autonomous robotic systems.

Incidents caused by the incorrect functioning of software in complex technical systems illustrate this thesis. Examples include the loss of the Russian Fregat upper stage in December 2017, caused by a software error that was not detected during testing and manifested itself after two decades of operation [4], and the failure of the automated air defence system of the British frigate HMS Broadsword (F88) during a battle with Argentine aircraft in May 1982, when a pair of aircraft was first perceived as a single target and then registered again as two separate ones [5]. There are many similar examples generated by software errors: the Patriot system's failure to intercept a detected Iraqi Scud missile on 25 February 1991, caused by the previously undetected accumulation of an internal timer rounding error; the failure of the electronics of F-22 Raptor fighters when crossing the international date line; the failure of the information control system of the American missile cruiser USS Yorktown (CG-48) in 1997 due to a division-by-zero error; and the incorrect operation of the MCAS (Maneuvering Characteristics Augmentation System) automation of Boeing 737 MAX aircraft, which caused several aviation disasters. And these are only the most widely publicized cases. More broadly, according to the Department of Trade and Industry of the United Kingdom (DTI), when information technologies are introduced at enterprises, profit losses due to inadequate software quality can reach 20% of total losses.
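
To make the timer-related failure mode concrete, the following minimal sketch (an illustration only, not the fielded code, with an assumed 24-bit register width) shows how a clock kept as a count of 0.1-second ticks and converted to seconds through a chopped fixed-point approximation of 0.1 drifts as uptime grows:

```python
# Illustration only (not the fielded Patriot code): a clock kept as an integer
# count of 0.1 s ticks and converted to seconds with a chopped fixed-point
# approximation of 0.1 drifts steadily as uptime grows.
FRACTIONAL_BITS = 24  # assumed register width, for illustration only

# 0.1 has no finite binary expansion; chopping it introduces a small constant error
TENTH_CHOPPED = int(0.1 * 2**FRACTIONAL_BITS) / 2**FRACTIONAL_BITS

def clock_drift_seconds(hours_up: float) -> float:
    """Difference between true elapsed time and the computed clock value."""
    ticks = int(hours_up * 3600 * 10)          # number of 0.1 s ticks
    true_seconds = ticks / 10
    computed_seconds = ticks * TENTH_CHOPPED   # what the software would report
    return true_seconds - computed_seconds

for hours in (1, 10, 100):
    print(f"{hours:>3} h uptime -> clock drift {clock_drift_seconds(hours):.4f} s")
# The drift grows linearly with uptime; for a fast ballistic target, even a
# fraction of a second shifts the predicted position by hundreds of metres.
```

The point of the sketch is that such a defect is invisible in short test runs and grows only with operational uptime, which is exactly the kind of condition a purely requirements-driven test plan may never exercise.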

It should also be noted that all of the listed systems had passed a full cycle of acceptance tests. In every case the test results were judged positive, yet a software failure nevertheless occurred during operation.

The analysis shows that the main causes of this situation may be:

· shortcomings of the testing methods or tools themselves;

· incorrect organization of testing or incorrect use of testing methods and tools.

The first cause can be considered unlikely: a sufficiently large number of successful test campaigns have been carried out with existing methods and the tools implementing them, in which the vast majority of errors were identified.

The second cause is more likely. It can arise either from incorrect setting of the goals and objectives of testing or from insufficient coverage of the functionality of the software under development. These factors, in turn, can be generated by the modern quality control paradigm, which rests on an axiomatic predicate: that all the documents governing software development are sufficiently complete and absolutely correct, and that checking the software for compliance with those documents is the main purpose of quality control. As practice shows, this predicate is not always true, which indicates the need to refine approaches to software quality control, first of all with respect to setting its goals and selecting its tasks.

 

1. Currently used methods of quality control of software products

Currently, when organizing software development management (software quality management), it is customary to rely on two main quality management methodologies: TickIT (TickIT plus) and CMMI (Capability Maturity Model Integration).

The first operates within the general ISO 9001:2000 quality management system as applied to software development projects. The second, CMMI, provides recommendations for improving development processes. The two methodologies do not contradict each other; the standards implementing them are regarded by specialists as complementary.

As part of the implementation of these methodologies, and in accordance with the regulatory and technical documentation [6,7], software undergoes a series of acceptance tests before being introduced into a system. The scope and content of the checks are determined by the regulatory and technical documentation (NTD); in Russia these are GOST R 15.301-2016 "System of product development and production. Production and technical products. The procedure for the development and production of products", GOST 34.602-89 "Information technology. A set of standards for automated systems. Technical specification for the creation of an automated system", GOST 34.603-92 "Information technology. Types of tests of automated systems", GOST 19.301-79 "Unified system of software documentation. Test program and methodology. Requirements for content and design", as well as RD 50-34.698-90 "Guidelines. Information technology. A set of standards and guidance documents for automated systems. Automated systems. Requirements for the content of documents", which was cancelled in 2019 but has not yet been replaced. In leading foreign countries, quality models are built according to the ISO 900x series of standards (ISO 9001 "Quality management systems. Requirements", ISO 9002 and ISO 9003), direct analogues of which are incorporated into domestic GOSTs such as GOST R ISO 9001-2015.

Within the framework of these standards, various testing tools and methods are used to carry out the checks [8,9], and their effectiveness has been repeatedly confirmed in practice. In theory, the existing range of testing tools and methods should fully ensure quality control of program functioning under any conditions [10,11]. But, as the examples at the beginning of the article show, this is not what happens.

Analysis of the applied testing methods, tools and technologies shows that the purpose of all types of control is formal verification of compliance with the requirements of the design documentation developed when the initial data on the system's appearance and functionality are formed, above all the terms of reference and the statements for the development of tasks [12]. When testing is organized and conducted, the quality and completeness of these documents are, as a rule, not questioned, and no task is set to manage the quality indicators and evaluation criteria when the customer's requirements and expectations change or when the operating conditions of the system under development are clarified. As practice has shown, this approach can leave development errors undetected, in some cases critical ones.

 

2. Possible sources of testing errors

Based on an analysis of the applied regulatory and technical documentation and the testing methods implemented under it, it can be assumed that one of the main sources of the problem of assessing the quality of developed programs is the existing evaluation paradigm itself: the documents defining software development are accepted as axioms, and verification is performed exclusively against the requirements of those documents.

The quality assessment paradigm implemented within such a model rests on the following basic assumptions:

· already at the initial stage of software development, the customer knows exactly what functionality he expects from the final product;

· before development begins, the customer has correctly defined the limits of the system's applicability, its assumptions and its constraints;

· the algorithm designers and programmers of the developer organization interpret the customer's requirements, as set out in the terms of reference and the statements for the development of tasks, absolutely correctly and without distortion.

Under these assumptions it is logical that the quality of a developed program is considered satisfactory if it meets the requirements of the terms of reference. In other words, in the existing paradigm, compliance with the terms of reference is treated as a necessary and sufficient condition for confirming the quality of the developed software product.

But, as practice shows, there are at least three potential sources of development and testing errors that call into question the effectiveness of the currently used software quality assessment paradigm.

The first is the periodically arising situation in which the customer does not fully describe the content of the automated processes when writing the TOR and the statements for the development of programs. This is understandable: the volume of the documents and the time available to write them are limited, so their authors may omit descriptions of processes and functions, both unintentionally and deliberately, for those they consider obvious and not worth separate mention. As a result, these functions drop out of the process of checking compliance with the terms of reference. Such problems can, of course, be addressed organizationally by introducing recommendations on the required level of detail in the source data. But the volume of the TOR, although not limited by regulatory documents, is not infinite: increasing it improves development quality only up to a certain point, beyond which information overload sets in.

The second source is similar to the first. It arises when the tester, while compiling the verification program, skips some customer requirements as typical, misinterprets them, or describes them insufficiently. The result is usually the same as in the previous case.

And third, automating a control system, like any system-level action, can, in accordance with systems principles, give the developed software and hardware system new properties that did not previously exist and that, accordingly, are described neither in the terms of reference nor in the statements for the development of programs. With the current approach to organizing testing, these properties simply will not be checked.

To explain how such problems arise and manifest themselves, consider the example of developing a cargo transportation management program for a motor transport company, or more precisely, the process of forming requirements for it and monitoring their implementation at the main stages of developing an automated enterprise management system (Table 1).

 

Table 1 – Example of requirements formation and control

| Type of requirements | Requirements specified by the customer in the terms of reference | Additional requirements formed during detailing, while developing the statements for task development | Requirements needed to implement the basic and additional functionality | System requirements arising from the synergistic effect |
|---|---|---|---|---|
| Need for verification under the existing paradigm | Subject to mandatory verification in accordance with GOST 19.301-79, GOST 34.603-92 and GOST R 15.301-2016 | Subject to verification if documented in accordance with the GOST 34 series and RD 50-34.698-90 | Verification is not mandatory in the existing paradigm; such requirements can be checked within the FURPS+ methodology | May remain unchecked when the existing quality model is used |
| Content of requirements | Efficiency of cargo delivery; low cost of transportation; cost-effectiveness of the company's operation | Speed mode optimization; optimization of traffic routes; reduction of delivery routes; increasing the load factor of vehicles | Taking into account the characteristics of the road network; accounting for the structure of the road network; increasing the carrying capacity and capacity of vehicles; optimization of the model range; reduced fuel consumption; conformity of the type of vehicles to the transported goods; reducing downtime; optimization of the vehicle maintenance process | Comprehensive optimization of the distribution of routes by loading and unloading points, taking into account the characteristics of the road network and the characteristics of the vehicles |

Table 1 illustrates the consequences of insufficiently detailed requirements for the program under development and of failing to take the synergistic effect into account. In addition to these situations, the subjective problem described earlier also arises periodically in practice: the customer omits a number of requirements from the TOR and the statements, considering them obvious. In the development of this road transport optimization task, these turned out to be the requirements to check that vehicles are fully refuelled before departure and that qualified drivers are available. As a result, these requirements were not included in the test program and procedures, and the need to implement them was identified only at the trial operation stage, which required additional work to rework an already tested program and extended the implementation schedule. In other words, quality control methods based on the existing paradigm do not provide full verification of the software under development.
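
Purely to make the omission concrete, here is a minimal sketch of the kind of pre-dispatch check the TOR never asked for; all type names, fields and thresholds are hypothetical and serve only to show how the "obvious" requirements could have been made explicit and testable:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    fuel_level: float            # fraction of a full tank, 0.0-1.0
    capacity_tons: float

@dataclass
class Driver:
    licence_categories: set      # e.g. {"C", "CE"}

@dataclass
class Trip:
    cargo_tons: float
    required_licence: str
    planned_fuel_fraction: float # share of a tank the route is expected to need

def pre_dispatch_problems(vehicle: Vehicle, driver: Driver, trip: Trip) -> list:
    """Return the implicit preconditions violated for this trip (empty list = OK)."""
    problems = []
    if vehicle.fuel_level < trip.planned_fuel_fraction:
        problems.append("vehicle is not refuelled for the planned route")
    if trip.required_licence not in driver.licence_categories:
        problems.append("driver lacks the required licence category")
    if trip.cargo_tons > vehicle.capacity_tons:
        problems.append("cargo exceeds vehicle capacity")
    return problems

# Example: a half-full tank and a route that needs 70% of a tank is exactly the
# kind of case that never appears in a TOR-driven test plan.
print(pre_dispatch_problems(
    Vehicle(fuel_level=0.5, capacity_tons=20.0),
    Driver(licence_categories={"C"}),
    Trip(cargo_tons=18.0, required_licence="CE", planned_fuel_fraction=0.7),
))
```

If such checks were traced to explicit requirements, they would fall under the mandatory verification column of Table 1 instead of surfacing only at trial operation.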

 

3. The proposed approach to solving the problem

The most obvious approach to solving the formulated problem is to refine the applied paradigm for evaluating the quality of software products. Given the structure of the quality models used within this paradigm and the principles of their implementation, this can be achieved mainly through organizational measures. The list of these measures and the methodology of their implementation are determined by the content of the proposed quality control paradigm and by how it differs from the one currently in use.

First, the paradigm itself needs to be formally refined: in the regulatory and technical documentation, the declared approach to organizing quality control should be changed from the existing "rigid" one, aimed solely at verifying the requirements of the TOR, to a "flexible" one that allows the system of verifiable indicators and their feasibility criteria to be supplemented and extended during program development under a simplified procedure, without the full cycle of requirement re-approval described in the existing NTD.

Second, as part of implementing the refined paradigm, it is proposed to expand the scope of quality control from formal verification of the TOR requirements to an additional analysis of whether they can actually be fulfilled. That is, it is proposed to check both the requirements themselves and the conditions necessary for their fulfilment that are not explicitly described in the TOR: the consequences of the synergistic effect, and the "derived" indicators produced by transforming high-level requirements into more detailed properties. The former can be determined through additional study of the system-level relationships of the software under development; the latter can logically be handled according to the principles of the well-known FURPS+ requirements specification (Functionality, Usability, Reliability, Performance, Supportability, plus additional factors) [13].
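
As a hedged illustration of what such "derived" indicators might look like for the transport example, the sketch below decomposes one high-level TOR requirement into FURPS+ categories; the specific indicators and thresholds are assumptions made for the example, not values taken from the article or the standard:

```python
# Hypothetical decomposition of one high-level TOR requirement into derived,
# testable indicators grouped by FURPS+ category.
DERIVED_INDICATORS = {
    "TOR requirement": "efficiency of cargo delivery",
    "Functionality":  ["routes are built over the actual road network, not straight-line distances"],
    "Usability":      ["a dispatcher can build a daily route plan without developer assistance"],
    "Reliability":    ["route recalculation succeeds after the loss of one traffic-data source"],
    "Performance":    ["a route plan for 200 vehicles is computed within 60 seconds"],
    "Supportability": ["road-network data can be updated without redeploying the system"],
    "+ (constraints)": ["the system runs on the customer's existing server hardware"],
}

def unverified_indicators(indicators: dict, test_plan: set) -> list:
    """Derived indicators that appear in no test case and therefore escape verification."""
    derived = [item for key, group in indicators.items()
               if key != "TOR requirement"
               for item in group]
    return [item for item in derived if item not in test_plan]

# With a TOR-only test plan, every derived indicator is reported as unverified.
print(unverified_indicators(DERIVED_INDICATORS, test_plan=set()))
```

The point is not the particular thresholds but that each derived indicator becomes an explicit, checkable item rather than an implicit expectation.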

Third, with the previous point in mind, implementing a "flexible" quality assessment paradigm will most likely require expanding the range of testing methods used and rebalancing their proportions: from checking the formal features of individual modules and programs described in the terms of reference (formal testing of individual functions) towards system testing. This will require involving analysts and, in some cases, domain experts (end users) in the checks. Moreover, the latter should be involved not at the trial operation stage, as is currently regulated, but at all stages of preliminary and acceptance testing. The resulting increase in the volume of checks is not critical, since it will reduce the amount of subsequent rework, including by preventing the accumulation of errors during development.

While leading foreign software developers are at least partially implementing an extended quality control model, managing requirements within agile methodologies or supplementing them with approaches similar to FURPS+, the domestic NTD relies exclusively on a "rigid" quality model focused solely on checking the requirements of the terms of reference. To remedy the situation, measures should be developed that draw both on foreign experience and on domestic work grounded in systems theory.

The comprehensive implementation of the proposed measures will make it possible to form and apply an "extended" software quality control paradigm (Figure 1), which includes the following (an illustrative sketch follows the list):

· control over the fulfilment of the requirements of the terms of reference, as is currently done;

· identification and feasibility control of the additional conditions necessary to meet the requirements of the terms of reference;

· identification and safety verification of the synergistic-effect manifestations that arise when the system requirements of the TOR are implemented.
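
A minimal sketch of how such an extended check register might be organized is given below; the class and field names are assumptions introduced only to show that every check is traced either to a TOR clause, to a condition needed to satisfy one, or to a system-level property absent from the TOR:

```python
from dataclasses import dataclass, field
from enum import Enum

class CheckKind(Enum):
    TOR_REQUIREMENT = "explicit requirement of the terms of reference"
    IMPLEMENTATION_CONDITION = "condition needed to fulfil a TOR requirement"
    SYNERGISTIC_PROPERTY = "system-level property absent from the TOR"

@dataclass
class Check:
    description: str
    kind: CheckKind
    traced_to: str = ""   # the TOR clause or parent requirement, if any

@dataclass
class TestPlan:
    checks: list = field(default_factory=list)

    def coverage_by_kind(self) -> dict:
        """How many checks of each kind the plan contains; zeros reveal blind spots."""
        counts = {kind: 0 for kind in CheckKind}
        for check in self.checks:
            counts[check.kind] += 1
        return counts

plan = TestPlan(checks=[
    Check("delivery time meets the TOR target", CheckKind.TOR_REQUIREMENT, "TOR 4.1"),
    Check("vehicle is refuelled before dispatch", CheckKind.IMPLEMENTATION_CONDITION, "TOR 4.1"),
])
print(plan.coverage_by_kind())   # a zero SYNERGISTIC_PROPERTY count exposes a blind spot
```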

Introducing the refined paradigm for organizing quality control potentially solves an important scientific and practical problem and promises to change the effectiveness of testing fundamentally, by significantly increasing the completeness of checks and the reliability of their results. Within the new paradigm, the proposal is not merely to adjust the purpose of testing: the refined paradigm implies extending the range of checks to the full functionality of the software under development, not limiting them to the area covered by the requirements of the TOR.

Figure 1. Graphical interpretation of the refinement of the software quality control paradigm

 

Introducing the refined paradigm will require several preliminary measures:

· R&D to develop methods for decomposing customer requirements into the particular requirements needed to perform the main tasks;

· development of a methodology for identifying system-level requirements and analysing their impact on the functionality and security of the software under development and of the systems it controls;

· adjustment of the regulatory and design management documentation with respect to the structure of the software life cycle and the procedure for organizing and conducting tests.

The implementation of these measures will, of course, require certain material and time costs.

On the other hand, analysis of Table 1 and of the experience of developing software and hardware complexes shows that, under the existing paradigm, up to 20-25% of particular functional requirements may escape testing. The cumulative effect of this factor reduces the quality of testing, increases the amount of rework needed during program testing and trial operation, and delays development. As the analysis confirms, these problems are generated precisely by the accepted quality assessment paradigm, which shapes the assessment model and the refinement of its indicators.

Introducing the new paradigm, given the expected share of undetected software errors (the last two columns of Table 1) and their impact on profit losses due to software quality problems according to the DTI data, could potentially save up to 5% of the costs of an automated enterprise (roughly, a 20-25% share of unchecked requirements applied to the up-to-20% profit loss attributed to software quality gives an estimate on the order of 4-5%). And these are only the material losses. In situations where software errors lead to human casualties, as in the Boeing 737 MAX crashes, the cost of missed errors is difficult to overestimate. Most importantly, these savings, and the prevention of potential accidents and catastrophes, can be achieved without additional material costs, through organizational measures implementing the refined quality control paradigm.

 

Conclusion

The analysis of the causes of software errors carried out in this article has shown that a significant proportion of them arise from problems in the organization of testing, which prevent errors made at the development stage from being identified before critical malfunctions appear in operation. A significant part of these problems, as practice has shown, is generated by the existing paradigm for evaluating software quality, which treats fulfilment of the requirements of the terms of reference as a necessary and sufficient condition for quality assurance.

The change of paradigm proposed in the article, under which fulfilment of the TOR requirements remains only a necessary condition, while the sufficient conditions are supplemented by the additional functionality needed to implement those requirements and by the safety of system-level manifestations, can substantially increase the probability of detecting errors at all stages of developing programs and the technical systems they control. Moving away from the existing paradigm will make it possible to build a dynamic quality control process similar to the one currently used to manage customer requirements in agile methodologies and in the formation of FURPS+ requirements management models.

It should be noted that the proposed change in the quality control paradigm is not a literal borrowing of "flexible" approaches to managing customer requirements. It is a broader approach to evaluation, in which testing checks not only the explicit requirements but also the conditions necessary for their implementation, as well as synergistic factors, and this constitutes a new solution to the well-known problem of forming a model for evaluating the quality of software products.

References
1. Tikhanychev O.V. On the quality indicators of software for automated control systems. Software Systems and Computational Methods, 2020, no. 2, pp. 22-36. DOI: 10.7256/2454-0714.2020.2.28814 (in Russian).
2. Wagner S. Operationalized product quality models and assessment. The Quamoco approach. Information and Software Technology, 2015, no. 62, pp. 101-123. DOI: 10.1016/j.infsof.2015.02.009.
3. The Sunday paper (tech ethics edition). Defense Tech, January 2008 [Electronic resource]. URL: http://defensetech.org/2008/01/27/ (date accessed: 30.01.2008).
4. Tikhanychev O.V. On improving indicators for assessing the decision support systems' software quality. IOP Conference Series: Materials Science and Engineering, 2020, no. 919, 052009. DOI: 10.1088/1757-899x/919/5/052009.
5. Woodward S., Robinson P. One Hundred Days: The Memoirs of the Falklands Battle Group Commander. Harper Collins, London, 1997, 213 p.
6. Wright H.K. ESEC/FSE Doctoral Symposium '09: Proceedings of the Doctoral Symposium for ESEC/FSE. August 2009, pp. 27-28. DOI: 10.1145/1595782.1595793.
7. Makartsev L.V. et al. Rational organization of the applied software development process as a prerequisite for successful automation of decision support. Software Products and Systems, 2017, no. 4, pp. 706-710. DOI: 10.15827/0236-235X.120.706-710 (in Russian).
8. Strubalin P.V., Fatyanova A.A. Quality management for software. Vestnik SGSEU, 2019, no. 2 (76), pp. 108-111 (in Russian).
9. Bragina T.I., Tabunshchik G.V. Comparative analysis of iterative models of software development. Radioelectronics, Informatics, Management, 2010, no. 2 (23), pp. 130-138 (in Russian).
10. Nikulina I.E., Nikolaenko V.S. The formation and development of the concepts of project management and risk management. Public Administration. Electronic Bulletin, June 2018 (in Russian).
11. Bugorsky V.N., Goloskokov K.P. Quality management in the process of testing electronic equipment. Journal of Applied Informatics, 2011, no. 1 (31), pp. 50-60 (in Russian).
12. Myers G., Badgett T., Sandler C. The Art of Software Testing, 3rd ed. Moscow: Dialektika Publ., 2012, 272 p. (in Russian).
13. Wiegers K. Creating a Software Engineering Culture. Addison-Wesley, 2013, pp. 211-212.

Peer Review


The reviewed work is devoted to the topical area of quality control of software products in complex technical systems. The author focuses on the catastrophic consequences of software errors that were not detected at the testing stage in defence and weapons systems, caused, for example, by rounding errors. It is noted that the decline in software quality is caused not by shortcomings of the testing methods themselves but by their incorrect organization. The relevance of the work lies in forming approaches to software quality control, reducing decision-making errors and preventing the negative consequences of errors in software functioning. The author analyses the requirements of quality standards and regulatory documentation, considers the means and methods of testing, and notes the formal character of checking the developed software against the requirements of the assignment. A strength of the work is the analysis of the sources of testing errors. The author focuses on errors related to requirements that the customer considers obvious and therefore omits from the assignment to the developer, which leads to errors in developing the software algorithm. The scientific novelty is difficult to determine.

Style of presentation: the article uses professional terminology, but the text lacks formulas and quantitative criteria for evaluating the results, and there are no illustrations. The structure of the article generally meets the requirements for a scientific publication. The bibliography contains 13 domestic and foreign sources, including peer-reviewed publications.

Remarks. The article does not mention the typical duration of the stages of creating software products or the share of the testing stage in the total duration of software development. The criteria for selecting the elements of the terms of reference and the level of detail of its preparation are not discussed. Table 1 gives an example from a specific industry, but the text does not justify this choice: are the quantitative criteria typical of software in general or only of this specific case? A significant part of the article is devoted to the organization of testing, but no algorithm for reducing the probability of errors is provided. What is the typical duration of the software error detection stage? The article is analytical in nature and lacks an experimental part, yet quantitative estimates are given for the example, and it is not entirely clear how they were obtained. It is advisable to supplement the article with an analysis of existing quantitative estimates or with standard or author-proposed error-finding algorithms. Is it possible to reduce the probability of errors through qualitative research with the target audience? The conclusion should clearly formulate the solutions, recommendations or requirements for the source data proposed by the author. The bibliography should be formatted in accordance with the requirements of the journal.

The article is of interest to specialists in the field of software development, product testing and solving applied problems using programming methods. The article needs revision, after which it can be published in the journal "Software Systems and Computational Methods".