
Software systems and computational methods

On clarifying the concept of "trustworthiness" of artificial intelligence systems

Tikhanychev Oleg Vasilyevich

ORCID: 0000-0003-4759-2931

PhD in Technical Sciences

Deputy Head of the Department of Advanced Development Management, "Technoserv Group"

13 Yunosti str., Moscow, 111395, Russia

to.technoserv@gmail.com

DOI: 10.7256/2454-0714.2024.3.44097

EDN: JOPHLF

Received: 22-09-2023

Published: 05-10-2024


Abstract: The subject of the study is the concept of "trustworthiness" of the artificial intelligence that controls robotic systems of varying degrees of autonomy. The relevance of the chosen subject of research (the principles of using robotic systems for various purposes, including group use) and of its object (the algorithmic problems arising in the implementation of group action algorithms) is determined by the existing contradiction between the need for the joint use of robotic systems, primarily autonomous ones, and the complexity of implementing this requirement in software. The introduction of robotics gives rise to technological problems related to the efficiency and safety of the algorithmic support of autonomous and controlled robotic systems. These problems manifest themselves as application errors that reduce the effectiveness of joint actions. In robotics, the main potential cause of such situations is the insufficient effectiveness of existing group control algorithms, which in turn stems from the low level of study of the problem. The article formulates a list of typical situations that determine the use of autonomous and controlled robots in a group with a leader. On the basis of the proposed classification, possible algorithms for controlling movement within the group are analyzed, covering both calculations for target maneuvering and calculations for ensuring mutual safety. The main situations concerning the types of maneuver are formulated, and the mathematical apparatus for their calculation is described. Based on this overview analysis of typical algorithms for controlling movement in space, the formulation of the scientific problem of developing group algorithms and mathematical methods is synthesized.


Keywords: robotic system, artificial intelligence, control software, trustworthy artificial intelligence, control of behavior algorithms, safe behavior, morality of robotic systems, ethics of robotic systems, mathematical security, safety behavior management

This article is automatically translated.

1 Introduction

One of the priorities ensuring the active development of autonomous robotics is the use of artificial intelligence (AI) elements. Currently, most robotic systems (RTS) with different levels of autonomy contain AI components:

- systems with limited autonomy, for example, drones that return to base on their own when contact with the operator is lost;

- partially autonomous systems, with restrictions on the autonomous performance of certain functions: for example, for combat RTS the decision to use weapons remains with the operator;

- fully autonomous systems, solving their tasks independently.

Equipping RTS with AI components and their further autonomization based on the principles of learning and self-learning increasingly distance their behavior from that of externally controlled robots and bring it ever closer to intelligent behavior, with all the pros and cons of such changes. At the same time, it is precisely the increase in the autonomy of RTS that is an objective trend, driven by the requirements for the use of robotics in a wide variety of fields.

At the same time, a number of unresolved problems remain that periodically lead to accidents and catastrophes with human casualties in the industrial and transport sectors [1,2] and, of course, in the field of armed confrontation [3,4,5]. Many of these problems stem from gaps in the description of the subject area with respect to defining the safety of AI systems. The currently used term "trustworthiness" is not fully identical to the concept of "safety" in the comprehensive understanding of this phenomenon, and no other approaches to describing this factor have yet been proposed.

At the same time, it is the description of the subject area that serves as the main basis for developing the theory and practice of any system. And yet the development of autonomous systems cannot be stopped; it continues along with technological progress, despite certain limitations reflected in the official documents regulating the field of AI.

Thus, the problem of clarifying the subject area with respect to ensuring the safety of the AI controlling RTS has not been solved, which makes the topic of this article relevant.

2 On the existing approach to defining "trusted" systems

Practice shows that the basis of any classification is the operating conditions of the described system.

A review of the list and circumstances of disasters and accidents caused by autonomous and partially autonomous RTS shows that all their causes can be combined into two large groups:

- conditionally natural, related to decisions made by the software controlling the RTS when the task is formulated incorrectly, the situation is recognized erroneously, or the restrictions on the use of the system are taken into account incorrectly;

- random, determined either by errors in behavior algorithms that were not detected during testing, or by decision-making that does not fit into the control-cycle duration set during development and therefore does not allow the RTS software to complete the full control cycle correctly.

An example of the first group of errors is the situation described by US Air Force Colonel Tucker Hamilton in a report at the Future Combat Air & Space Capabilities Summit, in which an AI decided to eliminate its own operator, whom the combat RTS regarded as an obstacle to achieving its goal [6]. In documents regulating the development and use of AI, such errors are sometimes defined as bias.

There are significantly more examples of the second group of errors, namely accidents, both in the transport sector and in the field of robotic weapons [7,8], but their consequences are usually less dangerous.

The errors of the first group are critical, since they directly affect human safety, and they require a reliable solution, for which the scientific community is currently actively searching. In formalized terms, solving these problems is usually described as creating "trusted" artificial intelligence. Some research results on creating safe, or "trustworthy," artificial intelligence are reflected in the 2019 European Commission document "Ethics Guidelines for Trustworthy AI" and in the Russian standard GOST R 59276-2020 "Artificial intelligence systems. Ways to ensure trust. General provisions". In these documents, artificial intelligence is declared trusted if it has the following properties: verifiability, manageability, stability, robustness, security, and fault tolerance. When these requirements are met, the AI is proposed to be considered "verified by ethical characteristics".

However, these documents do not solve a number of problems related to eliminating AI application errors, some of which are critical.

Firstly, they define continuous and unconditional human control over the behavior of artificial intelligence as one of the criteria of trustworthiness; a significant part of the requirements for trusted AI given in these documents can be regarded as implementing a human-centered approach to its development and deployment, based on three groups of principles: transparency, reliability, and human-centricity.

In addition to the above, it can be noted that modern documents in the field of AI regulation recognize that solving most of these problems requires abandoning the treatment of AI as a "black box" [9, 10], which should be ensured by:

- careful control of the system at all stages of the life cycle;

- development of AI regulation mechanisms;

- development of codes and regulations on the use of AI.

However, strict adherence to these principles is problematic for autonomous robotic systems: such control either severely limits their capabilities or is technically difficult to implement.

Secondly, it is not entirely logical to talk about AI ethics in the aspect defined by these documents. The trustworthiness of AI, as presented in these documents, corresponds more to the safety of its algorithms for humans than to a set of measures preventing accidental behavioral errors, which is what could be attributed to "ethics". In robotics, the concept of "ethics" corresponds rather to giving priority to the interests of interacting systems, including other RTS, even at the cost of the optimality of one's own behavior.

Therefore, the "trustworthiness" of AI can nevertheless be defined as an analogue of human morality interpreted as rules of RTS behavior, potentially implemented by means of safety control of RTS software algorithms and the formation of a system of restrictions on their behavior [11]. Variants of such approaches are described in [12,13]. In any case, the problem of safe behavior is gradually being solved, although not in full and with certain terminological flaws.

The situation is more complicated with preventing errors of the second type. They are less critical from the point of view of safety and less obvious to analyze. To address them, it is necessary to develop more flexible rules for constructing control algorithms and a system of restrictions, the prototype for which, with a certain degree of assumption, can be considered the rules not of morality but of human politeness, interpreted for a "trusted" AI.

The emerging contradictions in terminology, despite their apparent insignificance, make it difficult to develop unified rules of conduct for RTS and algorithms for their implementation. This situation requires a solution that ensures the logical development of the theory and practice of algorithmization of the AI controlling RTS.

Taking into account the revealed contradiction in terminology, and based on the method of analogies and the projection of certain rules of human behavior onto the algorithms and restrictions of artificial intelligence behavior, it is proposed to formulate the task of implementing a set of rules of "morality" and "politeness" for various aspects of the safe use of RTS of varying degrees of autonomy, providing for the refinement of regulatory mechanisms and codes of AI conduct. Taking into account that the problem of random errors of AI behavior has been studied less, the article considers a variant of implementing rules that prevent random behavioral errors arising from the uncertainty of the initial data, including errors due to insufficient time for decision-making.

3 Clarification of the concept of "trustworthiness" in terms of the safety of mutual behavior

A typical example of a random error is the risk of collision between ground-based autonomous RTS during joint maneuvering, caused by errors in accounting for mutual movement.

A model of how such a situation arises can be considered using a simple example. Suppose two autonomous vehicles moving at the same speed have to make a turn around an obstacle. The vehicles maneuver independently, without operating as a group. Each of them builds a trajectory that is optimal for itself, and with high probability these trajectories will intersect at the turning point. If the vehicles are equipped with safe-divergence systems, a collision is unlikely: these systems will detect the danger and form an emergency divergence maneuver or an emergency speed change for each RTS. But the problem lies precisely in the fact that the divergence will be an emergency one: divergence options, speed or trajectory changes will have to be formed within a very short period of time, and there will be no question of route optimality. Under such conditions, the probability of a successful divergence will not be one hundred percent.

This is a typical case of uncoordinated interaction of moving agents.

In such a situation, if the AI on board the maneuvering RTS had calculated the situation in advance, the emergency divergence could have been avoided by adjusting the movement parameters beforehand, taking the potential conflict into account. By deviating slightly from the optimal trajectory at an early planning stage, it would be possible to avoid either significant losses of optimality or the risk of a collision during divergence.
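As an illustration of such advance analysis, the following minimal sketch (not taken from the article; the scenario values and the 5 m safe-distance threshold are assumed for the example) predicts the closest point of approach of two RTS moving with constant velocities and flags a future conflict early enough to correct the plan without an emergency maneuver.

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two objects with constant velocities."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    dv2 = float(np.dot(dv, dv))
    t = 0.0 if dv2 < 1e-12 else max(0.0, -float(np.dot(dp, dv)) / dv2)
    return t, float(np.linalg.norm(dp + dv * t))

# Two vehicles planning independently: both head for the same turning point.
t_cpa, d_cpa = closest_approach([0, 0], [2, 2], [20, 0], [-2, 2])
if d_cpa < 5.0:                     # 5.0 m: an illustrative safe-distance threshold
    print(f"conflict predicted in {t_cpa:.0f} s, min separation {d_cpa:.1f} m")
```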

The situation is described in a somewhat simplified form; in reality the background conditions may be more complicated, but this approach is acceptable for clarifying the essence of the problem.

In the described formulation, several options can be used to determine the parameters of the maneuver, all of which can be calculated in advance using a well-known mathematical apparatus:

- plan a mutual divergence by changing the trajectory and/or speed of both RTS;

- plan the divergence of only one of the systems, the one that has fewer external restrictions on its maneuver.

The algorithm of "politeness" or "ethics" in the above situation can be implemented mathematically as calculations of typical divergence options, carried out by the RTS that corrects its trajectory [14,15].

Technically, the simplest way to diverge is to reduce the speed of one of the participants by ΔV = V_init − V_req, from the initial speed to the required one, ensuring divergence at a distance S.

The required speed reduction is easily determined from the equation of motion of a material point, assuming its motion to be rectilinear and uniform and using as initial data the initial speed V_init and the distance S to the turning (divergence) point:
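The formula itself appears in the original article as an image and is not reproduced in this translation. A plausible reconstruction under the stated assumptions (uniform rectilinear motion, with the non-maneuvering RTS clearing the crossing point by at least the safe distance L_min before the slowed RTS arrives) might look as follows; the exact form used by the author may differ:

```latex
% t_clear: time for the non-maneuvering RTS (speed V_T, distance S_T to the
% crossing point) to pass the crossing point by the safe distance L_min
t_{\mathrm{clear}} = \frac{S_T + L_{\min}}{V_T}
% required speed so that the maneuvering RTS covers its distance S to the
% crossing point no faster than t_clear, and the resulting speed reduction
V_{\mathrm{req}} \le \frac{S}{t_{\mathrm{clear}}}, \qquad
\Delta V = V_{\mathrm{init}} - V_{\mathrm{req}}
```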

In addition to changing the speed, divergence can be ensured by changing the trajectory of movement: shifting the turning point so that the RTS pass each other astern or ahead at no less than the minimum safe distance.

Calculations for moving the turning point away from the intersected trajectory can be described by the well-known equations for reaching a given point in space that is offset from the trajectory of the other vehicle by at least the safe distance L_min. The calculated heading angle of the maneuvering RTS, q_M, in such a situation is defined as (Figure 1):

q_M = q_M0 ± α;

where:

V_M is the speed of the maneuvering RTS;

V_T is the speed of the RTS that does not change its trajectory.
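The expression for the correction angle α is likewise given in the original as an image. A standard relation for such offset-divergence problems, the sine rule applied to the velocity triangle, is shown below as an assumption; the author's formula may additionally account for the offset L_min:

```latex
% lead (correction) angle of the maneuvering RTS, from the velocity triangle,
% where q is the heading angle of the RTS that does not change its trajectory
\sin \alpha = \frac{V_T}{V_M}\,\sin q
```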

Fig. 1. Scheme for calculating divergence at a given distance

In the figure:

K_M is the course of the maneuvering RTS;

K_T is the course of the RTS that does not change direction;

q is the initial heading angle of the RTS that does not change the direction of movement;

q_M0 is the initial heading angle of the maneuvering RTS;

q_M is the calculated heading angle of the maneuvering RTS;

T_0 and M_0 are the initial positions of the two RTS;

T_p is the divergence point.

Calculations for diverging at the minimum distance astern of, or ahead of, the other RTS on the intersected trajectory can be implemented through the well-known equations of divergence at the maximum distance ahead along the course or at the minimum distance astern, L_min, where Q is the critical heading angle, the limit at which divergence can still be carried out.

The value of Q is calculated from the ratio of the speeds of the two RTS:
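The corresponding expression is also given in the original as an image. One common form of this condition, following from the ratio of the two speeds, is sketched here purely as an assumption:

```latex
% critical heading angle: divergence by a course change of the maneuvering RTS
% is feasible only while the heading angle does not exceed Q, where
\sin Q = \frac{V_M}{V_T}
```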

Thus, there is a fairly wide range of mathematical apparatus that makes it possible to implement a "polite" divergence in different situations.

Above, only calculation options for special cases of the problem are described. In the general case, such "politeness" should be implemented by a computational and analytical algorithm executed by the RTS before each maneuver (a minimal sketch of such a pipeline is given after the list):

1) assessment of the environment and identification of potentially interacting RTS and biological objects;

2) analysis of coordinates and motion parameters of potentially interacting objects;

3) construction of the optimal trajectory of its own movement;

4) forecast of possible intersections;

5) making a decision on the need to correct the trajectory or speed;

6) choosing the correction method;

7) calculation of the specified motion parameters.
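A minimal sketch of this decision pipeline is given below. It is an illustration only: the class, function, and parameter names are hypothetical, the conflict forecast reuses the constant-velocity closest-approach estimate shown earlier, and the correction step is reduced to a simple speed reduction standing in for the full set of divergence calculations.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Agent:
    """Position and velocity of a potentially interacting object (steps 1-2)."""
    pos: tuple
    vel: tuple

def min_separation(p1, v1, p2, v2):
    """Predicted minimum distance between two agents moving with constant velocities."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    dv2 = float(np.dot(dv, dv))
    t = 0.0 if dv2 < 1e-12 else max(0.0, -float(np.dot(dp, dv)) / dv2)
    return float(np.linalg.norm(dp + dv * t))

def plan_maneuver(own: Agent, others: list, l_min: float = 5.0):
    """Steps 3-7 of the pipeline for one planning cycle (hypothetical sketch)."""
    planned_vel = own.vel                       # 3) 'optimal' plan: keep current course and speed
    conflicts = [a for a in others              # 4) forecast of possible intersections
                 if min_separation(own.pos, planned_vel, a.pos, a.vel) < l_min]
    if not conflicts:                           # 5) decide whether correction is needed
        return planned_vel
    # 6)-7) choose a correction method and compute the new parameters; here the
    # simplest option, a proportional speed reduction, is used for illustration
    return (planned_vel[0] * 0.8, planned_vel[1] * 0.8)

# Example: own vehicle heading east, another vehicle crossing its path.
own = Agent(pos=(0.0, 0.0), vel=(4.0, 0.0))
other = Agent(pos=(40.0, -20.0), vel=(0.0, 2.0))
print(plan_maneuver(own, [other]))   # conflict detected -> reduced speed returned
```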

As the examples show, this algorithm can mathematically be supported by existing methods; the remaining problematic issue is the formation of rules for implementing "polite" behavior, namely deciding which participant should perform the evasion maneuver. This task is more complicated, lying at the boundary of mathematics and logic.

Of course, it can be argued that such tasks are already solved by developing and applying sets of rules and regulations embodied in specialized documents: traffic rules, air and water codes. But this assumption is not entirely correct. Firstly, movement and maneuvering are only special cases of the use of RTS; in reality, their range of application is much wider. Secondly, these special cases are implemented under conditions of significant restrictions, determined, in particular, by the boundaries of the road surface, markings, preset routes and flight levels. Under such conditions a fairly simple set of rules can be formed, such as "passing each other on the left side", "giving way to traffic from above" and the like.

Much more complex and less formalized situations arise in all other cases characteristic of the functioning of RTS of varying degrees of autonomy. Moreover, in addition to poorly formalized decision-making conditions, these situations are usually complicated by the limited time available for making the decision.

Under such conditions, it is necessary to develop and implement dynamic algorithms for solving divergence problems with poorly formalized initial data.

Thus, when developing algorithms for RTS software, a non-trivial scientific task arises: the development of rules of mutual behavior, an analogue of human etiquette. At the same time, program "etiquette", like ordinary etiquette, can be divided into situational and professional, and the latter can be further divided by field of application, with corresponding variations of actions. Incidentally, maneuvering behavior, in the proposed formulation, is part of professional, namely transport, "etiquette". And everything associated with deliberately dangerous use towards others should be resolved within a system of restrictions and prohibitions similar to human morality.

With this approach, the restrictions and, accordingly, the content of the concept of "trusted" AI will be based on two sets of rules, analogues of "etiquette" and "morality", with the type of professional etiquette appropriate to each of the situations under consideration being selected (Figure 2).

Fig. 2. Scheme of the formation of mutual behavior
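One way the rule structure implied by the scheme in Figure 2 could be organized in software is sketched below; the domains, rule contents, and action representation are hypothetical and serve only to illustrate the separation into hard "morality" constraints and domain-specific "etiquette" preferences.

```python
from enum import Enum, auto

class Domain(Enum):
    """Professional domains, each with its own 'etiquette' rule set."""
    TRANSPORT = auto()
    INDUSTRIAL = auto()
    GENERAL = auto()

# 'Morality': hard prohibitions checked for every intended action.
MORALITY_RULES = [
    lambda action: not action.get("endangers_humans", False),
    lambda action: not action.get("deliberately_damages_others", False),
]

# 'Etiquette': softer behavioral preferences selected by domain,
# e.g. which participant yields during a divergence maneuver.
ETIQUETTE_RULES = {
    Domain.TRANSPORT: [lambda action: action.get("yields_if_less_constrained", True)],
    Domain.GENERAL:   [lambda action: True],
}

def action_allowed(action: dict, domain: Domain) -> bool:
    """An action is allowed if it satisfies all morality rules and the domain etiquette."""
    rules = MORALITY_RULES + ETIQUETTE_RULES.get(domain, ETIQUETTE_RULES[Domain.GENERAL])
    return all(rule(action) for rule in rules)

# A planned evasive maneuver in the transport domain:
print(action_allowed({"endangers_humans": False, "yields_if_less_constrained": True},
                     Domain.TRANSPORT))   # True
```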

The algorithm shown in the figure assumes the absence of information exchange between the objects. At the same time, the development of information technology theoretically makes it possible to organize such an exchange, and then the algorithm will be somewhat simplified, since:

- there will be fewer uncertainties in the behavior of other participants in the movement;

- it will be possible to "negotiate" the parameters of mutual divergence with the other participants.

That is, the situation will be reduced to a simpler version of coordinated interaction.

But in any case, an algorithm of mutual behavior based on taking into account the movement parameters of all participants, with some of them deliberately deviating from the optimal parameters, that is, a prototype of "politeness" of behavior, is objectively necessary and requires further development. Equally necessary is the continued development of the previously mentioned safety algorithms within the framework of RTS rules of conduct built by analogy with "morality".

Thus, the use of the analogues of "morality" and "politeness" for RTS will make it possible to solve a number of important problems of the behavioral safety of autonomous robots, especially under poorly formalized interaction conditions, and the basis for such a solution should be precisely the proposed clarification of the subject area.

4 Conclusion

The proposed interpretation of the concepts of "morality" and "politeness" for the trained software algorithms that control autonomous and partially autonomous robotic systems is only one possible solution to the problem, allowing a safety system to be built on the basis of groups of rules and dynamically generated restrictions.

To ensure the solution of the tasks of the integrated safety of AI-controlled RTS, it is necessary to clarify the conceptual framework used in this subject area. Moreover, this is not about directly copying the human concepts of morality and politeness, but about clarifying the concept of "trusted" AI.

This task can be solved in two ways:

- keep the definition of "trusted" AI as a synonym for AI with guaranteed safe behavior algorithms, adding a definition of "polite" AI that ensures safe behavior in group application based on the use of "ethical" algorithms;

- clarify the definition of "trusted" AI by adding the property of "ethical" behavior, which consists in predicting the results of intended actions and verifying them against a set of rules.

In any case, the proposed clarification of the classification will serve as a prerequisite for solving the important scientific and practical task of ensuring the integrated safety of AI-controlled RTS. Moreover, the adoption of the proposed changes to the concept of "trustworthiness" should, in the future, ensure a transition from an approach based on "training" AI to an expanded version that adds "education" to it, which will also serve to increase the safety of AI and the robotic systems it controls.

References
1. Simulin, A., et al. (2015). Some aspects of the use of robotics in military affairs. Conference Collection Center Sociosphere, 27, 67-71. Retrieved from http://sociosphera.com/files/conference/2015/k-05_10_15.pdf
2. Chirov, D. & Novak, K. (2018). Promising areas for the development of special-purpose robotic systems. Security Issues, 2, 50-59. Retrieved from https://doi.org/10.25136/2409-7543.2018.2.22737
3. Khripunov, S., Blagodaryashchev, I., & Chirov, D. (2015). Military robotics: modern trends and development vectors. Trends and Management, 4, 410-422.
4. Pflimlin, É. (2017). Drones et robots: La guerre des futurs. (France: Levallois-Perret).
5. Roosevelt, Ann. (2017). Army Directs Cuts, Adjustments, To FCS. Defense Daily.
6. Hamilton, T. (2023). How AI will Alter Multi-Domain Warfare. Future Combat Air & Space Capabilities Summit, 4. Retrieved from https://www.aerosociety.com/events-calendar/raes-future-combat-air-and-space-capabilities-summit.
7. Tikhanychev, O. (2022). Influence of the Problem of Safety Control of Heuristic Algorithms on the Development of Robotics. In: Shamtsyan, M., Pasetti, M., Beskopylny, A. (eds) Robotics, Machinery and Engineering Technology for Precision Agriculture. Smart Innovation, Systems and Technologies, 247 (Singapore: Springer). Retrieved from https://doi.org/10.1007/978-981-16-3844-2_31
8. Beard, J. (2014). Autonomous weapons and human responsibilities. Georgetown Journal of International Law, 45, 617-681.
9. Tikhanychev, O. (2023). The Control System of Heuristic Algorithms as a Prototype of the "Morality" of Autonomous Robots. International Scientific Forum on Sustainable Development and Innovation. WFSDI-2023. https://doi.org/10.1007/978-981-16-3844-2_31
10. Ćwiąkała, P. (2019). Testing Procedure of Unmanned Aerial Vehicles (UAVs) Trajectory in Automatic Missions. Appl. Sci., 9, 3488. Retrieved from https://doi.org/10.3390/app9173488
11. Johnson, D. (2016). Computer Systems: Moral entities but not moral agents. In: Ethics and Information Technology 8, 195-204. Retrieved from https://doi.org/10.1007/s10676-006-9111
12. Schuller, A. (2017). At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law. Harvard National Security Journal, 8, 379-425.
13. Ukhobotov, V. & Izmestyev, I. (2016). On a pursuit problem under resistance of a medium. Bulletin of the South Ural State University series "Mathematics. Mechanics. Physics", 8(2), 62-66. Retrieved from https://doi.org/10.14529/mmph160208
14. Dubanov, A. (2021). Simulation of the trajectory of the pursuer in space with the method of parallel approach. Program Systems and Computational Methods, 2, 1-10. Retrieved from https://doi.org/10.7256/2454-0714.2021.2.36014
15. Tikhanychev, O. (2023). Self-Check System of Heuristic Algorithms as a "New Moral" of Intelligent Systems. AIP Conference Proceedings, 2700, 040028. Retrieved from https://doi.org/10.1063/5.0124956

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The reviewed article is devoted to clarifying the conceptual apparatus used for intelligent information systems, in particular the concept of "trustworthiness" of artificial intelligence systems. The research methodology is based on a generalization of publications by foreign and domestic scientists on the issues considered in the article, the use of the method of analogies, and mathematical modeling of the risk of collision of ground-based autonomous robotic systems in the process of joint maneuvering with errors in accounting for mutual movement. The authors attribute the relevance of the work to the fact that one of the priorities ensuring the active development of autonomous robotics is the use of artificial intelligence (AI) elements, and to the unresolved problem of defining the safety of AI systems. The scientific novelty of the work, in the reviewer's view, consists in the proposed interpretation of the concepts of "morality" and "politeness" for the trained software algorithms controlling autonomous and partially autonomous robotic systems. Structurally, the article comprises the following sections: Introduction, On the existing approach to defining "trusted" systems, Clarification of the concept of "trustworthiness" in terms of the safety of mutual behavior, Conclusion, and References. The authors point out that the currently used term "trustworthiness" is not fully identical to the concept of "safety" in the comprehensive understanding of this phenomenon, and that other approaches to describing this factor have not yet been proposed. The article reviews the list and circumstances of disasters and accidents caused by autonomous and partially autonomous robotic systems and outlines the algorithm of "politeness" or "ethics" in mathematical form, as calculations of typical divergence options carried out by the robotic system that corrects its trajectory during the joint maneuvering of ground-based autonomous robotic systems. The text of the publication is accompanied by five formulas and illustrated with two figures: "Scheme for calculating divergence at a given distance" and "Scheme of the formation of mutual behavior". In conclusion, two solutions to the problem of clarifying the concept of "trusted" artificial intelligence are indicated: first, to keep the definition of "trusted" AI as a synonym for AI with guaranteed safe behavior algorithms, adding a definition of "polite" AI that ensures safe behavior in group application based on the use of "ethical" algorithms; second, to clarify the definition of "trusted" AI by adding the property of "ethical" behavior, which consists in predicting the results of intended actions and verifying them against a set of rules. The bibliographic list includes 15 sources: scientific publications on the topic in English and Russian. The text contains targeted references to the list of references, confirming an engagement with opposing views. Among the reserves for improving the article, the need to number the formulas and format them according to the accepted rules should be noted, with the decoding of the symbols used given immediately after each mathematical expression, in order to improve and facilitate readers' perception of the material.
The topic of the article is relevant, the material reflects the results of the research conducted by the authors and contains elements of an increment of scientific knowledge, it corresponds to the scope of the journal "Software Systems and Computational Methods" and may arouse readers' interest; the article is recommended for publication after the formulas have been finalized.