Software systems and computational methods
Features of the organization and classification of virtual reality interfaces
Abstract: The subject of this study is the organization of virtual reality interfaces. The author examines in detail such aspects of the topic as user involvement in the virtual environment, the various ways and scenarios of user interaction with virtual reality, user safety in the virtual environment, and the phenomenon of cybersickness and ways to prevent it. The study also considers the use of voice control as an alternative to manual control. Particular attention is paid to the classification of virtual reality interfaces, among which sensory interfaces, interfaces based on user motor skills, sensorimotor interfaces, and interfaces for modeling and developing virtual reality are distinguished and examined in detail. The main conclusion of the study is that a virtual reality interface should be designed with user ergonomics in mind in order to prevent muscle fatigue and cybersickness. In addition, it is very important to ensure user safety when designing virtual environment interfaces: using a virtual reality interface should not lead to injury. Creating an ergonomic and safe virtual reality interface often requires combining different types of interfaces, giving the user an alternative control method or improved navigation. The author's particular contribution to the study of the topic is the description of the classification of virtual reality interfaces.
Keywords: Virtual reality, Voice control, Sensorimotor interfaces, Touch interfaces, Cybersickness, User engagement, Graphical interfaces, Muscle fatigue, User motor skills, Interface design
Virtual reality (VR) is most often defined as a set of technologies that allow people to immerse themselves in a world beyond reality, and is commonly associated with immersive technologies intended for the fullest possible immersion in a computer-generated artificial world by delivering information to the senses, primarily sight, hearing and touch.
To achieve the necessary visual effect in VR, numerous variants of head-mounted displays [3, 4, 5] (Head-Mounted Display, HMD), single projection screens (Powerwall VR) or combined ones (CAVE) are most often used. The sound effect can be supported using headphones, speakers or a full-fledged sound system, while interaction with the virtual environment is carried out using special sensors and tracking systems (optical, magnetic, ultrasonic, inertial, etc.), which calculate the position and orientation of physical objects in space in real time.
Currently, VR-related technologies are used in many industries, including education, medicine, industry, the military [10, 11], and the games and entertainment industry, allowing users to interact with the virtual world and process data obtained from it. Data processing is performed in various ways applicable to solving and understanding complex problems. Thus, VR increases productivity through technological modernization and innovation, serving as a key element in the development of many industries.
When a user interacts with a virtual reality system, the organization of the interface is very important. This problem is very relevant in the design of VR systems and is discussed further.
1 Features of the organization of virtual reality interfaces
1.1 Categories of user activities in a virtual environment
Since virtual reality technology involves partial or complete immersion of the user in a virtual environment, there are many features of the organization of this type of human-machine interaction, which follow from the capabilities of the human senses.
While in a virtual environment, the user has to interact with it; elementary, sensorimotor and cognitive activities are distinguished in such interaction. In general, user actions when interacting with the virtual environment can be grouped into four categories [12, p. 26]:
· observation of the virtual world;
· moving in the virtual world;
· impact on the virtual world;
· communication with other users, the virtual world or with the application.
1.2 Field of view as an important indicator of user engagement
One of the most important aspects of virtual reality is the field of view, which depends on the horizontal and vertical size of the display. User engagement largely depends on this indicator: the wider the field of view, the greater the engagement.
The field of view is of two types: monocular and binocular. The monocular field is the field of view of one eye, spanning roughly 170°–175°, composed of the angle from the pupil to the nose (60°–65°) and the angle from the pupil to the temple (100°–110°). The binocular field of view is the combination of the two monocular fields and provides a person with a visible area of about 200°–220°.
At the intersection of the two monocular fields, a stereoscopic binocular field of view (about 100°) arises, within which a person can perceive objects in 3D. This field of view is the most important, since it is within its limits that a person sees most of the objects of the natural and virtual environment.
Thus, when designing virtual reality interfaces, it is necessary to take into account the field of view so that the elements of the graphical interface are always visible to the user. Theoretically, a virtual reality device should maintain a field of view of 170° horizontally and 145° vertically to ensure complete immersion in the virtual environment, but in practice, the field of view in many modern HMD displays does not exceed 120° horizontally.
When designing virtual reality interfaces, it is also necessary to take into account that the total field of view when moving the head and eyes is very wide: horizontally it is more than 200° on the temple side and about 130° on the nose side, and vertically 140° up and 170° down [12, p. 68]. Thus, the main content should be located in the optimal zone (a distance between the object and the user of 1.25 m to 5 m), taking into account the user's position (sitting, reclining, standing or walking).
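As an illustration of these constraints, the following Python sketch checks whether a piece of content falls inside the stereoscopic field of view and the optimal distance zone. All thresholds and the coordinate convention are illustrative assumptions drawn from the figures above, not values mandated by any particular HMD:

```python
import math

# Illustrative thresholds taken from the discussion above; exact values vary by HMD.
STEREO_FOV_DEG = 100.0   # stereoscopic binocular field of view
MIN_DISTANCE_M = 1.25    # near edge of the optimal content zone
MAX_DISTANCE_M = 5.0     # far edge of the optimal content zone

def in_comfort_zone(x: float, y: float, z: float) -> bool:
    """Check whether a point (head-relative coordinates in metres,
    with +z pointing along the user's gaze) lies inside the
    stereoscopic field of view and the optimal distance zone."""
    distance = math.sqrt(x * x + y * y + z * z)
    if not (MIN_DISTANCE_M <= distance <= MAX_DISTANCE_M):
        return False
    if z <= 0:  # behind the user
        return False
    # Horizontal angle away from the gaze axis
    angle = math.degrees(math.atan2(abs(x), z))
    return angle <= STEREO_FOV_DEG / 2

print(in_comfort_zone(0.0, 0.0, 2.0))  # directly ahead, 2 m away -> True
print(in_comfort_zone(3.0, 0.0, 0.5))  # far off to the side -> False
```

A real placement check would also account for vertical angles and the user's posture; this sketch only demonstrates the horizontal and distance constraints described above.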
1.3 Methods and scenarios of user interaction with virtual reality
The user's interaction with the virtual reality interface occurs mainly with the help of various control panels, controllers, special tactile gloves, as well as gestures. The use of such controls in three-dimensional space is made possible by special tracking systems.
Among the main possible scenarios of user interaction with the VR interface, the following can be distinguished:
· object selection (the object must be selected before the actual action can be performed with it);
· manipulations with the selected object, i.e., the use of functions that are available after its selection;
· placement and movement of objects, i.e., their free positioning anywhere in the horizontal plane and rotation around the vertical axis;
· creation or modification of objects, i.e., the use of functions that allow you to choose between predefined parameters, among which may be, for example, the type of object being created, size, weight, color, etc.;
· data entry, i.e., text input, selection of objects in virtual space, etc.
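The select-then-manipulate rule from the list above can be sketched in Python. The class and method names here are hypothetical, chosen purely for illustration:

```python
class VirtualObject:
    """Minimal object model for the select -> manipulate flow."""
    def __init__(self, name: str):
        self.name = name
        self.selected = False
        self.color = "default"

class InteractionController:
    def __init__(self):
        self.current = None  # the currently selected object, if any

    def select(self, obj: VirtualObject):
        # An object must be selected before any action can be performed on it.
        if self.current is not None:
            self.current.selected = False
        obj.selected = True
        self.current = obj

    def set_color(self, color: str):
        # Manipulation functions become available only after selection.
        if self.current is None:
            raise RuntimeError("select an object first")
        self.current.color = color

ctl = InteractionController()
cube = VirtualObject("cube")
ctl.select(cube)
ctl.set_color("red")
print(cube.selected, cube.color)  # True red
```

The guard in `set_color` enforces the first scenario in the list: manipulation is rejected until an object has been selected.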
1.4 Voice control as an alternative to manual
When designing the interface of a VR system, it should be borne in mind that normal interaction with it may be difficult when the user is already working with the virtual environment. For example, a user of a virtual reality training application may be holding a tool in his hands while studying its capabilities and ways of applying it. In this case, it may be inconvenient or even impossible for the user to call up help for this tool, because his hands are already occupied.
In such applications it is necessary to support voice control, implemented with the help of built-in microphones. Another possible way of interacting with a virtual reality application interface, which can substitute for manual control, is gesture recognition. In a particular case, a virtual reality interface implementing gesture recognition can focus on tracking eye or head movement.
1.5 Prevention of muscle fatigue
When developing a VR interface based on gesture recognition, it is very important to avoid muscle fatigue in the user. Such fatigue can arise, for example, from holding a hand or the head in a fixed position for a long period, or from repeating gestures over a long period of interaction with the virtual reality interface.
In order to avoid muscle fatigue, support for combinations of gestures and voice input should be provided. In addition, it is very important to position interactive elements correctly, placing them at the right height and in the right place for the user, ensuring convenient interaction with the virtual reality system.
1.6 Safety of using virtual reality
The VR interface must also be thought through from the standpoint of safety of use, avoiding potentially dangerous movements and strategies of user behavior when interacting with the interface, because people very often lose touch with the real world while in virtual reality.
If necessary, the interface should be changed so that the user's dangerous movements are replaced by gesture recognition or voice control.
1.7 Virtual Environment Navigation
Another problem of building virtual reality interfaces is the possibility of convenient movement within the environment and interaction with objects located at a distance. According to research, navigation in a virtual environment affects the sense of presence, but this functionality is often limited by the capabilities of the VR system itself and the headset used.
One possible solution is to use standard controls, such as a joystick, keyboard or mouse, as well as adapting such devices to a virtual environment.
1.8 Cybersickness and ways to prevent it
An essential phenomenon that should not be neglected when designing VR interfaces is cybersickness. Its symptoms include nausea, headache, pallor, dry mouth, disorientation and vomiting. Cybersickness occurs when the user visually perceives movement through the virtual environment while remaining physically motionless.
Because of this, using a standard control device, such as a mouse or keyboard, can cause cybersickness by creating a conflict in the sensory system. In such cases, movement through the virtual environment at a constant speed in the direction of the user's gaze is used.
Another possible way to move a user in virtual reality is teleportation. With this method, the user can move freely from one place to another while avoiding cybersickness. The main points or locations can be predefined in the application and selected in various ways, including with standard control devices such as a joystick or mouse.
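A minimal Python sketch of teleport-based locomotion, under the assumption of predefined anchor points; the anchor names and coordinates are invented for illustration:

```python
# Predefined teleport anchors (hypothetical names and coordinates).
# The user jumps between fixed points instead of moving continuously,
# which avoids the sensory conflict behind cybersickness: no
# intermediate self-motion is ever rendered.
ANCHORS = {
    "entrance": (0.0, 0.0, 0.0),
    "workbench": (4.0, 0.0, 2.5),
    "storage": (-3.0, 0.0, 6.0),
}

class Player:
    def __init__(self):
        self.position = ANCHORS["entrance"]

    def teleport(self, anchor: str):
        """Instantly move to a predefined location."""
        if anchor not in ANCHORS:
            raise ValueError(f"unknown anchor: {anchor}")
        self.position = ANCHORS[anchor]

p = Player()
p.teleport("workbench")
print(p.position)  # (4.0, 0.0, 2.5)
```

In a real application the anchor would typically be chosen by pointing a controller at a target marker rather than by name, but the instantaneous position change is the essential ingredient.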
2 Classification of virtual reality interfaces
According to the classification of interfaces described in the literature, there are four main groups of interfaces, described below.
1. Interfaces for modeling and development:
1.1. based on the digitization of real objects;
1.2. based on special software for object modeling, such as computer-aided design (CAD) systems, specialized programming languages;
1.3. based on virtual constructors of object forms, with the help of which it is possible to create objects of any shape.
2. Sensory interfaces:
2.1. graphical, i.e., stereoscopic and monoscopic graphical interfaces, in which virtual reality glasses and helmets, various monitors, projection displays are used to interact with graphics;
2.2. voice, i.e., based on speech and sound recognition;
2.3. touch (tactile) interfaces, i.e., built on the basis of the user's touching of interface elements, in which various modifications of virtual reality gloves, joysticks, etc. are used for user interaction with the system;
2.4. interfaces based on the sense of smell.
3. Interfaces based on the user's motor skills:
3.1. based on the determination of the user's location and orientation, in which special sensors are used;
3.2. based on finger movement detection technology, in which virtual reality gloves are actively used;
3.3. based on the user's walking analysis technology;
3.4. based on user motion capture (motion capture interfaces), using virtual reality suits, special sets of sensors, and image analysis systems;
3.5. command interfaces in which the following types of control are carried out: voice, manual (using a computer mouse, joystick, stylus), using feet (pedal control);
3.6. based on the user's movement, which are based on the use of roller skates, mobile platforms, gyroscopes;
3.7. based on face capture technology, with tracking facial expressions, eye and lip movements.
4. Sensorimotor interfaces, which are command interfaces with feedback, in which various kinds of manipulators, joysticks, virtual reality gloves, exoskeletons are used for control.
As can be seen from the classification presented above, virtual reality interfaces are highly varied in form and application. It should also be noted that the types of interfaces mentioned above are often combined, replacing or complementing each other. Next, let's look at some of the main types of VR interfaces in more detail.
3 Interfaces for virtual reality modeling and development
Interfaces for VR modeling and development represent the VR development environment, i.e. how the developer sees virtual reality and interacts with it.
One of the easiest ways to transfer real-world objects to a virtual environment is to digitize them using special 3D digitizers. The result is a three-dimensional model that can be interacted with at the software level. Such solutions find application, for example, in the design of interior items or tableware, onto whose digitized models various inscriptions can be applied using special tactile devices. An example of such a solution is shown in Figure 1.
Figure 1 – Tactile device and its application in virtual reality
Figure 1 shows the tactile device "Phantom Desktop" (left) and its application for applying Chinese characters to a digitized bowl (right).
When modeling virtual objects using a virtual environment, the creation of a form is achieved using virtual processing of matter, the shape, texture and density of which can be physically felt. This method allows sculptors, designers or computer graphics specialists to create complex forms of virtual objects without using special development tools.
4 Sensory interfaces of virtual reality
Sensory interfaces are used in virtual reality so that the user has the opportunity to feel the objects of the virtual world, thereby enhancing the effect of immersion in the virtual environment.
4.1 Graphical interfaces
The graphical interfaces of VR can be 2D or 3D. A 2D interface is usually a set of buttons on a two-dimensional surface; an example is a settings menu. Note that such a two-dimensional interface can be integrated into a three-dimensional virtual environment. 2D interfaces usually highlight selected objects and display a special pointer (for example, by analogy with a mouse pointer).
The 3D interface in VR is used to bring the possibilities of interaction as close to real life as possible within the virtual environment itself. The use of a 3D interface is effective when it is necessary to simulate physical interaction as it occurs in reality. An example of 2D and 3D interfaces in virtual reality is shown in Figure 2.
Figure 2 – Example of 2D and 3D interfaces in virtual reality
Figure 2 shows a three-dimensional interface (a) for selecting a tool from a desktop in an enterprise using a special controller. After selecting this tool, the settings menu (b) opens, implemented in 2D. This example shows how 2D and 3D interfaces can be combined in virtual reality.
The organization of a 3D interface in virtual reality is a pressing problem and is being actively investigated at present [30-32].
4.2 Voice interfaces
Voice interfaces are quite often used in virtual reality [20, 33, 34]. With the help of voice control, the user can control his position in the virtual environment, zoom in on or rotate an object, modify it, etc. Voice control can serve as the main control method, or as an additional or alternative one.
In fact, interactive speech systems in virtual reality can be considered instances of an indirect control interface, i.e., an interface in which the user delegates a task to the computer, which in turn initiates (and controls) actions in the system and determines the order of solving the problem.
Interactive virtual reality applications are distinguished by the complexity of the subject area and the corresponding increase in the complexity of the language used. VR systems using voice interfaces should minimize both user errors and command recognition errors by asking questions to obtain information not provided by the user, but necessary to complete the task and eliminate further problems in the human-machine dialogue.
The main current problems in the development of virtual reality interfaces with voice control are: speech recognition, understanding of control commands, as well as the organization of human interaction with the virtual environment as a whole.
So, when implementing speech recognition, it should be borne in mind that the larger the user's vocabulary, the greater the probability of recognition errors, i.e., the problem lies in the need to limit the user's language while maintaining comfortable interaction with the virtual reality system.
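One common way to limit the vocabulary is a fixed command grammar. The following Python sketch (the commands themselves are hypothetical) maps a recognized phrase to a command and rejects out-of-grammar input rather than guessing:

```python
# A minimal command grammar (hypothetical commands): restricting the
# vocabulary to a small fixed set keeps recognition errors low while
# still covering the interactions the application needs.
COMMANDS = {
    ("open", "menu"): "OPEN_MENU",
    ("select", "tool"): "SELECT_TOOL",
    ("show", "help"): "SHOW_HELP",
    ("zoom", "in"): "ZOOM_IN",
    ("zoom", "out"): "ZOOM_OUT",
}

def interpret(utterance: str):
    """Map a recognized phrase to a command, or None if out of grammar.
    A real system would ask a clarifying question instead of guessing."""
    words = tuple(utterance.lower().split())
    return COMMANDS.get(words)

print(interpret("Zoom in"))         # ZOOM_IN
print(interpret("make it bigger"))  # None -> prompt the user to rephrase
```

Returning `None` for unrecognized phrases is what lets the system fall back to a clarifying dialogue, as described in the preceding paragraph, instead of executing a misrecognized command.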
Problems with understanding control commands are primarily related to understanding the context on the basis of which they are interpreted, i.e., with the computer's understanding of the user's spoken speech.
The organization of human interaction with virtual reality through a voice control interface often comes down to implementing the VR system as an information agent with which the user converses. The main challenge here is ensuring that the user understands what he is interacting with, and that the interaction takes the form of a dialogue between the person and the VR system that is as close as possible to natural human communication.
4.3 Touch interfaces
Touch interfaces [36, 37], operated through various modifications of virtual reality gloves and joysticks, are actively used in medicine, robotics and many other areas where touch is necessary for a better understanding of the virtual world.
Touch is central to human perception and interaction with the environment, playing a vital role in the formation and maintenance of close social ties and well-being.
This type of interface relies on the fact that human skin contains a huge number of receptors of various kinds [12, pp. 70-75], which increases the user's immersion in the virtual environment and improves control.
5 Interfaces based on the user's motor skills
The role of virtual reality interfaces based on user motor skills is to let the user influence objects of the virtual world by transmitting to the computer information about the user's gestures and speech concerning those objects.
5.1 User location and orientation-based interfaces
Virtual reality interfaces based on determining the user's location and orientation rely on determining the exact position of HMD displays, controls and the user's body in Euclidean space. Since the purpose of such interfaces is to simulate the perception of reality, it is extremely important that positional tracking be precise and accurate, so as not to break the illusion of being in a three-dimensional space.
To do this, sensors repeatedly register signals from transmitters placed on or near the tracked object(s) and then send these data to a computer, which maintains an estimate of their physical location.
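The repeated-registration idea can be sketched as a running estimate updated from noisy readings. This is a deliberately simplified exponential smoother, not the filter any particular tracker actually uses (real systems typically employ Kalman-style filters):

```python
# Fold repeated, noisy transmitter readings into a running position
# estimate via exponential smoothing. The smoothing factor is an
# illustrative choice, not a value from any real tracking system.
ALPHA = 0.3  # weight given to the newest reading

def update_estimate(estimate, reading):
    """Blend the previous (x, y, z) estimate with a new sensor reading."""
    return tuple(ALPHA * r + (1 - ALPHA) * e for e, r in zip(estimate, reading))

estimate = (0.0, 0.0, 0.0)
for reading in [(1.0, 0.0, 0.0), (1.1, 0.1, 0.0), (0.9, -0.1, 0.0)]:
    estimate = update_estimate(estimate, reading)
print(estimate)  # converges toward the cluster of readings near (1, 0, 0)
```

The trade-off is the usual one: a larger `ALPHA` tracks fast motion with less lag but passes more sensor noise through to the rendered pose.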
5.2 Interfaces based on user motion capture technology
Virtual reality systems with interfaces based on user motion capture technology are often used in medicine to simulate operations, as well as in the automotive industry (mainly for virtual assembly), in the film industry (to capture human movement and create animation with captured movements), in the gaming industry and many other areas .
There are several possible approaches to implementing a virtual reality system based on motion capture technology: optical, mechanical, magnetic and inertial. The most common is optical motion capture, as shown in Figure 3.
Figure 3 – Example of a virtual reality system based on optical motion capture technology
The VR system shown in Figure 3 is arranged as follows. Special markers are placed at certain points on the user's body, and video cameras in fixed positions track their movement. The collected data is subsequently analyzed by image processing software.
The approach based on optical motion capture technology makes it possible to obtain very accurate movement data inside a specially designed room, provided that a large number of cameras are used. This approach, which has proven itself well in specially prepared rooms, is not applicable in open spaces. Recently, relatively inexpensive optical motion tracking technologies have been investigated that are attractive because they require no markers or special setup.
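At the core of optical capture is triangulation: each camera constrains a marker to a ray, and rays from several cameras are intersected. A minimal 2D Python sketch with two cameras follows; the camera positions and bearing angles are invented for illustration, and real systems solve the equivalent problem in 3D over many cameras:

```python
import math

def triangulate(cam1, bearing1_deg, cam2, bearing2_deg):
    """Locate a marker in the horizontal plane from two fixed cameras,
    each reporting the bearing (angle from the +x axis) at which it
    sees the marker. Intersecting the two rays gives the position."""
    x1, y1 = cam1
    x2, y2 = cam2
    d1x, d1y = math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg))
    d2x, d2y = math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg))
    # Solve cam1 + t*d1 == cam2 + s*d2 for t by Cramer's rule.
    denom = d1x * (-d2y) - d1y * (-d2x)
    t = ((x2 - x1) * (-d2y) - (y2 - y1) * (-d2x)) / denom
    return (x1 + t * d1x, y1 + t * d1y)

# Cameras at (0,0) and (10,0); a marker at (5,5) is seen at 45° and 135°.
print(triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 135.0))
```

With many cameras the rays never intersect exactly because of noise, which is why production systems solve a least-squares version of this problem instead.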
5.3 Interfaces based on face capture technology
Virtual reality interfaces based on face capture technology allow the VR system to capture the user's facial movements and determine his emotional state, such as joy, sadness, fear, anger, surprise, dislike, pensiveness, distrust, concern, withdrawal, suspicion, etc., and transfer it to the virtual environment. Interfaces of this type can be divided into three categories: interfaces analyzing facial movements, eye movements, and lip movements [43-45].
An example of such an interface is shown in Figure 4.
Figure 4 – Determination of user emotions based on face capture technology
Figure 4 illustrates a virtual reality interface based on face capture technology: a user wearing an HMD display used to track facial expressions (A); the inside of the HMD display, with infrared LEDs along the radius of the eyepieces highlighted with red circles (B); the image of the user's eyes (C); and the output of a dynamically generated avatar (D) that displays the user's mood.
6 Sensorimotor interfaces
The role of sensorimotor interfaces of virtual reality is to transmit the user's movements to the virtual environment and to provide the user with a tangible response, which, in addition to the image, can take the form of various vibrations, impacts and sounds.
Interfaces of this kind have some similarities with motion simulators that transmit changes in orientation and acceleration to the user. They are used when it is necessary to create the illusion of materialization of objects present in a virtual environment: a response is given to a part of the body in contact with a virtual object (for example, a push or vibration), which the user might have felt when interacting with a similar object in the real world. This effect is used to improve the user's immersion in the virtual environment.
Sensorimotor interfaces usually include controls such as manipulators, joysticks, virtual reality gloves, exoskeletons, etc. The main problem in building such interfaces is that they must be mounted on a rigid frame, which usually causes the user some inconvenience by restricting his movements.
An example of such an interface is shown in Figure 5.
Figure 5 – Sensorimotor interface of virtual reality "VibroTac"
The sensorimotor virtual reality interface "VibroTac" presented in Figure 5 includes two manipulators with which the user can carry out virtual assembly and maintenance checks. Thanks to such a virtual reality interface, engineers can assess the maintainability or the possibility of mounting mechanical parts in complex products such as cars or airplanes.
"VibroTac" notifies the user about the contact of his hand and objects in virtual reality: upon contact, the user is notified by vibration of the manipulator and a specific sound similar to a knock. Moreover, the deeper the user's hand penetrates into the virtual object, the stronger the user feels the vibration.
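The depth-dependent vibration described above can be sketched as a simple clamped linear mapping. The constants and function name are illustrative assumptions, not VibroTac's actual control law:

```python
# Vibration strength grows with how deeply the hand penetrates the
# virtual object, clamped to the actuator's maximum. The saturation
# depth here is an invented example value.
MAX_INTENSITY = 1.0
DEPTH_AT_MAX_M = 0.05  # penetration depth at which vibration saturates

def vibration_intensity(penetration_depth_m: float) -> float:
    """Linearly map penetration depth to a vibration intensity in [0, 1]."""
    if penetration_depth_m <= 0:
        return 0.0  # no contact, no feedback
    return min(MAX_INTENSITY, penetration_depth_m / DEPTH_AT_MAX_M)

print(vibration_intensity(0.0))    # 0.0  (no contact)
print(vibration_intensity(0.025))  # 0.5  (halfway to saturation)
print(vibration_intensity(0.2))    # 1.0  (clamped)
```

A real device would additionally modulate frequency and trigger the knock-like sound mentioned above at the moment of first contact; the sketch covers only the intensity-versus-depth relationship.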
Virtual reality technologies are among the most promising at the moment and are used in many areas of industry, often in the form of various simulators and training applications, as well as in the army, medicine and entertainment industry.
In this paper, the main types of virtual reality interfaces were considered, including sensory and sensorimotor interfaces, interfaces based on the user's motor skills, and interfaces for modeling and developing virtual reality.
The paper describes the features of the organization of virtual reality interfaces, noting that creating high-quality, well-designed and user-friendly interfaces requires taking into account many user characteristics, including cognitive features, the senses of smell and touch, and anthropometric features.
The virtual reality interface should be ergonomic: the user should not experience muscle fatigue. To this end, a combination of the different types of interfaces considered is often used in practice, providing the user with alternative control, improved navigation, and a better overall quality of use of the application.
In addition, the paper concludes that it is necessary to design virtual reality interfaces taking into account user safety. The interface and user behavior in the virtual environment should not lead to injuries, falls, etc.
The paper also considers the phenomenon of cybersickness, which manifests itself as nausea and headache and arises when the user experiences a sensation of movement while his body remains motionless. The interface of a VR application should provide smooth navigation through the virtual environment to avoid such phenomena.