Features of the organization and classification of virtual reality interfaces

Kiryanov Denis Aleksandrovich

ORCID: 0000-0001-8502-8333

Master's Degree, Department of Information Systems and Software Engineering, Baltic State Technical University "Voenmeh" named after D. F. Ustinov

190005, Russia, Saint Petersburg, 1st Krasnoarmeyskaya str., 1

dennis.kiryanov@gmail.com

DOI: 10.7256/2454-0714.2022.2.38214

EDN: ZPEWAU

Received: 06-06-2022

Published: 05-07-2022


Abstract: The subject of the study is the features of the organization of virtual reality interfaces. The author examines in detail such aspects of the topic as user involvement in the virtual environment, the various ways and scenarios of user interaction with virtual reality, user safety in the virtual environment, and the phenomenon of cybersickness and ways to prevent it. The study also considers voice control as an alternative to manual control. Particular attention is paid to the classification of virtual reality interfaces, among which sensory interfaces, interfaces based on user motor skills, sensorimotor interfaces, and interfaces for modeling and developing virtual reality are distinguished and considered in detail. The main conclusion of the study is that a virtual reality interface should be designed with user ergonomics in mind, to prevent muscle fatigue and cybersickness. In addition, it is very important to ensure user safety when designing virtual environment interfaces: using a virtual reality interface should not lead to injury. Creating an ergonomic and safe virtual reality interface often requires a combination of different types of interfaces, through which the user gains access to an alternative control method or improved navigation. A special contribution of the author is the description of a classification of virtual reality interfaces.


Keywords:

Virtual reality, Voice control, Sensorimotor interfaces, Touch interfaces, Cybersickness, User engagement, Graphical interfaces, Muscle fatigue, User motor skills, Interface design


Introduction

Virtual reality (VR) is very often defined as a set of technologies that allow people to immerse themselves in a world beyond reality [1]. It is often associated with immersive technologies, which aim at the fullest possible immersion in an artificial, computer-generated world by delivering information to the senses, mainly sight, hearing and touch [2].

To achieve the necessary visual effect in VR, numerous variants of head-mounted displays (HMDs) [3, 4, 5], single projection screens (Powerwall VR) or combined projection screens (CAVE) are very often used [6]. The sound effect can be supported by headphones, speakers or a full sound system, while interaction with the virtual environment is carried out using special sensors and tracking systems (optical, magnetic, ultrasonic, inertial, etc.) that calculate the position and orientation of physical objects in space in real time.

Currently, VR-related technologies are used in many industries, including education [7], medicine [8], industry [9], the army [10, 11], and the games and entertainment industry, allowing users to interact with the virtual world and process data obtained from it. Data processing is performed in various ways applicable to solving and understanding complex problems. Thus, VR increases productivity through technological modernization and innovation, being a key element in the development of many industries.

When a user interacts with a virtual reality system, the organization of the interface is very important. This problem is highly relevant in the design of VR systems and is discussed below.

 

 

1 Features of the organization of virtual reality interfaces

1.1 Categories of user activities in a virtual environment

Since virtual reality technology involves partial or complete immersion of the user in a virtual environment, this type of human-machine interaction has many organizational features that follow from the capabilities of the human senses.

This is because a user in a virtual environment has to interact with it. In such interaction, elementary, sensorimotor and cognitive activities are distinguished. In general, user actions when interacting with the virtual environment can be grouped into four categories [12, p. 26]:

· observation of the virtual world;

· moving in the virtual world;

· impact on the virtual world;

· communication with other users, the virtual world or with the application.

1.2 Field of view as an important indicator of user engagement

One of the most important aspects of virtual reality is the field of view, which depends on the horizontal and vertical size of the display. User engagement largely depends on this indicator: the wider the field of view, the greater the engagement [13].

The field of view is of two types: monocular and binocular. The monocular field of view is that of one eye, lying within 170°–175°, and consists of the angle from the pupil to the nose (60°–65°) and the angle from the pupil to the temple (100°–110°). The binocular field of view is the combination of the two monocular fields and provides a person with a visible area of about 200°–220° [14].

At the intersection of the two monocular fields of view, a stereoscopic binocular field of view (about 100°) arises [15], within which a person can perceive objects in 3D. This field of view is the most important, since it is within its limits that a person sees most objects of the natural and virtual environment.
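
As a toy illustration of how the figures above combine (a sketch under the stated assumptions; exact angles vary between individuals, and effective stereo fusion is narrower than the raw geometric overlap):

```python
# Toy arithmetic for the field-of-view figures above (degrees).
# Assumption: monocular field = nasal angle + temporal angle, the two
# nasal fields overlap in the centre, and the stereo zone is that overlap.

NASAL = 62.5      # pupil-to-nose angle, midpoint of the cited 60-65 range
TEMPORAL = 105.0  # pupil-to-temple angle, midpoint of the cited 100-110 range

monocular = NASAL + TEMPORAL  # ~167.5, close to the cited 170-175
binocular = 2 * TEMPORAL      # ~210, within the cited 200-220
stereo_overlap = 2 * NASAL    # ~125 geometrically; the article cites ~100,
                              # since usable stereoscopic vision is narrower

print(f"monocular ≈ {monocular}°, binocular ≈ {binocular}°, "
      f"stereo overlap ≈ {stereo_overlap}°")
```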

Thus, when designing virtual reality interfaces, the field of view must be taken into account so that graphical interface elements are always visible to the user. Theoretically, a virtual reality device should provide a field of view of 170° horizontally and 145° vertically to ensure complete immersion in the virtual environment, but in practice the field of view of many modern HMDs does not exceed 120° horizontally.

When designing virtual reality interfaces, it must also be taken into account that the total field of view when moving the head and eyes is very wide: horizontally it is more than 200° toward the temple and about 130° toward the nose, and vertically 140° up and 170° down [12, p. 68]. Thus, the main content should be located in the optimal zone (a distance between the object and the user of 1.25 m to 5 m), taking into account the user's position (sitting, reclining, standing or walking) [16].
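
The placement rule above can be expressed as a simple check. This is a minimal sketch of our own: the distance bounds are the cited 1.25–5 m, while the angular limits are conservative illustrative values well inside a typical ~120° HMD field of view; the function name is hypothetical.

```python
import math

def in_comfort_zone(x: float, y: float, z: float) -> bool:
    """Rough check that a UI element at (x, y, z) metres relative to the
    user's head (z = forward, x = right, y = up) lies in the optimal zone:
    1.25-5 m away and inside a conservative central field of view."""
    distance = math.sqrt(x * x + y * y + z * z)
    if not (1.25 <= distance <= 5.0):
        return False
    azimuth = math.degrees(math.atan2(x, z))    # horizontal angle off centre
    elevation = math.degrees(math.atan2(y, z))  # vertical angle off centre
    # Keep main content well inside the ~120° horizontal FOV of typical HMDs.
    return abs(azimuth) <= 45 and abs(elevation) <= 30

print(in_comfort_zone(0.5, -0.2, 2.0))  # True: slightly right, below, 2 m away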

1.3 Methods and scenarios of user interaction with virtual reality

The user interacts with the virtual reality interface mainly through various control panels, controllers [17] and special tactile gloves, as well as with gestures [18]. The use of such controls in three-dimensional space is made possible by special tracking systems [19].

Among the main possible scenarios of user interaction with the VR interface, the following can be distinguished [20] (a minimal sketch of the resulting interaction flow follows the list):

· object selection (the object must be selected before the actual action can be performed with it);

· manipulations with the selected object, i.e., the use of functions that are available after its selection;

· placement and movement of objects, i.e., their free positioning anywhere in the horizontal plane and rotation around the vertical axis;

· creation or modification of objects, i.e., the use of functions that allow you to choose between predefined parameters, among which may be, for example, the type of object being created, size, weight, color, etc.;

· data entry, i.e., text input, highlighting of objects in virtual space, etc.
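
To make explicit the constraint from the first item (selection must precede any action on an object), the scenarios above can be written as a small state machine. This is our own illustration; the state names and transition table are not from [20].

```python
from enum import Enum, auto

class Interaction(Enum):
    """The interaction scenarios listed above, as states of a simple flow."""
    IDLE = auto()
    SELECTED = auto()      # object selection must precede any action on it
    MANIPULATING = auto()  # functions available only after selection
    PLACING = auto()       # free positioning / rotation around vertical axis
    EDITING = auto()       # choosing between predefined object parameters
    ENTERING_DATA = auto()

# Selection gates every object-related scenario, per the first list item.
ALLOWED = {
    Interaction.IDLE: {Interaction.SELECTED, Interaction.ENTERING_DATA},
    Interaction.SELECTED: {Interaction.MANIPULATING, Interaction.PLACING,
                           Interaction.EDITING, Interaction.IDLE},
    Interaction.MANIPULATING: {Interaction.SELECTED},
    Interaction.PLACING: {Interaction.SELECTED},
    Interaction.EDITING: {Interaction.SELECTED},
    Interaction.ENTERING_DATA: {Interaction.IDLE},
}

def transition(state: Interaction, target: Interaction) -> Interaction:
    if target in ALLOWED[state]:
        return target
    raise ValueError(f"cannot go from {state.name} to {target.name}")
```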

1.4 Voice control as an alternative to manual control

When designing the interface of a VR system, it should be borne in mind that normal interaction with it may be difficult when the user is already occupied with the virtual environment. For example, a user of a virtual reality training application may hold a tool in his hands while studying its capabilities and applications. In this case, it may be inconvenient or even impossible for the user to call up help on this tool, because his hands are already busy.

Such applications should provide support for voice control via built-in microphones. Another possible way to interact with the interface of a virtual reality application, which can substitute for manual control, is gesture recognition. In a particular case, a virtual reality interface implementing gesture recognition can focus on tracking the movement of the eyes or head [21].

1.5 Prevention of muscle fatigue

When developing a VR interface based on gesture recognition, it is very important to avoid user muscle fatigue. Such fatigue can occur, for example, when the hand or head is held in a certain position for a long period, or when gestures are repeated over a long period of interaction with the virtual reality interface.

To avoid user muscle fatigue, support for combinations of gestures and voice input should be provided. In addition, it is very important to position interactive elements correctly, placing them at a height and location convenient for the user and thereby ensuring comfortable interaction with the virtual reality system [16].

1.6 Safety of using virtual reality

The VR interface must also be thought through from the point of view of safety of use, avoiding potentially dangerous movements and behavior strategies when interacting with the interface, because people very often lose touch with the real world while in virtual reality [22].

If necessary, the interface should be changed so that dangerous user movements are replaced by gesture recognition or voice control.

1.7 Virtual Environment Navigation

Another problem in building virtual reality interfaces is providing convenient movement within the environment and interaction with objects located at a distance. According to research, navigation in a virtual environment affects the sense of presence [23], but this functionality is often limited by the capabilities of the VR system itself and the headset used.

One possible solution is to use standard controls, such as a joystick, keyboard or mouse, and to adapt such devices to the virtual environment.

1.8 Cybersickness and ways to prevent it

An essential phenomenon that should not be neglected when designing VR interfaces is cybersickness [24]. Its symptoms are nausea, headache, pallor, dry mouth, disorientation and vomiting [25]. Cybersickness occurs when a user visually perceives that he is moving in the virtual environment while remaining physically motionless.

Therefore, using a standard control device such as a mouse or keyboard can cause a conflict in the sensory system and lead to cybersickness. In such cases, movement in the virtual environment at a constant speed in the direction of the user's gaze is used.

Another possible way to move a user in virtual reality is teleportation. With this method, the user can move freely from one place to another while avoiding cybersickness. The main points or locations can be predefined in the application and selected in various ways, including with standard control devices such as a joystick or mouse [16].
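
A minimal sketch of teleport locomotion between predefined points, as described above. All names (Vec3, fade_out, fade_in, teleport) are our illustration; a real engine or OpenXR runtime provides its own equivalents, typically with a brief screen fade to mask the jump.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

# Predefined destinations, as the text suggests (coordinates are made up).
TELEPORT_POINTS = [Vec3(0, 0, 0), Vec3(4, 0, 2), Vec3(-3, 0, 6)]

def fade_out() -> None:
    pass  # hypothetical: briefly darken the view

def fade_in() -> None:
    pass  # hypothetical: restore the view

def teleport(target_index: int) -> Vec3:
    """Instant relocation avoids the continuous visual motion that,
    combined with a stationary body, triggers cybersickness."""
    fade_out()                               # short fade masks the jump
    new_pose = TELEPORT_POINTS[target_index]
    fade_in()
    return new_pose

print(teleport(1))  # Vec3(x=4, y=0, z=2)
```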

 

 

2 Classification of virtual reality interfaces

There are many works devoted to the classification of virtual reality interfaces. Many types of VR interfaces are discussed in [12], and a thorough study on this topic was conducted in [26].

According to the classification of interfaces described in [26], there are four main groups of interfaces described below.

1. Interfaces for modeling and development:

1.1. based on the digitization of real objects;

1.2. based on special software for object modeling, such as computer-aided design (CAD) systems, specialized programming languages;

1.3. based on virtual constructors of object forms, with the help of which it is possible to create objects of any shape.

2. Sensory interfaces:

2.1. graphical, i.e., stereoscopic and monoscopic graphical interfaces, in which virtual reality glasses and helmets, various monitors, and projection displays are used to interact with graphics;

2.2. voice, i.e., based on speech and sound recognition;

2.3. touch interfaces, i.e., built around the user touching interface elements, in which various modifications of virtual reality gloves, joysticks, etc. are used for user interaction with the system;

2.4. interfaces based on the sense of smell.

3. Interfaces based on the user's motor skills:

3.1. based on the determination of the user's location and orientation, in which special sensors are used;

3.2. based on finger movement detection technology, in which virtual reality gloves are actively used;

3.3. based on the user's walking analysis technology;

3.4. based on user motion capture (motion capture interfaces), using virtual reality suits, special sets of sensors, and image analysis systems;

3.5. command interfaces in which the following types of control are carried out: voice, manual (using a computer mouse, joystick, stylus), using feet (pedal control);

3.6. based on the user's movement, which are based on the use of roller skates, mobile platforms, gyroscopes;

3.7. based on face capture technology, with tracking facial expressions, eye and lip movements.

4. Sensorimotor interfaces, which are command interfaces with feedback, in which various kinds of manipulators, joysticks, virtual reality gloves, exoskeletons are used for control.

As can be seen from the classification presented above, virtual reality interfaces are highly diverse in form and application. It should also be noted that the types of interfaces mentioned above are often combined, replacing or complementing each other. Some of the main types of VR interfaces are considered in more detail below.

 

 

3 Interfaces for virtual reality modeling and development

Interfaces for VR modeling and development represent the VR development environment, i.e., the way the developer sees virtual reality and interacts with it.

One of the easiest ways to transfer real-world objects to a virtual environment is to digitize them using special 3D digitizers. The result is a three-dimensional model that can be interacted with at the software level. Such solutions find application, for example, in the design of interior items or tableware, whose digitized models can be inscribed using special tactile devices. An example of such a solution [27] is shown in Figure 1.

Figure 1 – Tactile device and its application in virtual reality

 

Figure 1 shows the tactile device "Phantom Desktop" (left) and its use for applying Chinese characters to a digitized bowl (right).

When modeling virtual objects in a virtual environment, a shape is created by virtual processing of matter whose form, texture and density can be physically felt. This method allows sculptors, designers and computer graphics specialists to create complex shapes of virtual objects without using special development tools.

Specialized CAD systems [28] and various programming languages, such as C/C++ and Pascal, are also very often used to create three-dimensional models [26].

 

 

4 Sensory interfaces of virtual reality

Sensory interfaces are used in virtual reality so that the user can feel the objects of the virtual world, thereby enhancing the effect of immersion in the virtual environment.

4.1 Graphical interfaces

VR graphical interfaces can be 2D or 3D. A 2D interface is usually a set of buttons on a two-dimensional surface; an example is a settings menu. It is worth noting that such a two-dimensional interface can be integrated into the three-dimensional virtual environment. When 2D interfaces are used, selected objects are usually highlighted and a special pointer is displayed (by analogy with a mouse pointer).

The 3D interface in VR is used to bring interaction as close to real life as possible within the context of the virtual environment itself. A 3D interface is effective when physical interaction must be simulated. An example of 2D and 3D interfaces in virtual reality [29] is shown in Figure 2.

Figure 2 – Example of 2D and 3D interfaces in virtual reality

Figure 2 shows a three-dimensional interface (a) for selecting a tool from a workbench in an industrial setting using a special controller. After the tool is selected, a settings menu (b) implemented in 2D opens. This example shows how 2D and 3D interfaces can be combined in virtual reality.

The organization of a 3D interface in virtual reality is a highly relevant problem and is being actively investigated [30-32].

4.2 Voice interfaces

Voice interfaces are quite often used in virtual reality [20, 33, 34]. With voice control, the user can control his position in the virtual environment, zoom in on or rotate an object, modify it, etc. Voice control can be used as the main method or as an additional or alternative one.

In fact, interactive speech systems in virtual reality can be considered instances of an indirect control interface, i.e., an interface in which the user delegates a task to the computer, which in turn initiates (and controls) actions in the system and determines the order in which the task is solved [35].

Interactive virtual reality applications are distinguished by the complexity of their subject areas and a corresponding increase in the complexity of the language used. VR systems with voice interfaces should minimize both user errors and command recognition errors by asking questions to obtain information that the user has not provided but that is necessary to complete the task and avoid further problems in the human-machine dialogue.

The main current problems in the development of voice-controlled virtual reality interfaces are speech recognition, understanding of control commands, and the organization of human interaction with the virtual environment as a whole.

Thus, when implementing speech recognition, it should be borne in mind that the larger the user's vocabulary, the higher the probability of recognition errors; the problem is therefore to limit the user's language while maintaining comfortable interaction with the virtual reality system.
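
A minimal sketch of this vocabulary-limiting idea: instead of open-ended speech, recognised text is matched against a small closed command grammar, which keeps the recognition error rate down. The grammar and function names below are our illustration, not a particular system's API.

```python
# Closed command grammar: each pattern is a prefix of allowed utterances.
COMMANDS = {
    ("select",): "SELECT_OBJECT",
    ("rotate", "left"): "ROTATE_LEFT",
    ("rotate", "right"): "ROTATE_RIGHT",
    ("zoom", "in"): "ZOOM_IN",
    ("zoom", "out"): "ZOOM_OUT",
    ("help",): "SHOW_HELP",
}

def interpret(recognised_text: str):
    """Map a recognised utterance to a command, or None if out of grammar.
    An out-of-grammar result should trigger a clarifying question rather
    than a guess, per the dialogue strategy described above."""
    words = tuple(recognised_text.lower().split())
    for pattern, command in COMMANDS.items():
        if words[:len(pattern)] == pattern:
            return command
    return None

print(interpret("rotate left slowly"))  # ROTATE_LEFT
print(interpret("make it bigger"))      # None -> ask the user to rephrase
```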

Problems with understanding control commands are primarily related to understanding the context in which they are interpreted, i.e., to the computer's comprehension of the user's spoken language.

The organization of human interaction with virtual reality through a voice control interface often comes down to implementing the VR system as an information agent with which the user interacts. The main challenge in this case is to ensure that the user understands what he is interacting with, and that the interaction takes the form of a dialogue between the person and the VR system that resembles natural human communication as closely as possible.

4.3 Touch interfaces

Touch interfaces [36, 37], operated through various modifications of virtual reality gloves and joysticks, are actively used in medicine, robotics and many other areas where touch is necessary for a better understanding of the virtual world.

Touch is central to human perception and interaction with the environment, playing a vital role in the formation and maintenance of close social ties and well-being [38].

This type of interface relies on the fact that a huge number of receptors of various kinds are located in human skin [12, pp. 70-75], which increases the user's immersion in the virtual environment and improves control.

            

5 Interfaces based on the user's motor skills

The role of virtual reality interfaces based on user motor skills is to give the user the ability to influence objects of the virtual world by transmitting to the computer information about the user's gestures and speech concerning those objects [26].

5.1 User location and orientation-based interfaces

Virtual reality interfaces based on determining the user's location and orientation rely on determining the exact position of the HMD, the controls and the user's body in Euclidean space [39]. Since the purpose of such interfaces is to simulate the perception of reality, it is extremely important that positional tracking be accurate and precise, so as not to break the illusion of being in a three-dimensional space.

To do this, sensors repeatedly register signals from transmitters on or near the tracked object(s) and then send this data to a computer, which estimates the object's physical location [40].
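
One common way to turn such signals into a position is trilateration: given ranges from fixed transmitters (for example, derived from ultrasonic time-of-flight), the tracked object's position is recovered by linearising the sphere equations. This is a sketch of the general technique, not of any specific tracking product; the transmitter layout is made up.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """anchors: (n, 3) known transmitter positions, n >= 4;
    distances: (n,) measured ranges to the tracked object."""
    # Subtract the first sphere equation from the rest -> linear system.
    A = 2 * (anchors[1:] - anchors[0])
    b = (distances[0] ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0, 0, 0], [3, 0, 0], [0, 3, 0], [0, 0, 3]], float)
true_pos = np.array([1.0, 1.5, 0.5])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # simulated measurements
print(trilaterate(anchors, dists))                  # ~[1.0, 1.5, 0.5]
```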

5.2 Interfaces based on user motion capture technology

Virtual reality systems with interfaces based on motion capture technology are often used in medicine to simulate operations, in the automotive industry (mainly for virtual assembly), in the film industry (to capture human movement and create animation from the captured movements), in the gaming industry, and in many other areas [41].

There are several possible approaches to implementing motion capture in a virtual reality system: optical, mechanical, magnetic and inertial. The most common approach is optical motion capture, shown in Figure 3 [42].

 

Figure 3 – Example of a virtual reality system based on optical motion capture technology

The VR system shown in Figure 3 works as follows. Special markers are placed at certain points on the user's body, and video cameras in fixed positions track their movement. The collected data is then analyzed by image processing software.

Optical motion capture makes it possible to obtain very accurate movement data inside a specially designed room, provided that a large number of cameras are used. This approach, which has proven itself in specially prepared rooms, is not applicable in open spaces. Recently, relatively inexpensive optical motion tracking technologies have been investigated that are attractive because they require neither markers nor special setup.
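
At the core of the optical approach is triangulation: each camera that sees a marker defines a viewing ray, and the marker's 3D position is estimated where the rays (nearly) cross. A minimal two-camera sketch of this geometric step, with made-up camera poses (real systems use many calibrated cameras and least-squares over all of them):

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """o1, o2: camera centres; d1, d2: unit ray directions toward the marker.
    Returns the midpoint of the closest points between the two rays."""
    o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
    # Solve for ray parameters t1, t2 minimising |(o1 + t1 d1) - (o2 + t2 d2)|.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2

marker = triangulate(o1=[0, 0, 0], d1=[0.6, 0.0, 0.8],
                     o2=[2, 0, 0], d2=[-0.6, 0.0, 0.8])
print(marker)  # ≈ [1.0, 0.0, 1.33]: where the two rays cross
```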

5.3 Interfaces based on face capture technology

Virtual reality interfaces based on face capture technology allow the virtual reality system to capture the user's facial movements, determine his mood (joy, sadness, fear, anger, surprise, dislike, reverie, distrust, concern, withdrawal, suspicion, etc.) and transfer it to the virtual environment. Interfaces of this type can be divided into three categories: analysis of facial movements, of eye movements, and of lip movements [43-45].

An example of such an interface is shown in Figure 4.

 

Figure 4 – Determination of user emotions based on face capture technology

Figure 4 illustrates a virtual reality interface based on face capture technology: a user wearing an HMD used to track facial expressions (A); the inside of the HMD, with infrared LEDs visible along the radius of the eyepieces, highlighted with red circles (B); the image of the user's eyes (C); and the output, a dynamically generated avatar (D) that displays the user's mood.

 

6 Sensorimotor interfaces

The role of sensorimotor interfaces in virtual reality is to transmit the user's movements to the virtual environment and to give the user a tangible response which, in addition to the image, can take the form of various vibrations, shocks and sounds.

Interfaces of this kind have some similarity to motion simulators that transmit changes in orientation and acceleration to the user. They are used when it is necessary to create the illusion that objects in the virtual environment are material: a response (for example, a push or vibration) is applied to the part of the body in contact with a virtual object, matching what the user might have felt when interacting with a similar object in the real world. This effect improves the user's immersion in the virtual environment.

Sensorimotor interfaces usually include controls such as manipulators, joysticks, virtual reality gloves, exoskeletons, etc. The main problem in building such interfaces is that they must be mounted on a rigid frame, which usually causes the user some inconvenience by restricting his movements [26].

An example of such an interface is shown in Figure 5 [46].


Figure 5 – Sensorimotor interface of virtual reality "VibroTac"

The sensorimotor virtual reality interface "VibroTac" shown in Figure 5 [46] includes two manipulators with which the user can carry out virtual assembly and maintenance checks. With such a virtual reality interface, engineers can assess the maintainability or mountability of mechanical parts in complex products such as cars or airplanes.

         "VibroTac" notifies the user about the contact of his hand and objects in virtual reality: upon contact, the user is notified by vibration of the manipulator and a specific sound similar to a knock. Moreover, the deeper the user's hand penetrates into the virtual object, the stronger the user feels the vibration.

 

Conclusion

Virtual reality technologies are among the most promising today and are used in many industries, often in the form of various simulators and training applications, as well as in the army, medicine and the entertainment industry.

In this paper, the main types of virtual reality interfaces were considered, including sensory interfaces (among them touch interfaces), sensorimotor interfaces, and interfaces based on the user's motor skills.

The paper describes the features of the organization of virtual reality interfaces, noting that the creation of high-quality, well-designed and user-friendly interfaces requires taking into account many user characteristics, including cognitive features, the senses of smell and touch, and anthropometric features.

The virtual reality interface should be ergonomic: the user should not experience muscle fatigue. To this end, a combination of the different types of interfaces considered is possible, and in practice often used, providing the user with alternative controls and improved navigation, and improving the quality of the application as a whole.

In addition, the paper concludes that virtual reality interfaces must be designed with user safety in mind. The interface and user behavior in the virtual environment should not lead to injuries, falls, etc.

The paper also considers the phenomenon of cybersickness, which manifests itself as nausea and headache and arises when the user has a sensation of movement while his body remains motionless. The VR application interface should provide smooth navigation through the virtual environment to avoid such phenomena.

References
1. Berg, L.P., & Vance, J.M. (2017). Industry use of virtual reality in product design and manufacturing: a survey. Virtual Reality 21, 1–17. doi:10.1007/s10055-016-0293-9
2. Furht, B. (2008). Immersive Virtual Reality. Encyclopedia of Multimedia. Springer, Boston, MA. doi:10.1007/978-0-387-78414-4_85
3. Feng, Y., Duives, D.C., Hoogendoorn, S.P. (2022). Wayfinding behaviour in a multi-level building: A comparative study of HMD VR and Desktop VR. Advanced Engineering Informatics. doi:10.1016/j.aei.2021.101475
4. Tao, G., Garrett, B., Taverner, T. et al. (2021). Immersive virtual reality health games: a narrative review of game design. J NeuroEngineering Rehabil 18, 31. doi:10.1186/s12984-020-00801-3
5. Valentine, A., Van Der Veen, T., Kenworthy, P., Hassan, G. M., Guzzomi, A. L., Khan, R. N., & Male, S. A. (2021). Using head mounted display virtual reality simulations in large engineering classes: Operating vs observing. Australasian Journal of Educational Technology, 37(3), 119–136. doi:10.14742/ajet.5487
6. Cruz-Neira, C., Sandin, D., DeFanti, T.A., Kenyon, R.V., Hart, J.C. (1992). The CAVE: audio visual experience automatic virtual environment. Communications of the ACM. doi:10.1145/129888.129892
7. Bower, M., & Jong, M.S.-Y. (2020). Immersive virtual reality in education. Br. J. Educ. Technol., 51: 1981-1990. doi:10.1111/bjet.13038
8. Pensieri, C., Pennacchini M. (2014). Overview: Virtual Reality in Medicine. Journal For Virtual Worlds Research. doi:10.4101/jvwr.v7i1.6364
9. Klačková, I. et al. (2021). Virtual reality in Industry. IOP Conference Series: Materials Science and Engineering. doi:10.1088/1757-899X/1199/1/012005
10. Lele, A. (2011). Virtual reality and its military utility. Journal of Ambient Intelligence and Humanized Computing. doi:10.1007/s12652-011-0052-4
11. Liu, X., Zhang, J., Hou, G., Wang, Z. (2018). Virtual Reality and Its Application in Military. IOP Conference Series: Earth and Environmental Science. doi:10.1088/1755-1315/170/3/032155
12. Fuchs, P., Moreau, G., Guitton, P. (2011). Virtual Reality: Concepts and Technologies. CRC Press.
13. Shi, R. et al. (2021). Virtual Reality Sickness Mitigation Methods: A Comparative Study in a Racing Game. In Proceedings of the ACM on Computer Graphics and Interactive Techniques. doi:10.1145/3451255
14. Nelson-Quigg, J.M., Cello, K., Johnson, C. (2000). Predicting Binocular Visual Field Sensitivity from Monocular Visual Field Results. Investigative ophthalmology & visual science. Retrieved June 7, 2022, from https://www.researchgate.net/publication/12426081
15. Nagata, S. (1996). The binocular fusion of human vision on stereoscopic displays: Field of view and environment effects. Ergonomics. doi:10.1080/00140139608964547
16. Kamińska, D., Zwoliński, G., Laska-Leśniewicz, A. (2022). Usability Testing of Virtual Reality Applications—The Pilot Study. Sensors. doi:10.3390/s22041342
17. Lee, J., Sinclair, M., Gonzalez-Franco, M., Ofek, E., Holz, C. (2019). TORC: A virtual reality controller for in-hand high-dexterity finger interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. doi:10.1145/3290605.3300301
18. Yang, L., Huang, J., Feng, T., Hong-An, W., Guo-Zhong, D. (2019). Gesture interaction in virtual reality. Virtual Reality & Intelligent Hardware. doi:10.3724/SP.J.2096-5796.2018.0006
19. Cortes, G., Marchand, E., Ardouinz, J., Lécuyer, A. (2017). Increasing optical tracking workspace of VR applications using controlled cameras. 2017 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2017.7893313
20. Weiss, Y., Hepperle, D., Sieß, A., Wölfel, M. (2018). What User Interface to Use for Virtual Reality? 2D, 3D or Speech-A User Study. 2018 International Conference on Cyberworlds (CW). doi:10.1109/CW.2018.00021
21. Monteiro, P., Gonçalves, G., Coelho, H., Melo, M., Bessa, M. (2021). Hands-free interaction in immersive virtual reality: A systematic review. IEEE Transactions on Visualization and Computer Graphics. doi:10.1109/TVCG.2021.3067687
22. Tuena, C. et al. (2020). Usability issues of clinical and research applications of virtual reality in older people: A systematic review. Frontiers in Human Neuroscience. doi:10.3389/fnhum.2020.00093
23. Brivio, E., Serino, S., Negro, C.E. (2021). Virtual reality and 360° panorama technology: a media comparison to study changes in sense of presence, anxiety, and positive emotions. Virtual Reality. doi:10.1007/s10055-020-00453-7
24. Farmani, Y., & Teather, R. (2018). Viewpoint Snapping to Reduce Cybersickness in Virtual Reality. In Proceedings of Graphics Interface 2018. doi:10.20380/GI2018.21
25. LaViola, J.J. (2000). A discussion of cybersickness in virtual environments. SIGCHI Bull. doi:10.1145/333329.333344
26. Thériault, Lévis, Robert, Jean-Marc, Baron, Luc. (2004). Virtual Reality Interfaces for Virtual Environments. Virtual Reality International Conference. Retrieved June 7, 2022, from https://www.researchgate.net/publication/259576863
27. Huang, L., & Hou, Z. (2020). A Novel Virtual 3D Brush Model Based on Variable Stiffness and Haptic Feedback. Mathematical Problems in Engineering. doi:10.1155/2020/6942947
28. Okuya, Y. et al. (2018). ShapeGuide: Shape-Based 3D Interaction for Parameter Modification of Native CAD Data. Frontiers in Robotics and AI. doi:10.3389/frobt.2018.00118
29. Gong, L. et al. (2020). Interaction design for multi-user virtual reality systems: An automotive case study. Procedia CIRP. doi:10.1016/j.procir.2020.04.036
30. Dachselt, R., Hübner, A. (2006). A Survey and Taxonomy of 3D Menu Techniques. 12th Eurographics Symposium on Virtual Environments. doi:10.2312/EGVE/EGVE06/089-099
31. Choi, K-S., & Schmutz, B. (2020). Usability evaluation of 3D user interface for virtual planning of bone fixation plate placement. Informatics in Medicine Unlocked. doi:10.1016/j.imu.2020.100348
32. Shan, P., & Sun, W. (2021). Research on landscape design system based on 3D virtual reality and image processing technology. Ecological Informatics. doi:10.1016/j.ecoinf.2021.101287
33. Jorge, C.A.F. et al. (2010). Human-system interface based on speech recognition: application to a virtual nuclear power plant control desk. Progress in Nuclear Energy. doi:10.1016/j.pnucene.2009.08.003
34. Stedmon, A.W. et al. (2011). Developing speech input for virtual reality applications: A reality based interaction approach. International Journal of Human-Computer Studies. doi:10.1016/j.ijhcs.2010.09.002
35. McGlashan, S., & Axing, T. (1996). A Speech Interface to Virtual Environments. Computer Science. Retrieved June 7, 2022, from https://www.researchgate.net/publication/2575618
36. Gallotti, P., Raposo, A., Soares, L. (2011). v-Glove: A 3D Virtual Touch Interface. 2011 XIII Symposium on Virtual Reality. doi:10.1109/SVR.2011.21
37. Vera, J.E., Bayona, J.F., Torres, A. (2013). Touch interface analysis for virtual reality. TECCIENCIA. doi:10.18180/tecciencia.2013.14.8
38. Bull, M., Gilroy, P., Howes, D., Kahn, D. (2006). Introducing Sensory Studies. The Senses and Society. doi:10.2752/174589206778055655
39. Lee, J., Ahn, S.C., Hwang, J-I. (2018). A Walking-in-Place Method for Virtual Reality Using Position and Orientation Tracking. Sensors. doi:10.3390/s18092832
40. Stone, R.J. (1996). Position and orientation sensing in virtual environments. Sensor Review. doi:10.1108/02602289610108410
41. Zaldivar, U., Zaldivar, X., Marmolejo-Rivas, C., Murillo-Campos, D., León-Sánchez, C., Bernal-Guadiana, R., Martinez-Tirado, C. (2011). Optical-Mechanical Motion Capture System for Virtual Reality Applications. ASME 2011 World Conference on Innovative Virtual Reality. doi:10.1115/WINVR2011-5544
42. Park, K. (2013). A Ubiquitous Motion Tracking System Using Sensors in a Personal Health Device. International Journal of Distributed Sensor Networks. doi:10.1155/2013/298209
43. Li, B., Fu, H., Wen, D., Lo, W. (2018). Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm. Sensors. doi:10.3390/s18051626
44. Alsaeedi, N., & Wloka, D. (2019). Real-Time Eyeblink Detector and Eye State Classifier for Virtual Reality (VR) Headsets (Head-Mounted Displays, HMDs). Sensors. doi:10.3390/s19051121
45. Hickson, S., Dufour, N., Sud, A., Kwatra, V., Essa, I. (2019). Eyemotion: Classifying Facial Expressions in VR Using Eye-Tracking Cameras. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). doi:10.1109/WACV.2019.00178
46. Schätzle, S., Ende, T., Wüsthoff, T., Preusche, C. (2010). VibroTac: An ergonomic and versatile usable vibrotactile feedback device. 19th International Symposium in Robot and Human Interactive Communication. doi:10.1109/ROMAN.2010.5598694

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The reviewed article is devoted to a current direction in the development of virtual reality interfaces. The types of user interaction with the environment and engagement indicators are considered. The author pays attention to the design features of virtual reality interfaces, interaction scenarios, the need to plan alternative channels of system control, hardware and software, and ways to make the system easier for the user to control. The relevance of the work lies in the widespread use of virtual interfaces in various fields of science. The undoubted advantage of the article is its consideration of specific examples of interfaces and the analysis of their elements. In discussing voice interfaces, the author addresses the problem of their dependence on the user's vocabulary and the need to find a balance between the number of recognizable options and system errors. Much attention is paid to interfaces that use the user's motor skills; systems for various purposes are listed, but specific examples are not provided.

It is difficult to determine the scientific novelty, since the author did not carry out his own research; the article is of an overview nature. Style of presentation: the article uses professional terminology correctly, the wording is typical of review articles, and there are sufficient illustrations. There are no diagrams or calculated values. The structure of the article as a whole meets the requirements for a scientific publication; there are no original measurements, and the work is of an overview nature. The bibliography contains 46 sources, including peer-reviewed publications, of which 50% were published in the last 5 years.

Remarks. The article reviews a large number of components of virtual reality system design, so each receives only brief attention. The review of system requirements does not include an analysis of specific systems or examples. The cybersickness section seems superfluous in the context of the objectives of the article, but its content could be combined with the safety issues. The author pays great attention to the classification of interfaces; however, this section is not an original analysis but draws on publication [26], from which the mentioned information is taken. The article contains a link to that work; however, to assess the author's contribution, the classification should be supplemented with original analysis. When borrowing figures from the analyzed publications, a reference should be given not only in the body of the article but also in the caption (Figs. 1, 2). It is advisable to give examples of errors in voice interfaces caused by a wide user vocabulary. Touch interfaces are mentioned very briefly, limited to general concepts and without mention of the technical features of implementation. The captions of Figures 3 and 4 do not specify a particular system, and the text should provide more specific data for Figure 3. The author is recommended to work on the wording, making it more succinct and removing introductory and general words, and to change the wording of the conclusion regarding cybersickness, shifting the emphasis to the limitations of the method.

The article is of interest to specialists in the development of virtual reality applications and human-machine interaction interfaces, as well as to specialists in the use of specialized systems. The article needs to be finalized, after which it can be published in the journal "Software Systems and Computational Methods".