Digital Transformations in the Challenge of Activity and Work. Group of authors

of some aspect of the real world” (p. 5).

      User activity is described as sensory-motor and cognitive activity. We speak of sensory-motor activity because, on the one hand, the user, in his/her interaction with virtual reality, perceives the virtual world and virtual entities through different senses (sight, hearing, proprioception, etc.) and, on the other hand, he/she acts physically within the world in which he/she is immersed. His/her activity is also cognitive, since he/she processes the information he/she perceives, memorizes it, makes decisions and undertakes actions. Note that there are perception–cognition feedback loops: a stimulus given by the computer part of virtual reality can lead to a motor action by the user, which in turn produces new sensory stimuli that modify the user’s intentions.
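The feedback loop described above (stimulus, cognitive processing, motor action, new stimuli) can be sketched in a few lines of Python. This is an illustrative toy, not part of the chapter: the world, the stimulus and the decision rule are all hypothetical names.

```python
# Toy sketch of a perception-cognition-action feedback loop in VR.
# All names are illustrative; no real VR toolkit is used.

class VirtualWorld:
    """Minimal stand-in for the computed part of a VR system."""
    def __init__(self):
        self.ball_position = 0.0

    def render_stimulus(self):
        # Sensory channel: the system presents the current world state.
        return {"ball_position": self.ball_position}

    def apply_action(self, displacement):
        # Motor channel: the user's physical action updates the world,
        # which changes the next stimulus presented to the user.
        self.ball_position += displacement


def user_decision(stimulus):
    """Toy cognitive step: perceive, process, decide on a motor action."""
    # Move toward the origin until the ball is (almost) centred.
    return -0.5 * stimulus["ball_position"]


world = VirtualWorld()
world.ball_position = 8.0
for _ in range(10):                      # ten iterations of the loop
    stimulus = world.render_stimulus()   # perception
    action = user_decision(stimulus)     # cognition / decision
    world.apply_action(action)           # motor action -> new stimuli

print(round(world.ball_position, 3))     # the loop has converged near 0
```

Each pass through the loop is one turn of the cycle described in the text: a stimulus leads to an action, which produces a new stimulus that modifies the user's next decision.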

      3.2.3. A functional definition of virtual reality

      Another definition of virtual reality includes its potential functionalities. It allows us to: “extract oneself from physical reality to change virtually the time, place and/or type of interaction: interaction with an environment simulating reality or interaction with an imaginary or symbolic world” (Fuchs et al. 2006, p. 7).

      Indeed, the use of virtual reality makes it possible to bypass the physical laws of our world. In virtual reality, time works differently; it is possible, for example, to stop time while performing an action, or to go backwards in time. Virtual reality also makes it possible to change place. While manipulating visualization and control devices in the physical world (imagine a user in an immersive room manipulating a force-feedback arm), the user has the feeling of being in the virtual environment in which he/she is immersed (e.g. in an operating theater). Interaction with virtual reality is also different from interaction with the physical world, since it responds to the rules and workings of the application. For example, movements can be made by holding a button on a joystick rather than by actually moving in the physical world.
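As an illustration of this bending of time, a simulation clock that can be paused or stepped backwards might look like the following sketch. All names are assumptions for illustration, not taken from any VR toolkit.

```python
# Hypothetical sketch: a simulation clock that, unlike physical time,
# can be stopped while the user performs an action, or rewound.

class SimulationClock:
    def __init__(self, step=0.1):
        self.t = 0.0          # simulated time, in seconds
        self.step = step
        self.paused = False

    def advance(self):
        if not self.paused:
            self.t += self.step

    def pause(self):
        # "Stop time" while the user performs an action.
        self.paused = True

    def resume(self):
        self.paused = False

    def rewind(self, seconds):
        # "Go backwards in time", never before the start.
        self.t = max(0.0, self.t - seconds)


clock = SimulationClock()
for _ in range(5):
    clock.advance()           # t is now ~0.5
clock.pause()
clock.advance()               # no effect while paused
clock.resume()
clock.rewind(0.3)             # step back in simulated time
print(round(clock.t, 1))
```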

      3.2.4. A technical definition of virtual reality

      We will end with a technical definition of virtual reality. It is: “a scientific and technical field exploiting computing (1) and behavioral interfaces (2) in order to simulate in a virtual world, the behavior of 3D entities, which interact in real time with each other and with one or more users, in pseudo-natural immersion (3) via sensor-motor channels” (Fuchs et al. 2006, p. 8).

      1) The term “computing” refers to all hardware and software parts of the system.

      2) Behavioral interfaces are of three types:

       – sensory interfaces, which provide information to the user about changes in the virtual environment. For example, a change in color may inform a user simulating an assembly operation that the tool he/she is using has collided with another object;

       – motor interfaces, which inform the system of the user’s motor actions. For example, the system can exploit data on the user’s position in an immersive room;

       – sensory-motor interfaces, which provide information to both the computer system and the user. For example, a force feedback arm informs the system about the user’s gesture, and at the same time forces the user to make efforts to simulate the resistance of the part being pierced.

      3) Finally, we speak of “pseudo-natural” immersion, because the user does not act the same way in the virtual environment as he/she would naturally act in the physical world. There are sensor-motor biases in the interaction with virtual reality: for example, instead of walking from one point to another in the virtual environment, the user can teleport. Furthermore, the virtual environment does not necessarily provide all the sensory stimuli of the physical world.
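The three behavioral interface types could be modeled, very schematically, as follows. The class names and the force computation are illustrative assumptions, not an actual VR API.

```python
# Hypothetical sketch of the three behavioral interface types described
# in the notes above; names and values are illustrative only.

class SensoryInterface:
    """System -> user: presents changes in the virtual environment."""
    def present(self, event):
        return f"display: {event}"


class MotorInterface:
    """User -> system: reports the user's motor actions."""
    def read(self, raw_position):
        return {"position": raw_position}


class SensorimotorInterface(SensoryInterface, MotorInterface):
    """Both directions at once, like a force-feedback arm: it reports the
    user's gesture and simultaneously pushes back against it."""
    def feedback_force(self, resistance, gesture_force):
        # Net force felt by the user, e.g. while piercing a resisting part.
        return gesture_force - resistance


arm = SensorimotorInterface()
print(arm.read(0.4)["position"])       # motor channel: user -> system
print(arm.present("collision"))        # sensory channel: system -> user
print(arm.feedback_force(2.0, 5.0))    # sensorimotor coupling
```

The double inheritance mirrors the text's point that a sensory-motor interface is simultaneously an input and an output device.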

      It is also important to note that the definition of virtual reality is based on two conditions: the user must be immersed in a virtual environment, and he/she must also interact within this environment in real time. Therefore, 360° videos or 3D cinema are excluded from this definition, since the user is immersed in a world without really being able to interact with it.

      Virtual reality systems are computer systems that include various peripherals. These are referred to as devices and can be classified in four categories (Burkhardt 2003):

       – display devices;

       – motion and position capture devices;

       – proprioceptive and cutaneous feedback devices;

       – sound input and presentation devices.

      Each of these device categories will be discussed in the following sections.

      3.3.1. Visual presentation devices

      Visual presentation devices are the most common. It is rare to find systems that do not mobilize vision, although we can cite systems developed for visually impaired people that incorporate spatialized sounds instead.

      Virtual reality systems can be classified into three categories based on the degree of immersion of their visual presentation devices (Mujber et al. 2004). Non-immersive (or desktop-VR) systems consist of conventional computer monitors and do not use specific hardware. Semi-immersive systems refer to widescreen displays, wall projection systems, interactive tables and head-mounted displays without stereoscopic vision. Stereoscopic vision means that the user has 3D vision of the virtual environment because a different image is displayed for each eye. Finally, fully immersive systems include head-mounted displays with stereoscopic vision and CAVE-type immersive rooms. In the latter, users view the virtual environment in cubic rooms where three to six sides are screens.
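The three-way classification by Mujber et al. (2004) can be summarized as a simple lookup table. The device labels below are illustrative simplifications of the categories named above, not terms from the cited work.

```python
# Toy lookup from visual display device to degree of immersion,
# following the three categories of Mujber et al. (2004).
# Device labels are illustrative assumptions.

IMMERSION_LEVEL = {
    "desktop_monitor": "non-immersive",
    "widescreen_display": "semi-immersive",
    "wall_projection": "semi-immersive",
    "interactive_table": "semi-immersive",
    "hmd_mono": "semi-immersive",        # head-mounted display, no stereo
    "hmd_stereo": "fully immersive",     # stereoscopic head-mounted display
    "cave": "fully immersive",           # CAVE-type immersive room
}

def classify(device):
    """Return the immersion category for a given display device."""
    return IMMERSION_LEVEL.get(device, "unknown")

print(classify("desktop_monitor"))
print(classify("hmd_stereo"))
```

Note how the same head-mounted display shifts category depending on whether it offers stereoscopic vision, which is the criterion the text highlights.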

      In summary, non-immersive systems do not include specific technologies and do not offer stereoscopic vision, whereas semi-immersive and fully immersive systems use specific technologies and can offer users stereoscopic vision.

      3.3.2. Motion and position capture devices

      Motion and position capture devices are also used. They provide the system with real-time information on the user’s actions and position, as well as their evolution. Sensors can locate the user’s entire body, a part of their body or an object held by the user (Fuchs and Mathieu 2011). Different types of sensors exist: mechanical sensors, electromagnetic sensors and optical sensors.
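A minimal sketch of this idea, with assumed names (no real tracking API is used): a tracker delivers positions in real time, and the environment reacts, for example by detecting that the user's hand has reached a virtual object.

```python
import math

# Hypothetical sketch: an optical sensor feeds the system real-time
# position data, and the virtual environment reacts to it.

class OpticalTracker:
    """Toy stand-in for an optical position sensor tracking one hand."""
    def __init__(self, samples):
        self._samples = iter(samples)

    def poll(self):
        # In a real system this would read hardware; here we replay
        # pre-recorded (x, y, z) positions in metres.
        return next(self._samples)


def update_environment(position, target=(0.0, 0.0, 0.0), threshold=0.05):
    """React to the user's hand: detect when it reaches a virtual object."""
    distance = math.dist(position, target)
    return "grab" if distance < threshold else "idle"


tracker = OpticalTracker([(0.5, 0.2, 0.1), (0.1, 0.0, 0.0), (0.01, 0.0, 0.02)])
for _ in range(3):
    print(update_environment(tracker.poll()))
```

The same polling structure would apply to mechanical or electromagnetic sensors; only the hardware behind `poll()` changes.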

      These sensors allow the virtual environment to react to the user’s actions. For example, the system can detect