The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt

Published by ACM Books.
we can point to its original location with great accuracy. If we simply imagine moving to the new location [Rieser et al. 1986], or see the visual information that would occur if we were to move but do not actually move, we cannot accurately locate the target object [Klatzky et al. 1998]. Even if an individual is moved in a wheelchair by another person [Simons and Wang 1998] or is moved passively in a virtual-reality environment, object localization is worse than if the movement were self-generated [Christou and Bülthoff 1999].

       2.3.3 Three-dimensional Object Structure

      Learning object structure is facilitated by active interactions in numerous ways. For instance, slant perception [Ernst and Banks 2002] and shape-from-shading [Adams et al. 2004] both provide cues for depth perception, and both of these perceptual competencies are facilitated by manual, active interaction with the stimuli. Remembering object structure is also facilitated by active manipulation of objects with one’s hands. In a series of studies, novel objects were studied through either an active interaction, in which participants rotated 3D images on a computer screen, or a passive interaction, in which participants observed object rotations that had been generated by another participant. By pairing active and passive learning between subjects, one could compare learning of a given object through visual and haptic sensation with involvement of the motor system (multimodal) against learning through visual sensation alone (unisensory). Subsequently, participants were tested on object recognition and object matching (see Figure 2.2). Results were straightforward: when objects were learned through active interactions, recognition was enhanced relative to learning through passive interactions [Harman et al. 1999, James et al. 2001, 2002]. These results demonstrate that understanding object structure is facilitated by multimodal exploration relative to unimodal exploration. We hypothesized from these results that active interactions allow observers to control which parts of the object they see and in what sequence. This control allows an observer to “hypothesis test” regarding object structure and may be guided by preferences for certain viewpoints that the observer’s perceptual system has learned to be informative.

      Understanding object structure requires some notion that an object looks different from different viewing perspectives and that this does not change its identity (in Figure 2.2, left, the upper-right quadrant presents the same object from different viewpoints; the lower-right presents different objects from different viewpoints). This form of object constancy is referred to as viewpoint-independent object recognition. Many researchers and theorists agree that one way we achieve viewpoint-independent recognition is through mental rotation: the rotation of an incoming image to match a stored representation. A considerable amount of research has shown that the ability to mentally rotate an object image is related in important ways to manual rotation, a specific form of active interaction that involves the volitional rotation of objects with the hands. Mental rotation is facilitated more by manual rotation practice than by mental rotation practice, and, further, mental and manual rotations rely on common mechanisms [Wohlschläger and Wohlschläger 1998, Wexler 1997, Adams et al. 2011]. Thus, multimodal experience contributes to the development of mental rotation ability, a basic process in spatial thinking, which in turn increases an observer’s ability to understand the environment.

      Figure 2.2 Examples of novel object decision: same or different object? (From James et al. [2001])

      Thus far, we have reviewed how physical interactions with the world affect our learning and understanding of existing objects. Symbols are objects with several qualities that make learning and understanding them different from learning and understanding 3D objects. When trying to understand symbols and symbolic relations, we must take something arbitrary and relate it to real-world entities or relationships. Evidence suggests that symbol understanding is facilitated when symbols are actively produced. Producing symbols by hand, using multiple modalities during encoding (seeing them and producing them manually), facilitates learning symbol structure and meaning more than visual inspection alone. Lakoff and Nunez [2000, p. 49] argue that for symbols to be understood they must be associated with “something meaningful in human cognition that is ultimately grounded in experience and created via neural mechanisms.” Here, grounding in experience refers to theories of grounded cognition and to the self-generation of meaningful actions, whose purpose is to control the state of the organism within its physical environment by providing it with interpretable sensory stimulation [Cisek 1999]. An important point, then, is that this grounding not only affects our learning and understanding of existing objects by providing expected action-sensory contingencies; it also guides the active production of objects.

      One type of object production that has been shown to facilitate symbol learning is writing by hand. Because the production of symbols has the defined purpose of communicating through the written word, humans have created writing tools that allow a greater degree of accuracy in the written form and that allow the transfer of permanent marks through ink or similar media. The manipulation of writing implements introduces yet another source of sensory information: haptic and kinesthetic cues that can augment and change motor production and visual perception. Visual-motor guidance of the writing device is still required, however, and this coupling of multimodal-multisensory information facilitates learning in much the same manner as other forms of active interaction. Handwriting symbols has been repeatedly correlated with increased competencies in symbol understanding (e.g., [Berninger et al. 1998, Richards et al. 2011, Li and James 2016]). Although experimental studies in adults are somewhat limited, there is nonetheless a growing body of research implicating the importance of self-generated action through writing for symbol understanding. In a series of pivotal and still highly influential studies, Freyd and colleagues showed that symbol recognition is significantly affected by our experiences with writing. For example, we are better able to identify static letters of the alphabet whose stroke directions and trajectories conform to how an individual creates those forms through handwriting, compared to similar letters that do not conform to the observer’s own stroke directions and trajectories [Freyd 1983, Babcock and Freyd 1988, Orliaguet et al. 1997].

      The interplay between the motor production of letters and the visual perception of letters has been demonstrated in individuals who have letter perception deficits. For example, individuals with pure alexia have difficulty recognizing letters. However, in some cases, their letter recognition can be facilitated by tracing the letter in the air with their finger or making hand movements that mimic the letter shape [Bartolomeo et al. 2002, Seki et al. 1995]. The interpretation of these case studies rests upon the fact that pure alexia results from damage to a specific brain region. The fact that damage to one location can result in a deficit in letter recognition that can be recovered to some extent by active interaction with the symbol’s form suggests that both visual and motor brain systems are involved in the perception of a symbol. In these cases, the patients’ actions facilitated their visual perceptions, evidence that the neural mechanisms subserving letter perception span both visual and motor brain regions as a result of prior multimodal experience.

      Moreover, the production of symbols may rely upon the same neural mechanisms as the recognition of symbols. When adults were asked to identify letters and shapes presented in varying degrees of visual noise, their detection thresholds increased (i.e., performance worsened) if they were simultaneously writing a perceptually similar letter or shape, compared to when they were writing a perceptually dissimilar shape [James and Gauthier 2006]. The interpretation of this study rests upon theories of neural interference, which hold that if one behavioral task is significantly affected by another, concurrent task, then the two share common mechanisms. The fact that an interference effect was observed, rather than a facilitation effect, suggests that the production of a form and the perception of the form overlapped in terms of their
