The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt

Publisher: ACM Books


the future [Stein et al. 2014]. The implications of these phenomena are that events that violate the learned correspondences among sensory modalities are readily detected and, because they converge at such an early stage in sensory processing, are difficult for the perceiver to overcome. Furthermore, projections from SC are widely distributed throughout the cortex and are one of the major pathways by which sensory information reaches the cortex where, presumably, higher-level cognitive functions are carried out, such as object and symbol recognition.

      Neural plasticity, the ability of neuronal structures, connections, and processes in the brain to undergo experience-dependent change, is well documented in adults with respect to the initial learning of multimodal-multisensory mappings. Relearning these mappings, however, is a decidedly more laborious process, though both the initial and relearning stages follow a few basic principles. Although several principles could be mentioned here, we will focus on two: multisensory enhancement and multisensory depression. First, the presence of an object or event in the environment can be inferred simply from the spatiotemporal coincidence of stimuli; such coordinated environmental stimuli contribute to multisensory enhancement, an increase in the system’s ability to detect the same object or event from multisensory (amodal) input in the future. Second, the separability of objects or events can be inferred simply from spatiotemporal disparities among stimuli; such uncoordinated environmental stimuli contribute to multisensory depression, a decrease in the system’s ability to detect either of those objects or events from multisensory input in the future. In the case of multisensory enhancement, amodal cues are emphasized; in the case of multisensory depression, modality-specific cues are emphasized instead. Thus, the SC and its connections appear to function as foundational attention mechanisms for orienting to salient environmental stimulation based on the system’s history of sensory experiences [Johnson et al. 2015].
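The two principles above can be caricatured in a few lines of code. The sketch below is purely illustrative, not a biological model: the coincidence windows, learning rate, and function names are all hypothetical, chosen only to show gain rising under coincident cross-modal input (enhancement) and falling under disparate input (depression).

```python
# Illustrative sketch only: a detector whose gain for a cross-modal pairing
# grows when two sensory events are spatiotemporally coincident
# (multisensory enhancement) and shrinks when they are disparate
# (multisensory depression). All constants and names are hypothetical.

def update_gain(gain, dt_ms, dx_deg, window_ms=100.0, window_deg=5.0, rate=0.1):
    """Nudge the pairing's detection gain up when two sensory events fall
    within both a temporal and a spatial coincidence window, down otherwise."""
    coincident = abs(dt_ms) <= window_ms and abs(dx_deg) <= window_deg
    return gain + rate if coincident else max(0.0, gain - rate)

gain = 1.0
# Repeated coincident audio-visual events: gain rises (enhancement).
for _ in range(5):
    gain = update_gain(gain, dt_ms=20, dx_deg=1)
# Repeated disparate events: gain falls back (depression).
for _ in range(5):
    gain = update_gain(gain, dt_ms=400, dx_deg=30)
```

The asymmetric floor at zero reflects only that detection ability cannot go negative; the real system's dynamics are of course far richer.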

      These principles reflect our knowledge of the cellular mechanisms of neural plasticity. The principle of Hebbian learning is foundational to theories of experience-dependent brain change, as it proposes that axonal connections between neurons undergo activity-dependent changes. It has two basic tenets: (1) when two neurons repeatedly fire in a coordinated manner, the connections between them are strengthened, effectively increasing the likelihood that they fire together in the future; and (2) when two neurons repeatedly fire in an uncoordinated manner, the connections between them weaken, effectively reducing that likelihood [Hebb 1949]. The relevance of this theory to experience-dependent brain changes is most readily understood by considering the differences in the brain systems supporting object recognition that would result from active, as opposed to passive, behaviors. For this discussion, the crucial difference is that in active behaviors the perceiver performs an action on stimuli in the environment. The crucial implication of that difference is that active behaviors are inherently multimodal, involving both action systems and perceptual systems, and multisensory, involving haptic and visual inputs at a minimum. Passive behaviors, on the other hand, often stimulate only one sensory modality, rendering them unisensory (see Chapter 3).
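The two tenets can be sketched as a toy weight-update rule. The learning rate, firing patterns, and function name here are hypothetical, and the rule deliberately strips Hebbian learning down to its two cases:

```python
# Minimal Hebbian sketch: the weight between two units strengthens when they
# fire together and weakens when they fire apart, mirroring the two tenets
# above. Learning rate and firing sequences are hypothetical.

def hebbian_update(w, pre, post, lr=0.05):
    """Tenet 1: coordinated firing (both units active) strengthens the
    connection. Tenet 2: uncoordinated firing (exactly one unit active)
    weakens it. If neither fires, the weight is unchanged."""
    if pre and post:
        return w + lr
    if pre != post:
        return max(0.0, w - lr)
    return w

w = 0.5
for pre, post in [(1, 1)] * 10:          # coordinated firing: w grows
    w = hebbian_update(w, pre, post)
for pre, post in [(1, 0), (0, 1)] * 5:   # uncoordinated firing: w shrinks
    w = hebbian_update(w, pre, post)
```

Because the strengthening and weakening steps are symmetric here, ten coordinated updates followed by ten uncoordinated ones return the weight to its starting value; real synaptic plasticity is not so tidy.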

      Therefore, active interactions with the environment, as opposed to passive ones, are inherently multisensory and multimodal. Not only do they entail input from various effectors (e.g., hands, eyes) according to the action (e.g., reaching to grasp, saccading), but they also simultaneously produce input to numerous sensory modalities (e.g., somatosensation, vision). Active interactions thus produce multisensory information that allows for coactivation of signals, resulting in multisensory enhancement. Beyond multisensory enhancement, however, active interactions have been argued to be necessary for any type of percept, be it unisensory or multisensory. That is, without physical movement, sensory information quickly fades, because sensory receptor organs and neurons stop responding to repeated stimulation. One well-known example is the stabilized-image phenomenon: if eye movements are inhibited, producing a stable retinal image, the resulting percept dissipates within seconds (for review see [Heckenmueller 1965]). Although outlining sensory and neural adaptation is beyond the scope of this chapter, it is important to note that action is necessary for the brain to code change, and thus to detect similarities and differences in incoming percepts.
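The stabilized-image effect can be illustrated with a minimal change-detector. This is a sketch assuming a simple exponential adaptation level, not a model of retinal physiology; the decay constant and names are hypothetical:

```python
# Sketch of the adaptation behind the stabilized-image phenomenon: a unit
# that responds to *change* in its input stops firing when the stimulus is
# held constant (as when eye movements are suppressed), while a stimulus
# refreshed by movement keeps evoking responses. Constants are hypothetical.

def adapting_response(stimulus, decay=0.5):
    """Return responses of a unit whose output is the (non-negative)
    difference between the current input and a running adaptation level."""
    level, out = 0.0, []
    for s in stimulus:
        out.append(max(0.0, s - level))
        level += decay * (s - level)   # adaptation level catches up to input
    return out

constant = adapting_response([1.0] * 8)     # stabilized image: response fades
moving = adapting_response([1.0, 0.0] * 4)  # movement refreshes the signal
```

With a constant input the response halves at every step and the percept "dissipates"; with an alternating input the adaptation level never catches up, so the response on each onset stays large.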

      What does this knowledge of the mechanisms underlying experience-dependent brain plasticity tell us about learning in general? One argument for the usefulness of these findings is that they have led to numerous experiments showing that active interactions with the environment facilitate learning more than passive “interactions” do. In their pivotal work, Held and Hein [1963] showed that kittens initially reared without visual experience were unable to learn to make sense of their visual world, even after their vision was restored, unless they were free to move about of their own volition. For comparison with the active learning experience given to this cohort, an experimental apparatus moved a separate cohort of kittens, who therefore experienced “passive” movements. The crucial difference is that the first cohort received visual stimulation contingent on their own self-generated movements, whereas the second cohort experienced visual stimulation that was not contingent on their own movements. This study spurred a wealth of research on the role of active vs. passive experience in learning, and these lines of research have largely confirmed the value of self-generated action for visual perception in many domains.

      In the context of the intersensory redundancy discussed earlier, active learning is the avenue by which multimodal-multisensory signals in the brain arise from the body and environment. In what follows, our goal is to provide a brief overview of the empirical work on how these multimodal-multisensory signals affect learning. We briefly outline empirical work showing that active interactions facilitate learning, at times in surprising ways, drawing on both behavioral and neuroimaging findings. The behavioral data show the usefulness of learning through action in numerous domains, and the neuroimaging work suggests the mechanisms, at a brain-systems level, that support active learning.

      Actively interacting with one’s environment involves multisensory, multimodal processing. It is therefore not surprising that these types of interactions facilitate learning in many domains. Here we briefly review findings from behavioral experiments that demonstrate the far-reaching beneficial effects of active experience on learning.

       2.3.1 Behavioral Research in Adults

      Visual Perception. Although somewhat controversial, the majority of research in psychology separates sensation from perception. A primary separation stems from findings that perception relies on learning: information from prior experience changes how we perceive. When those experiences involve active interaction, we can conclude that perception is changed by action. Our first example comes from the phenomenon of size constancy, our ability to infer that an object maintains its familiar size despite large changes in its retinal size with distance. The distance cues that support size constancy can come from object movement, observer movement, or a combination of the two. Research has shown that size constancy depends on observer movement rather than object movement: if apparent size is manipulated through object movement, constancy is worse than if it is manipulated through observer movement [Combe and Wexler 2010].
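The size-distance relation behind size constancy can be worked through numerically. This is a standard trigonometric back-of-the-envelope sketch with illustrative numbers, not the procedure used by Combe and Wexler:

```python
# Size constancy as an inference from retinal (visual) angle plus a distance
# cue: physical size = 2 * distance * tan(angle / 2). An object whose retinal
# angle halves as its distance doubles is inferred to keep roughly the same
# physical size. Numbers below are illustrative.
import math

def inferred_size(retinal_angle_deg, distance_m):
    """Infer an object's physical size (m) from its visual angle and distance."""
    return 2 * distance_m * math.tan(math.radians(retinal_angle_deg) / 2)

near = inferred_size(retinal_angle_deg=10.0, distance_m=2.0)
far = inferred_size(retinal_angle_deg=5.0, distance_m=4.0)
# The retinal image shrank by half, yet the inferred size is nearly constant.
```

The point of the sketch is that the inference hinges entirely on the quality of the distance estimate, which is exactly what observer movement supplies.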

       2.3.2 Spatial Localization

      It seems intuitive that locomotion is important for understanding the three-dimensional space around us. However, several lines of research support the claim that self-generated movement is the key to understanding spatial location of objects. For instance, if we encode the location of an object and then are blindfolded and
