The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt


      2 The Impact of Multimodal-Multisensory Learning on Human Performance and Brain Activation Patterns

       Karin H. James, Sophia Vinci-Booher, Felipe Munoz-Rubke

      The human brain is inherently a multimodal-multisensory dynamic learning system. All information that is processed by the brain must first be encoded through sensory systems, and this sensory input can only be attained through motor movement. Although each sensory modality processes different signals from the environment in qualitatively different ways (sound waves, light waves, pressure, and so on), these signals are transduced into a common language in the brain. The signals are then associated and combined to produce our phenomenology of a coherent world. The brain therefore processes a seemingly unlimited amount of multisensory information for the purpose of interacting with the world. This interaction with the world, through the body, is multimodal. The body allows one to affect the environment through multiple motor movements (hand movements, locomotion, speech, gestures, etc.). These various actions, in turn, shape the multisensory input that the brain will subsequently receive. The feedforward-feedback loop that occurs every millisecond among sensory and motor systems is a reflection of these multisensory and multimodal interactions among the brain, body, and environment. As an aid to comprehension, readers are referred to this chapter's Focus Questions and to the Glossary for definitions of terminology.
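      To make the loop's structure concrete, the following is a minimal Python sketch. Its names (`sense`, `act`, `step`) and dynamics are entirely hypothetical illustrations, not anything specified in this chapter: each motor command alters the environment, which in turn alters the multisensory input encoded on the next cycle.

```python
# A schematic sensorimotor feedforward-feedback loop. All names and
# dynamics are hypothetical illustrations, not the chapter's model.

def sense(world_state):
    """Encode the current environment into multisensory signals."""
    return {
        "visual": world_state["object_position"],
        "haptic": world_state["contact_force"],
    }

def act(percept):
    """Feedforward: choose a motor command from the current percept."""
    return 0.1 * (0.0 - percept["visual"])   # close the distance to the object

def step(world_state, motor_command):
    """Feedback: the action changes the world, and thus the next percept."""
    world_state["object_position"] += motor_command
    world_state["contact_force"] = (
        1.0 if abs(world_state["object_position"]) < 0.05 else 0.0
    )
    return world_state

world = {"object_position": 1.0, "contact_force": 0.0}
for _ in range(100):              # each iteration ~ one sensorimotor cycle
    percept = sense(world)        # multisensory encoding
    command = act(percept)        # multimodal action selection
    world = step(world, command)  # action reshapes subsequent input

print(world["contact_force"])     # 1.0: contact achieved, haptic input changes
```

      The only point of the sketch is the circular dependency: perception guides action, and action reshapes subsequent perception.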

      In the following, we begin by delving deeper into how sensory signals are transduced in the brain and how multimodal activity shapes signal processing. We then provide samples of research demonstrating that multimodal interactions with the world, through action, facilitate learning. An overview of research on performance measured by overt behavioral responses in adult and developing populations is followed by examples of research on the effects that multimodal learning has on brain plasticity in adults and children. Together, the behavioral and neuroimaging literatures underscore the importance of learning through multimodal-multisensory interactions throughout human development.

      The ultimate utility of a sensory mechanism is to convey information to an organism in the service of eliciting environmentally appropriate action. An interesting question arises in consideration of the inherently multisensory nature of behavior:

      How is it that the human perceptual system provides us with seamless experiences of objects and events in our environment?

      The difficulty in answering this question lies in one's conception of the role of the human perceptual system. Approaching this question as a constructivist would lead to a major impasse: How is it that the brain is able to infer meaning from sensory input and translate among sensory modalities, given that these signals have little fidelity to the environmental stimulus by which they were evoked? Further, how are the signals combined, given that the signal of one sense is not directly comparable to the signal of another? This impasse is referred to as the binding problem and is a logical outcome of a constructivist approach to the role of the human perceptual system [Bahrick and Lickliter 2002]. If each sensory modality is transduced into a unique neuronal firing pattern, then the only way to infer the appropriate response to that particular set of sensory inputs is to effectively combine them into a unified percept. On the other hand, recent theories of perceptual development portray the human perceptual system as a multimodal system that responds to unisensory and multisensory inputs with differential weighting on modality-specific stimulus properties and amodal stimulus properties, respectively [Stein and Rowland 2011, Stein et al. 2014]. Formally, this theory is referred to as intersensory redundancy (see Figure 2.1).

      Intersensory redundancy is based upon the observation that an organism and its environment are structured such that the sensory systems of an active perceiver will experience certain consistencies and inconsistencies. The detection of these consistencies is related to the ability of an object or event to produce sensory stimuli in more than one sensory system in a spatiotemporally consistent manner; the detection of inconsistencies is related to the inability of an object or event to produce such sensory stimulation. Whether an object or event can produce spatiotemporally consistent multisensory stimulation is determined by whether the particular stimulus has properties that can be detected by more than one sensory system. Properties of sensory stimuli that may be detected in more than one sensory system are referred to as amodal stimulus properties. For instance, shape and texture are perceived in both visual and haptic modalities, just as intensity and spatiotemporal information may be perceived in both visual and auditory modalities. Properties of sensory stimuli that may only be detected in one sensory system are referred to as modality-specific properties. For example, color can only be perceived in the visual modality, whereas tone can only be perceived in the auditory modality. Throughout ontogeny, the interaction between the neural structure of the organism and its environment results in a neural system that processes certain properties best in multisensory contexts and others best in unisensory contexts.
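      As a concrete, if highly simplified, sketch of this idea, the following Python example (all names and signals are our own hypothetical illustrations, not material from the chapter) treats a shared intensity envelope over time as an amodal property that can be detected by correlating visual and auditory input, whereas modality-specific properties such as color and pitch have no counterpart in the other sense to compare against.

```python
# Hypothetical sketch of intersensory redundancy detection: an amodal
# property (shared temporal structure of intensity) can be compared
# across two senses; modality-specific properties cannot.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

# A single event (e.g., a bouncing ball) drives both senses with the
# same temporal structure -- an amodal property.
event_envelope = np.maximum(0, np.sin(8 * np.pi * t))

visual_intensity = event_envelope + 0.1 * rng.standard_normal(t.size)
auditory_intensity = event_envelope + 0.1 * rng.standard_normal(t.size)

# Modality-specific properties have no counterpart in the other sense.
visual_color = "red"          # visual only
auditory_pitch_hz = 440.0     # auditory only

def amodal_redundancy(a, b):
    """Correlation of two intensity envelopes: high when consistent."""
    return float(np.corrcoef(a, b)[0, 1])

print(amodal_redundancy(visual_intensity, auditory_intensity))   # ~0.9: redundant
print(amodal_redundancy(visual_intensity,
                        rng.permutation(auditory_intensity)))    # ~0.0: unrelated
```

      When the two streams are driven by the same event, their correlation is high (intersensory redundancy); shuffling one stream destroys the spatiotemporal consistency, and the redundancy with it.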

      Figure 2.1 A. Diagram of the superior colliculus of the cat with associated cortical connections. B. The three sensory representations in the superior colliculus (visual, auditory, and somatosensory) are organized in an overlapping topographical map: space is coded along the anterior-posterior axis, and stimuli in space are coded by multiple modalities. In adults, this leads to enhancement of neuronal activity and an increase in behavioral responses. C. In early development, the unisensory inputs converge to produce multisensory neurons, but these neurons cannot yet integrate their cross-modal inputs; this latter function develops only at around 4 weeks of age. (From Stein et al. [2014])

      The importance of the distinction between multisensory and unisensory perceptual input is most evident in consideration of the ontogeny of multisensory neurons in the evolutionarily early subcortical structure of the superior colliculus (SC). Although multisensory neurons have been found in several brain regions and in many species, the anatomical nature of this region has been most extensively studied in cats [Stein et al. 2014]. The SC is unique in that decidedly unisensory neurons from the retina, ear, and/or skin may all converge onto a single SC neuron [Wallace et al. 1992, 1993, Stein and Meredith 1993, Fuentes-Santamaria et al. 2009]. Note that convergence is not the same as integration, which has recently been shown to be experience-dependent, at least in the sequentially early sensory convergence region of the SC [Wallace et al. 1992, Stein and Rowland 2011, Stein et al. 2014]. In other words, neurons from several sensory systems may converge at one point, but whether or not these signals are integrated into a single output depends upon the history that a given neuron has had of receiving sensory signals. SC neurons in kittens are largely unisensory and only become multisensory if they receive stimulation from multiple sensory systems at the same time. Perhaps even more interesting from a developmental perspective, the emergence of multisensory neurons and their receptive fields is also experience-dependent [Stein and Rowland 2011]. Receptive fields not only change their size through experience; they also change their sensitivity to physical locations. Repeated exposure to a visual stimulus and an auditory stimulus that occur at the same time and at the same physical location will result in overlapping receptive fields. Overlapping receptive fields from more than one sensory system result in faster, stronger, and more reliable neural responses for that combination of stimuli at that location.
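      As a rough illustration of these points, the toy model below sketches a single multisensory SC neuron in Python. The Gaussian receptive fields, the 0.1 drive threshold, and the superadditive enhancement rule are our own illustrative assumptions, not parameters from the studies cited above; the `integrates` flag merely mimics the experience-dependent difference between converging inputs that are only summed and inputs that are genuinely integrated.

```python
# Toy model of a multisensory superior colliculus (SC) neuron.
# Receptive-field widths, gains, and the enhancement rule are
# illustrative assumptions, not values from the cited studies.
import math

def receptive_field(stimulus_deg, center_deg, width_deg=10.0):
    """Gaussian spatial receptive field: drive falls off with distance."""
    return math.exp(-((stimulus_deg - center_deg) ** 2) / (2 * width_deg ** 2))

def sc_response(visual_deg=None, auditory_deg=None,
                v_center=0.0, a_center=0.0, integrates=True):
    """Firing rate (arbitrary units) of one model SC neuron.

    integrates=False mimics an immature neuron whose unisensory
    inputs converge but are not yet integrated (summed only).
    """
    v = receptive_field(visual_deg, v_center) if visual_deg is not None else 0.0
    a = receptive_field(auditory_deg, a_center) if auditory_deg is not None else 0.0
    if integrates and v > 0.1 and a > 0.1:
        # Superadditive enhancement when both fields are driven together.
        return v + a + 1.5 * v * a
    return v + a

aligned = sc_response(visual_deg=0.0, auditory_deg=0.0)     # overlapping fields
v_alone = sc_response(visual_deg=0.0)
a_alone = sc_response(auditory_deg=0.0)
print(aligned > v_alone + a_alone)                          # True: enhancement

disparate = sc_response(visual_deg=0.0, auditory_deg=40.0)  # outside auditory RF
print(disparate <= v_alone + a_alone)                       # True: no enhancement

immature = sc_response(visual_deg=0.0, auditory_deg=0.0, integrates=False)
print(immature == v_alone + a_alone)                        # converges, no integration
```

      Spatially aligned visual and auditory stimuli drive overlapping fields and yield a response greater than the sum of the unisensory responses, while a spatially discrepant pair, or an immature (non-integrating) neuron, does not.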
