so far regarding neural functioning from comparative work with other species and human neuroimaging studies. This is especially true when we consider the effects that multimodal-multisensory experiences have on learning. As outlined in the introduction, learning in the brain occurs through association of inputs. We argue here that human action serves to combine multisensory inputs, and as such, is a crucial component of learning. This claim is based on the assumption that there are bidirectional, reciprocal relations between perception and action (e.g., [Dewey 1896, Gibson 1979]). From this perspective, action and perception are intimately linked: the ultimate purpose of perception is to guide action (see, e.g., [Craighero et al. 1996]), and actions (e.g., movements of the eyes, head, and hands) are necessary in order to perceive (e.g., [Campos et al. 2000, O’Regan and Noë 2001]). When humans perceive objects, they automatically generate actions appropriate for manipulating or interacting with those objects if they have had active interactions with them previously [Ellis and Tucker 2000, Tucker and Ellis 1998]. Therefore, we know that actions and perceptions form associated networks in the brain under some circumstances. Thus, perception and action become linked through our multimodal experiences.

      Figure 2.5 Examples of handwritten letters by 4-year-old children. The top row shows traced letters; the bottom two rows show letters written freehand.

      In what follows, we provide some evidence of this linking in the brain and the experiences that are required to form these multimodal-multisensory networks. We will focus on functional magnetic resonance imaging (fMRI) as a method of human neuroimaging, given its high spatial resolution of patterns of neural activation, safety, widespread use in human research, and applicability to research involving the neural pathways created through learning. We will focus on a multimodal network that includes: (1) the fusiform gyrus, a structure in the ventral temporal-occipital cortex that has long been known to underlie visual object processing and that becomes tuned, with experience, for processing faces in the right hemisphere (e.g., [Kanwisher et al. 1997]) and letters/words in the left hemisphere (e.g., [Cohen and Dehaene 2004]); (2) the dorsal precentral gyrus, the top portion of the primary motor cortex, a region that has long been known to produce actions [Penfield and Boldrey 1937]; (3) the middle frontal gyrus, a region in the premotor cortex involved in motor programming and traditionally thought to underlie fine-motor skills (e.g., [Exner 1881, Roux et al. 2009]); and (4) the ventral primary motor/premotor cortex, which overlaps with Broca’s area, a region thought to underlie speech production (e.g., [Broca 1861]) and that has, more recently, become associated with numerous types of fine-motor production skills (for review see [Petrides 2015]). As outlined below, this network only becomes linked after individuals experience the world through multimodal-multisensory learning.

      Functional neuroimaging methods provide us with unique information about the neural patterns that accompany an overt behavioral response. As such, the resultant data are correlational, but highly suggestive of the neural patterns that underlie human behavior.

      In addition to providing the neural correlates of overt behavior, neuroimaging studies can also generate hypotheses and theories about human cognition. In what follows, we briefly outline the use of this method in the service of understanding the experiences that are required to link sensory and motor systems.

       2.5.1 The Effects of Action on Sensory Processing of Objects

      According to embodied cognition models, a distributed representation of an object concept is created by brain-body-environment interactions (see [Barsalou et al. 2003]). Perception of a stimulus via a single sensory modality (e.g., vision) can therefore engage the entire distributed representation. In neuroimaging work, this is demonstrated by findings that motor systems in the brain are activated when participants simply look at objects that they are accustomed to manipulating (e.g., [Grèzes and Decety 2002]), even without acting upon the objects at that time. Motor system activation is more pronounced when participants need to make judgments about the manipulability of objects rather than about their function [Boronat et al. 2005, Buxbaum and Saffran 2002, Simmons and Barsalou 2003]. The motor system is also recruited in simple visual perception tasks. For example, in a recent study, we asked adult participants to study actual novel objects that were constructed to produce a sound upon specific types of interaction (e.g., pressing a top made a novel, rattling sound) (see Figure 2.6). Participants learned these novel sound-action-object associations in one of two ways: either by actively producing the sound themselves (active interaction) or by watching an experimenter produce the sounds (passive interaction). Note that in both cases there was equal exposure to the visual and auditory information; the only difference was whether participants produced the sound themselves or watched another person produce it.

      Figure 2.6 Examples of novel, sound-producing objects. (From Butler and James [2013])

      After participants studied these objects and learned the pairings, they underwent fMRI scanning while they viewed static photos of the objects with and without the associated sounds, and while they heard the sounds alone. We then probed brain responses through blood-oxygen-level-dependent (BOLD) activation in regions of the multimodal network (see Section 2.5 and Figure 2.7). We observed significantly greater activation after the active learning experience than after the passive learning experience [Butler and James 2013]. Only active learning served to recruit the extended multimodal network, including frontal (motor) and sensory brain regions. Thus, simply perceiving the objects or hearing the learned sound automatically recruited the sensory and motor brain regions used during the learning episode. Furthermore, we were interested in whether or not the visual regions were functionally connected to the motor regions. Assessing functional connectivity allows one to investigate whether the active regions are recruited together because of the task, or whether the recruitment is due to other factors (such as increased physiological responses). Indeed, only after active learning were visual seed regions (in blue in Figure 2.7) functionally connected to motor regions (in orange in Figure 2.7).
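
      To make the logic of such a seed-based connectivity analysis concrete, the sketch below shows one common way to quantify it: correlate the BOLD time series extracted from a visual "seed" region with the time series from a motor region and apply a Fisher r-to-z transform. This is a minimal illustration only; the region labels, number of volumes, and simulated signals are assumptions made for demonstration, not the pipeline or data of Butler and James [2013].

```python
import numpy as np
from scipy import stats

def seed_connectivity(seed_ts, target_ts):
    """Pearson correlation between a seed ROI time series and a target ROI
    time series, Fisher z-transformed so values can be averaged or compared
    across participants and conditions."""
    r, _ = stats.pearsonr(seed_ts, target_ts)
    return np.arctanh(r)  # Fisher r-to-z transform

# --- Illustration with simulated data (hypothetical, not study data) -------
rng = np.random.default_rng(0)
n_volumes = 200  # assumed number of fMRI volumes in a run

# Hypothetical ROI time series: a visual seed (e.g., fusiform gyrus) and a
# motor target (e.g., precentral gyrus).
shared_signal = rng.standard_normal(n_volumes)
seed_visual = shared_signal + 0.5 * rng.standard_normal(n_volumes)

# "Active learning": the motor region shares signal with the visual seed.
motor_active = shared_signal + 0.5 * rng.standard_normal(n_volumes)
# "Passive learning": the motor region fluctuates independently of the seed.
motor_passive = rng.standard_normal(n_volumes)

print("active  seed-motor z:", round(float(seed_connectivity(seed_visual, motor_active)), 2))
print("passive seed-motor z:", round(float(seed_connectivity(seed_visual, motor_passive)), 2))
```

      In a full analysis, per-participant z values of this kind would be compared across the active and passive learning conditions to test whether connectivity between visual and motor regions depends on the learning experience.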

       2.5.2 Neural Systems Supporting Symbol Processing in Adults

      Symbols represent an interesting category with which to study questions of interactions among sensory and motor brain systems. Just as multisensory experiences help shape the functioning of the subcortical superior colliculus, active interactions and their inherent multisensory nature have been shown to shape the functioning of cortical brain regions. During handwriting, the motor experience is spatiotemporally coordinated with the visual and somatosensory input. Although we focus largely on the visual modality, it is important to note that, to the brain, an action such as handwriting is marked by a particular set of multisensory inputs along with motor-to-sensory mappings, or multimodal mappings. For example, the creation of a capital “A” results in a slightly different multimodal pairing than the creation of a capital “G”. Similarly, writing a capital “A” requires a different motor command than writing a capital “G”, and each action looks and feels different. If we were to extrapolate single-neuron data and apply it to human systems neuroscience, we could speculate that the visual stimulation of a capital “A” invokes subcortical multisensory neurons tuned through multisensory experiences (or, in the case of visual-only learning, unimodal neurons) that pass information to cortical regions associated with the multimodal experience of handwriting.
