The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt

      Figure 2.7 Functional connectivity between the visual Lateral Occipital Complex (LOC) regions and motor regions in the brain after active learning. Note that the left side of the brain in the figure is the right hemisphere due to the radiological coordinate system. (From Butler and James [2013])

      Indeed, the perception of individual letters has been shown to be supported by a neural system that encompasses both sensory and motor brain regions, often including ventral-temporal, frontal, and parietal cortices [Longcamp et al. 2003, 2008, 2011, 2014, James and Gauthier 2006, James and Atwood 2009]. We investigated the neural overlap between writing letters and perceiving letters and found that even when participants did not look at the letters they were writing, significant overlap in brain systems emerged for writing and perception tasks with letters (see Figure 2.8). This network included not only the usual visual regions observed when one perceives letters, but also an extended frontal network that included dorsal and ventral regions that are known to code actions and motor programs. As depicted in Figure 2.8, the putative visual region was active during letter writing, and the traditional motor areas were active during letter perception. These results suggest that action and perception, even in the case of symbol processing, automatically recruit an integrated multimodal network.

      Figure 2.8 A schematic of results from James and Gauthier [2006], showing the overlap in brain activation patterns as a result of handwriting, perceiving, and imagining letters. (From James and Gauthier [2006])

      This overlap in activation led us to the next obvious question: What are the experiences that create such a system? It is possible that any motor act with symbols would recruit such a system, but it could also be that creating the symbols by hand, feature by feature, serves to pair the visual input with the motor output during this behavior. To test this idea, we had adult participants learn a novel symbol system, pseudoletters, through two types of multimodal-multisensory learning (writing + vision and typing + vision) and one type of multisensory learning (vision + audition) [James and Atwood 2009]. Results demonstrated that only after learning the pseudoletters through handwriting was the left fusiform gyrus active for these novel forms (Figure 2.9). This finding was the first to show that this “visual” region was affected by how a stimulus was learned. More specifically, it was not responding only to the visual presentation of letters, but was responding to the visual presentation of a letter with which the observer had a specific type of motor experience: handwriting experience. Furthermore, the dorsal precentral gyrus seen in the above study for letter perception and writing was also active only after training that involved multimodal learning (writing + vision and typing + vision), but not after training that involved only multisensory (vision + audition) learning. Thus, the network of activation seen for symbol processing is formed by the multimodal experience of handwriting.

      Figure 2.9 The difference between trained and untrained pseudoletters in the left fusiform gyrus as a function of training experience. Note that the left fusiform gyrus is a visual processing region, in which only handwriting experience (labeled as motor in this figure) resulted in greater activation after learning. (From James and Atwood [2009])

      Methodological limitations preclude the use of fMRI in infants and toddlers who are fully awake, because it requires that individuals stay still for a minimum of 5 minutes at a time, and usually for 30 minutes in total. Nonetheless, because we are very interested in how multimodal-multisensory learning serves to create neural systems, we routinely scan 4–6-year-old children in the hope of observing the development of these neural systems. In the summary that follows, we outline studies that have investigated how multimodal networks emerge. In other words, what experiences are required for this functional network to be automatically activated?

       2.6.1 The Multimodal Network Underlying Object and Verb Processing

      There is now ample evidence that reading verbs recruits the motor systems that are used for performing the action that a verb describes [Hauk et al. 2003, Pulvermüller 2005, 2012, 2013]. These findings suggest that the perception of verbs re-activates motor systems that were used when the word was encountered during action and, by extension, that only actions we have performed ourselves will recruit the motor system during perception. However, not all verbs describe actions with which we have personal experience (e.g., skydiving), which raises the question: Do our action systems become linked with perception if we simply watch an action? The research in this realm is controversial. Some studies have shown that action systems are indeed recruited during action observation (e.g., [Gallese et al. 1996, Rizzolatti and Craighero 2004]), while others have shown that we must have personal experience performing the action for these systems to be automatically recruited [Lee et al. 2001].

      We wished to address this question with children, given their relatively limited personal experience with actions and their associated verbs. We asked whether multimodal recruitment would occur after children watched another person perform an action during verb learning, or if they had to perform the action themselves in order for these multimodal systems to be recruited. To test this idea, we asked 5-year-old children to learn new verbs (such as “yocking”) that were associated with an action performed on a novel object. Children learned the action/verb/object associations either by watching an experimenter perform the action or through performing the action themselves. We then measured their BOLD activation patterns when they saw the novel objects or heard the novel words in a subsequent fMRI scanning session [James and Swain 2011]. The results were clear: only when children acted on the objects themselves while hearing/seeing the novel stimuli did a multimodal network emerge (see Figure 2.10).

      A follow-up study directly tested the effects of multimodal-multisensory learning by having 5-year-old children learn the noises that objects made either while interacting with the objects themselves or while watching an experimenter do so [James and Bose 2011]. The procedure was the same as above, except that instead of learning a verb, the participants heard a sound that was associated with object manipulation. Again, only after self-produced action during learning was the extended multimodal network of activation recruited in the brain.

      The action that is coded during learning is, therefore, part of the neural representation of the object or word. In essence, the multimodal act of object manipulation serves to link sensory and motor systems in the brain.

      Figure 2.10 fMRI results after children learned verbs through their own actions or by passively watching the experimenter, compared with unlearned verbs. The upper panel shows the response to hearing the new verbs, while the lower panel depicts activation when children saw the novel objects. Only learning through action resulted in high motor system activity. Top left and right graphs: middle frontal gyri; middle graphs: inferior parietal sulci; bottom graph: left primary motor cortex. Note: the left side of the figure is the right hemisphere. (From James and Swain [2011])

       2.6.2 Neural Systems Supporting Symbol Processing in Children

      The developmental trajectory of the neural system supporting letter perception clearly displays the importance of multimodal-multisensory
