The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt

ACM Books


experiences through the act of handwriting on brain development. In preliterate children, the perception of individual letters recruits the bilateral fusiform gyri in visual association areas, the bilateral intraparietal sulcus in visual-motor association regions, and the left dorsal precentral gyrus in motor cortex, but only for letters with which the child has had handwriting experience (Figure 2.11) [James and Engelhardt 2012]. Indeed, the intraparietal sulcus responds more strongly for form-feature associations with letterforms (i.e., more strongly for letters learned through handwriting than for shapes learned through handwriting), whereas the left dorsal precentral gyrus responds more strongly for motor associations (i.e., more strongly for letters learned through handwriting than through tracing or typing). The functional connections between these three regions and the left fusiform gyrus show a similar pattern in preliterate children after handwriting practice [James and Engelhardt 2012, Vinci-Booher et al. 2016].

      Figure 2.11 (A) and (B) The difference in BOLD signal for handwriting > typing in the frontal premotor cortices; (C) the difference for handwriting > tracing in the precentral gyrus and parietal cortex; and (D) activation for tracing > typing in frontal cortex. (From James and Engelhardt [2012].)

      The action of handwriting a letter feature-by-feature allows for multimodal input of information that can also be encoded through a single sense. It therefore transforms an inherently unisensory behavior (visual letter recognition before handwriting experience) into a multimodal behavior (visual letter recognition after handwriting experience). Multimodal behaviors promote the emergence of multisensory integration in evolutionarily early subcortical brain regions. They effectively structure the input to cortical regions and engender multimodal integration in the cortex, as outlined previously. The ability to transform inherently unimodal behaviors, such as the visual perception of written forms or stationary objects, into inherently multimodal behaviors is the major benefit of interacting with tools such as writing implements.

      Interestingly, the pattern of activation seen after children learn to write letters by hand is observed only if the writing is self-produced. That is, if a child merely watches an experimenter produce the same forms, the multimodal network is not evident during subsequent letter perception [Kersey and James]. This result suggests that it is the multimodal production, not the multisensory perception, that drives the emergence of the distributed brain network observed during letter perception.

      The extant literature on the neural substrates underlying multimodal-multisensory learning in young children clearly demonstrates that the visual perception of actively learned objects is not purely visual. Learning through action creates multimodal brain networks that reflect the multimodal associations formed through active interactions.

      This brief review of empirical studies suggests that the brain is highly adaptive to modes of learning. Input from the environment, routed through the body, is processed by the brain in a manner that requires high plasticity in both childhood and adulthood. The way in which we learn changes brain systems that, in turn, change our behaviors. As such, we argue that human behavior cannot be fully understood without considering environmental input, bodily constraints, and brain functioning. Valuable insights can be gained from thoughtful consideration of the rich data sets produced by brain imaging techniques. Understanding the brain mechanisms that underlie behavior changes our understanding of how and why technologies are effective at assisting learning.

      The embodied cognition perspective encompasses a diverse set of theories that are based on the idea that human cognitive and linguistic processes are rooted in perceptual and physical interactions of the human body with the world [Barsalou 2008, Wilson 2002]. According to this perspective, cognitive structures and processes—including ways of thinking, representations of knowledge, and methods of organizing and expressing information—are influenced and constrained by the specifics of human perceptual systems and human bodies. Put simply, cognition is shaped through actions by the possibilities and limitations afforded by the human body.

      The research outlined in this chapter clearly supports this embodied perspective. Learning is facilitated through bodily interactions with the environment. Core competencies, such as visual perception, object knowledge, and symbol understanding, are determined by physical action. We often consider these basic human abilities to rely on unimodal (usually visual) processing. However, a growing body of research supports the notion that multimodal processing is key to acquiring these abilities and, further, that multisensory processing is created through action, an inherently multimodal behavior. The mechanisms that support multimodal-multisensory learning are also becoming better understood: such learning recruits widespread neural networks that link information across modalities, creating highly adaptive systems for supporting human behavior. Because action is central to learning, and because action relies on the body, the work reviewed here underscores the importance of the embodied cognition standpoint for understanding human behavior.

      When we think about modern society and its reliance on human-computer interaction, it is wise to remember that our brains adapted to an environment that existed hundreds of thousands of years before screens and multimodal interfaces. Trackballs, keyboards, and pens are not part of the environment for which our brains evolved. However, our early hominid ancestors did use tools (e.g., [Ambrose 2001]). Recognizing that many of our modern interfaces are tools helps explain how humans have become so adept at using computer interfaces: our brains are highly plastic in integrating these bodily extensions into a pre-existing system, even though that system evolved to capitalize on far more primitive forms of these instruments.

       Glossary

      Active behaviors: Overt movements performed by the observer. In this chapter, we restrict this definition to include intentional, or goal-directed actions (as opposed to reflexive actions).

      Amodal stimulus properties: Properties of an object that can be conveyed by more than one sensory system [Lickliter and Bahrick 2004]. By some definitions, amodal also refers to information that is no longer connected to the sensory modality by which it was encoded.

      Binding problem: The theoretical problem of how the brain binds the sensory signals that trigger neuronal firing to the environmental stimulus from which those signals originate. It also encompasses the question of how the brain combines signals from multiple senses, each detecting different features of a stimulus, into a unified percept.

      Blood-oxygen-level-dependent (BOLD): The primary measure of interest in fMRI. The BOLD signal reflects the ratio of deoxygenated to oxygenated hemoglobin in local blood. Because more active neurons draw more oxygenated blood to a region, this ratio serves as an indirect index of neural activity.
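As an aside, BOLD activation in studies like those discussed above is commonly summarized as percent signal change relative to a resting baseline. The function and voxel values below are an invented toy sketch, not part of any fMRI analysis package or of the studies cited in this chapter:

```python
# Illustrative sketch only: summarizing a BOLD response as percent
# signal change relative to a baseline period. All values are invented.

def percent_signal_change(signal, baseline):
    """Percent change of a BOLD measurement relative to its baseline."""
    return (signal - baseline) / baseline * 100.0

# Hypothetical voxel intensities in arbitrary scanner units:
active = percent_signal_change(103.0, 100.0)   # roughly a 3% increase
resting = percent_signal_change(100.0, 100.0)  # no change from baseline
```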

      Constructivism: A broad theoretical approach that considers the organism to be constructing meaning based on interactions with objects and people in the world.

      Convergence: The unique property of neural connections by which more than one type of unisensory neuron connects to the same neuron and may independently evoke neural activity from that neuron. This can be contrasted with integration, whereby the sensory signals are combined to produce a single response based on more than one input. Convergence results in multiple separable signals in a given neuron, whereas integration refers to combining multiple inputs into a single response (see Figure 2.12).
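The contrast between convergence and integration can be sketched in code. The toy functions below are not a neural model from this chapter; the function names, numeric values, and the superadditive gain factor are all invented purely to make the distinction concrete:

```python
# Toy contrast between convergence and integration in a model neuron.
# All names and numbers are hypothetical illustrations.

def convergence_response(visual, tactile):
    """Convergence: either unisensory input alone can drive the neuron,
    and the signals remain separable (here, reported per modality)."""
    return {"from_visual": visual, "from_tactile": tactile}

def integration_response(visual, tactile):
    """Integration: the inputs are combined into one response. Real
    multisensory neurons can respond superadditively; we mimic that
    with an invented gain applied when both modalities are active."""
    combined = visual + tactile
    if visual > 0 and tactile > 0:
        combined *= 1.5  # hypothetical superadditive gain
    return combined

# A visual-only stimulus drives the convergent neuron through one
# separable channel, while the integrating neuron yields a single,
# enhanced response only when both senses are stimulated together.
visual_only = convergence_response(1.0, 0.0)
both_senses = integration_response(1.0, 1.0)
```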

      Experience-dependent:
