
      Receptive field: The specific region of sensory space in which stimulation elicits a response from a neuron or group of neurons. For example, neurons in the visual system respond to specific areas in space that are coded on the retina. Neurons in the auditory system respond to a specific range of sound frequencies.

      Size constancy: A form of perceptual constancy that refers to the tendency of an observer to infer a constant object size from different viewing distances, despite the fact that the different viewing distances project different image sizes onto the retina (see Figure 2.15).

      Spatial resolution: The number of units (e.g., pixels) per unit area used to construct an image, which is related to the amount of detail that an image can convey. Higher-resolution images have more units per unit area than lower-resolution images.
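      As a minimal illustrative sketch (not from the chapter, and assuming NumPy is available), the snippet below downsamples a hypothetical grayscale image array to show how fewer units per area discard fine detail:

```python
import numpy as np

# Hypothetical 8x8 grayscale "image" with a fine vertical stripe pattern.
high_res = np.tile(np.array([0, 255], dtype=float), (8, 4))

# Halve the resolution by averaging non-overlapping 2x2 blocks:
# fewer units per area means the stripe detail is averaged away.
low_res = high_res.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(high_res[0])  # [  0. 255.   0. 255.   0. 255.   0. 255.]
print(low_res[0])   # [127.5 127.5 127.5 127.5]
```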

      Stabilized image phenomenon: Phenomenon whereby a percept provided by a visual stimulus will fade away if the visual stimulus is continually projected onto the same retinal cells. This phenomenon only occurs when the eyes are artificially stabilized to prevent movement.

      Superior colliculus: A layered subcortical structure containing multiple classes of neurons that interconnect with an array of sensory and motor neurons, including early sensory receptors and later cortical areas.

      Time-locked: Signal processing concept that describes the response behavior of signals to a particular stimulus onset as being inextricably bound in time. For example, if a ganglion cell in the retina is alternately exposed to light and dark, the cell will respond during light and rest during dark. That is, the response of the ganglion cell is time-locked with the onset of changes in the light stimulus.
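      As a toy illustration (not from the chapter; assuming NumPy), the sketch below simulates a hypothetical ganglion-cell-like response that fires while a light stimulus is on and rests while it is off, so its onsets are time-locked to the stimulus onsets:

```python
import numpy as np

# Alternating light (1) and dark (0) stimulus over 40 time steps.
stimulus = np.tile(np.repeat([1, 0], 5), 4)

# Toy cell: responds during light, rests during dark.
response = stimulus.copy()

# The 0 -> 1 transitions (onsets) of stimulus and response coincide,
# i.e., the response is time-locked to changes in the light stimulus.
stim_onsets = np.flatnonzero(np.diff(stimulus) == 1) + 1
resp_onsets = np.flatnonzero(np.diff(response) == 1) + 1
print(stim_onsets, resp_onsets)  # [10 20 30] [10 20 30]
```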

      Unisensory neuron: A neuron that receives input from a single sense. For example, neurons in the primary visual cortex receive input from other neurons that transduce light waves into neural signals, but not from neurons that transduce sound waves.

      Viewpoint-independent object recognition: The ability of an observer to know that an object remains the same from different viewpoints, despite potential variations in shape contour, size, or other features projected onto the retina.

      Visual cliff: A psychological assessment tool, often used as a measure of depth perception development, in which an infant is placed atop an apparatus and allowed to explore the visually displayed depth difference by crawling (Figure 2.3).

      Visual noise: Any image manipulation that degrades, distorts, or hides aspects of an underlying visual image. One common perturbation is applying Gaussian noise to an image (see Figure 2.16).
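      The Gaussian-noise perturbation mentioned above can be sketched in a few lines of Python (a minimal illustration assuming NumPy; the image and noise level are arbitrary placeholders, not values from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grayscale image with intensities in [0, 255].
image = rng.integers(0, 256, size=(64, 64)).astype(float)

# Add zero-mean Gaussian noise; a larger standard deviation degrades
# the underlying image more, pushing it toward detection threshold.
sigma = 40.0
noisy = np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0, 255)
```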

      Figure 2.12 Graphical depiction of convergence and integration of neural signals.

      Figure 2.13 In this image, the left crib is a planar, side view, and the right crib is a non-planar view. (From Pereira et al. [2010])

      Figure 2.14 The (C) pseudoletter would control for stroke features, thickness, orientation, and size of the Roman letter A, whereas (B) controls for thickness and size, and (A) controls for features, thickness, and size.

      Figure 2.15 Classic demonstration of size constancy. Both persons appear to be nearly the same size, although if the image of the person further away is moved closer it becomes obvious that the image of the second person is substantially smaller. (From Boring [1964])

      Figure 2.16 Example of stimuli with different levels of visual noise added. The top row has more noise added, and is slightly above visual detection threshold. The bottom row has less noise added, and is well above visual detection threshold. (From James and Gauthier [2009])

      How can we capitalize on our knowledge of embodied learning to better design human-computer interfaces? First, we must remember that self-generated action is important for learning. Therefore, interfaces that capitalize on self-produced actions, such as touchscreens and styluses, will facilitate learning, because they allow the visual and motor systems to interact and form important links for learning (see Chapter 3, this volume). Further, the types of actions may also be important. For example, recall that multimodal neural systems are not created through typing to the same extent as through writing. We would be well advised, therefore, to continue using tools that require construction of the 2D form rather than actions that are farther removed from the produced symbol, such as typing. In addition, multisensory coding of information aids learning. Although we discussed one way in which this occurs (through action), one can imagine that adding multisensory information more generally through interface design may benefit student learning, potentially through haptic interfaces that make use of somatosensation (see Chapter 3, this volume). Immersive environments in which learners can self-locomote may also benefit learning, especially for understanding spatial relations.

      In short, self-guided actions facilitate learning. To increase the efficacy of human-computer interfaces, we would do well to support such actions in interface design. As such, the goal would be to capitalize on our knowledge of how the brain supports human behavior in the service of increasing learning through interface design.

      2.1. In what ways are actions multimodal? Compare and contrast multisensory and multimodal as described in this chapter. Do you think these concepts are completely separable?

      2.2. Sensory signals are combined in many brain areas, the first being the superior colliculus. How are sensory signals combined in this structure to overcome the binding problem? How is this related to intersensory redundancy?

      2.3. What types of behaviors are enhanced by multimodal interactions with the environment across development? In what way are these behaviors changed by this experience?

      2.4. How does learning through action change the brain? Include examples of learning about objects and symbols. What would be a practical implication of learning about three-dimensional objects through action vs. through observation?

      2.5. Why would handwriting facilitate learning and recruit brain systems used for reading more than typing?

      2.6. How can we use our knowledge of brain systems to aid in the construction of multimodal interfaces?

      2.7. Given the information provided in this chapter, what are some other multimodal interfaces that would facilitate learning?

      2.8. Would the understanding of word meaning be facilitated by multimodal learning? Would some words benefit more than others?
