The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt


of human responsiveness to objects and events, especially in adverse conditions such as noise or darkness [Calvert et al. 2004, Oviatt 2000, Oviatt 2012]. From an evolutionary perspective, these behavioral adaptations have directly supported human survival in many situations.

      In addition to promoting better understanding of multisensory perception, Gestalt theoretical principles have advanced research on users’ production of multimodal constructions during human-computer interaction. For example, studies of users’ multimodal spoken and written constructions confirm that integrated multimodal constructions are qualitatively distinct from their unimodal parts. In addition, Gestalt principles accurately predict the organizational cues that bind this type of multimodal construction [Oviatt et al. 2003]. In a pen-voice multimodal interface, a user’s speech input is an acoustic modality that is structured temporally. In contrast, her pen input is structured both temporally and spatially. Gestalt theory predicts that the common temporal dimension will provide organizational cues for binding these modalities during multimodal communication. That is, modality co-timing will serve to indicate and solidify their relatedness [Oviatt et al. 2003]. Consistent with this prediction, research has confirmed the following:

      • Users adopt consistent co-timing of individual signals in their multimodal constructions, and their habitual pattern is resistant to change.

      • When system errors or problem difficulty increase, users adapt the co-timing of their individual signals to fortify the whole multimodal construction.

      Figure 1.1 Model of average temporal integration pattern for simultaneous and sequential integrators’ typical multimodal constructions. (From Oviatt et al. [2005])

      From a neuroscience perspective, the general importance of modality co-timing is highlighted by previous findings showing that greater temporal binding, or synchrony of neuronal oscillations involving different sources of sensory input, is associated with improved task success. For example, correctly recognizing people depends on greater neural binding between multisensory regions that represent their appearance and voice [Hummel and Gerloff 2005].

      Studies with over 100 users—children through seniors—have shown that users adopt one of two types of temporal organizational pattern when forming multimodal constructions. They either present simultaneous constructions in which speech and pen signals are overlapped temporally, or sequential ones in which one signal ends before the second begins and there is a lag between them [Xiao et al. 2002, 2003]. Figure 1.1 illustrates these two types of temporal integration pattern. A user’s dominant integration pattern is identifiable almost immediately, typically on the very first multimodal construction during an interaction. Furthermore, her habitual temporal integration pattern remains highly consistent (i.e., 88–93%), and it is resistant to change even after instruction and training.
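The two integration patterns above can be detected directly from signal timestamps. The following Python sketch is purely illustrative; the function names and the majority-vote rule are assumptions, not the classification procedure used in the cited studies:

```python
def classify_construction(speech_start, speech_end, pen_start, pen_end):
    """Label one speech-plus-pen construction.

    'simultaneous' if the two signal intervals overlap in time,
    'sequential' if one signal ends before the other begins.
    All arguments are timestamps in seconds.
    """
    overlap = min(speech_end, pen_end) - max(speech_start, pen_start)
    return "simultaneous" if overlap > 0 else "sequential"


def dominant_pattern(constructions):
    """Majority label over a user's observed constructions.

    Given the reported 88-93% consistency, a handful of
    constructions is typically enough to identify the pattern.
    """
    labels = [classify_construction(*c) for c in constructions]
    return max(set(labels), key=labels.count)


# Example: two overlapped constructions and one with a lag
print(dominant_pattern([(0.0, 1.2, 0.5, 1.8),
                        (2.0, 3.0, 2.2, 3.4),
                        (5.0, 5.8, 6.3, 7.0)]))  # simultaneous
```

Because a user's dominant pattern is typically evident from the very first construction, a deployed system could in practice commit to a label early and only revise it if later evidence contradicts it.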

      A second Gestalt law, the principle of area, states that people will tend to group elements to form the smallest visible figure or briefest temporal interval. In the context of the above multimodal construction co-timing patterns, this principle predicts that most people will deliver their signal input simultaneously. Empirical research has confirmed that 70% of people across the lifespan are indeed simultaneous signal integrators, whereas 30% are sequential integrators [Oviatt et al. 2003].

      Figure 1.2 Average increased signal overlap for simultaneous integrators in seconds (left), but increased lag for sequential integrators (right), as they handle an increased rate of system errors. (From Oviatt and Cohen [2015])

      An important meta-principle underlying all Gestalt tendencies is the creation of a balanced and stable perceptual form that can maintain its equilibrium, just as the interplay of internal and external physical forces shapes an oil drop [Koffka 1935, Kohler 1929]. Gestalt theory states that any factors that threaten a person’s ability to achieve a goal create a state of tension, or disequilibrium. Under these circumstances, it predicts that people will fortify basic organizational phenomena associated with a percept to restore balance [Koffka 1935, Kohler 1929]. For example, if a person interacts with a multimodal system and a recognition error prevents her from being understood, this creates a state of disequilibrium. When this occurs, research has confirmed that users fortify, or further accentuate, their usual pattern of multimodal signal co-timing (i.e., either simultaneous or sequential) by approximately 50%. This phenomenon is known as multimodal hypertiming [Oviatt et al. 2003]. Figure 1.2 illustrates the increased multimodal signal overlap in simultaneous integrators, and the increased signal lag in sequential integrators, as they experience more system errors. Multimodal hypertiming has also been demonstrated in users’ constructions when problem difficulty level increases [Oviatt et al. 2003].

      From a Gestalt viewpoint, this behavior aims to re-establish equilibrium by fortifying multimodal signal co-timing, the basic organizational principle of such constructions, which results in a more coherent multimodal percept under duress. This multimodal hypertiming contributes to hyper-clear communication that increases the speed and accuracy of perceptual processing by a listener. This manifestation of hyper-clear multimodal communication is qualitatively distinct from the hyper-clear adaptations observed in unimodal components. For example, in a unimodal spoken construction users increase their speech signal’s total length and degree of articulatory control as part of hyperarticulation when system errors occur. However, this unimodal adaptation diminishes or disappears altogether when speech is part of a multimodal construction [Oviatt et al. 2003].
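The roughly 50% accentuation of co-timing can be expressed as a fractional change between baseline and error-handling conditions. The sketch below is an illustrative measure, not the one used in the cited studies; for a simultaneous integrator the inputs would be per-construction signal overlaps, and for a sequential integrator the intersignal lags:

```python
def hypertiming_ratio(baseline, under_errors):
    """Fractional change in the mean co-timing measure.

    For a simultaneous integrator, pass per-construction signal
    overlaps (in seconds); for a sequential integrator, pass
    intersignal lags. A value near +0.5 corresponds to the
    ~50% accentuation described as multimodal hypertiming.
    """
    mean_base = sum(baseline) / len(baseline)
    mean_err = sum(under_errors) / len(under_errors)
    return (mean_err - mean_base) / mean_base


# A sequential integrator whose lags grow from ~1.0 s to ~1.5 s
print(hypertiming_ratio([1.0, 0.9, 1.1], [1.5, 1.4, 1.6]))  # ≈ 0.5
```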

      The Gestalt law of symmetry states that people have a tendency to perceive symmetrical elements as part of the same whole. They view objects as symmetrical, formed around a center point. During multimodal human-computer interaction involving speech and pen constructions, Gestalt theory would predict that more symmetrical organization entails closer temporal correspondence or co-timing between the two signal pieces, and a closer matching of their proportional length. This would be especially evident in significantly increased co-timing of the component signals’ onsets and offsets. Research on multimodal interaction involving speech and pen constructions has confirmed that users increase the co-timing of their signal onsets and offsets during disequilibrium, such as when system errors increase [Oviatt et al. 2003].

      In summary, the Gestalt principles outlined above have provided a valuable framework for understanding how people perceive and organize multisensory information, as well as multimodal input to a computer interface. These principles have been used to establish new requirements for multimodal speech and pen interface design [Oviatt et al. 2003]. They also have supported computational analysis of other types of multimodal system, for example involving pen and image content [Saund et al. 2003].

      One implication of these results is that time-sensitive multimodal systems need to accurately model users’ multimodal integration patterns, including adaptations in signal timing that occur during different circumstances. In particular, user-adaptive multimodal processing is a fertile direction for system development. One example is the development of new strategies for adapting temporal thresholds in time-sensitive multimodal architectures during the fusion process, which could yield substantial improvements in system response speed, robustness, and overall usability [Huang and Oviatt 2005, Huang et al. 2006].
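One way such threshold adaptation might look in code is sketched below: the fusion engine's wait window is tuned online from the user's observed intersignal lags via an exponential moving average. The class, parameter names, and default values are all assumptions for illustration, not the architecture of the cited systems:

```python
class AdaptiveFusionWindow:
    """Sketch of a user-adaptive temporal threshold for multimodal fusion.

    The window is how long the fusion engine waits for a second
    signal before treating the input as unimodal. It tracks the
    user's habitual intersignal lag with an exponential moving
    average, plus a safety margin.
    """

    def __init__(self, initial_window=2.0, alpha=0.2, margin=1.5):
        self.alpha = alpha            # EMA smoothing factor
        self.margin = margin          # safety factor over the mean lag
        self._mean_lag = initial_window / margin
        self.window = initial_window  # current wait threshold, seconds

    def observe_lag(self, lag):
        """Update after each construction with the measured intersignal lag."""
        self._mean_lag = (1 - self.alpha) * self._mean_lag + self.alpha * lag
        self.window = self.margin * self._mean_lag

    def should_wait(self, elapsed):
        """True while the engine should still wait to fuse a second signal."""
        return elapsed < self.window


# A sequential integrator with short, stable lags lets the
# system shrink its wait window and respond faster.
w = AdaptiveFusionWindow()
for lag in [0.4, 0.5, 0.4, 0.3, 0.4]:
    w.observe_lag(lag)
print(round(w.window, 2))
```

Because users fortify their habitual co-timing under stress rather than switch patterns, a per-user window of this kind remains valid even during error handling, though the margin would need to absorb the roughly 50% hypertiming accentuation.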

      An additional implication is that Gestalt principles, and the multisensory research findings that have further elaborated them, potentially can provide useful guidance for designing “well integrated” multimodal interfaces [Reeves et al. 2004]. Researchers have long been interested in defining what it means to be a well-integrated multimodal interface, including the circumstances under which super-additivity effects can be expected rather
