The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt

with longer training [Swerdfeger 2009] promises greater sophistication as the medium becomes more widespread.

      Variations among individuals in their experience of haptic sensations mean that specific design elements may not work for everyone. There are at least three levels at which such individual differences appear, each with its own design significance.

      In haptic perception, individuals' mechanoreceptors register signals at varying resolutions (analogous to visual color-blindness), evident in nonuniform tactile threshold and difference detection abilities [Lo et al. 1984] and typically investigated with psychophysical studies that exclude subjective components. For subtle sensations such as programmable friction, differences among people become more prominent [Levesque et al. 2012]. Tactile acuity also declines with age, suggesting that designs targeting seniors should not rely heavily on this channel [Stevens 1992, Stevens and Choo 1996]. There is empirical evidence that the perceptual space of sensations is affected by these differences; for example, people varied in whether they categorized natural textures according to a 2D or a 3D perceptual space [Hollins et al. 2000].

      At the level of haptic processing and memory, numerous studies on the human ability to identify and parse haptic patterns reveal differences in the ability to process and learn haptic stimuli, with tactile stimuli the most frequently studied, e.g., [Epstein et al. 1989]. In particular, an early study by Craig [1977] identified two groups, learners and non-learners, in a spatio-temporal pattern matching task with the Optacon. A more recent study on a variable friction display reports notable differences in users' recognition of friction patterns and their spatial density [Levesque et al. 2012]. People also differ in the degree to which they rely on touch for hedonic or information-gathering purposes, suggesting modality-specific processing needs and abilities [Peck and Childers 2003]. Haptic processing abilities can improve with practice: visually impaired individuals often develop exceptional tactile processing abilities independently of their degree of childhood vision, demonstrating substantial brain plasticity [Goldreich and Kanics 2003].

      Because synthetic tactile feedback tends to be abstract, meanings must be mapped to signals. In the absence of a shared understanding of what these stimuli signify, meaning-mapping is driven by personal experience [Schneider and MacLean 2014, Alter and Oppenheimer 2009]. Individual differences in describing and preferring haptic sensations are thus dominated by personal schemas of interpretation and sense-making [Seifi and MacLean 2013, Seifi et al. 2015, Levesque et al. 2012].

      How can design practices accommodate and leverage such extensive differences in perception and interpretation?

      Haptic researchers have been looking for common themes in users' perception from the start, and many do exist. Shared interpretations can be translated into guidelines for designing sensations that are distinguishable and expressive for at least a significant group of users. For example, most individuals agree that urgency is well represented by higher vibrotactile energy and frequency. Common cultural connotations can also be transferred from other modalities. Audition contributes an understanding of rhythm [Brown et al. 2006a], and auditory icons can be mimicked to achieve a comparable shared perception in haptic counterparts, whether through direct translation or by exploiting underlying design principles and parameters. For example, van Erp and Spapé [2003] transform 59 short music pieces into vibrotactile patterns, while Ternes and MacLean [2008] build a large set of vibration icons using rhythm and pitch (frequency).
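      As a concrete illustration of such a guideline, the following minimal sketch (in Python) maps an abstract urgency level onto vibrotactile energy, frequency, and rhythm density. The VibrationIcon structure, the parameter ranges, and the mapping itself are illustrative assumptions, not a published design or a real actuator API.

      from dataclasses import dataclass, field

      @dataclass
      class VibrationIcon:
          frequency_hz: float          # carrier frequency delivered by the tactor
          amplitude: float             # normalized drive energy, 0.0-1.0
          rhythm_ms: list = field(default_factory=list)  # alternating on/off durations

      def icon_for_urgency(urgency: float) -> VibrationIcon:
          """Map urgency in [0, 1] to a higher-energy, higher-frequency, denser pattern."""
          urgency = max(0.0, min(1.0, urgency))
          return VibrationIcon(
              frequency_hz=80 + 170 * urgency,              # assumed 80-250 Hz range
              amplitude=0.3 + 0.7 * urgency,                # stronger drive when urgent
              rhythm_ms=[60, 40] * (2 + int(3 * urgency)),  # denser rhythm when urgent
          )

      Under this sketch, an urgent alert (urgency near 1.0) receives a fast, strong, densely repeated pattern, while a low-priority notification receives a slower, gentler one.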

      Large individual differences in haptic perception necessitate evaluating designs at scale, with larger participant pools. Crowdsourced evaluation of haptic designs is an enabling new direction (Section 3.5.2).

      While guidelines enable haptic design for users in the large, support for customization is key to design effectiveness for individuals [Seifi et al. 2014, Seifi et al. 2015, Ledo et al. 2012]. Applications should enable individual haptic meaning-mapping by allowing users to choose desired settings or mappings for a piece of information. The ability to tune pre-designed sensations or create new ones can further support users in tweaking a signal to their specific usage context and preferences.
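      The following minimal sketch illustrates one way such per-user meaning-mapping could be represented: each user assigns a tunable sensation to each information item, and unmapped items simply receive no haptic rendering. The Sensation and HapticProfile structures are hypothetical, not part of any existing toolkit.

      from dataclasses import dataclass, field

      @dataclass
      class Sensation:
          name: str
          intensity: float = 0.5       # user-tunable, 0.0-1.0
          duration_ms: int = 200       # user-tunable

      @dataclass
      class HapticProfile:
          """Per-user mapping from information items to haptic sensations."""
          mapping: dict = field(default_factory=dict)

          def assign(self, event: str, sensation: Sensation) -> None:
              self.mapping[event] = sensation

          def sensation_for(self, event: str):
              # Returns None when the user has not mapped (or has muted) this event.
              return self.mapping.get(event)

      # Example: a short, strong pulse stands for new messages; calendar
      # reminders are deliberately left unmapped (no haptic rendering).
      profile = HapticProfile()
      profile.assign("new_message", Sensation("pulse", intensity=0.9, duration_ms=120))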

      It will often be necessary to provide non-haptic backup modalities. Some individuals will be unable (e.g., because of sensory, cognitive, or situational constraints) or unwilling to use haptic feedback, either at all or in particular situations. Interaction designers must allow users to mute haptics or switch to other modalities when needed. When a haptic element is the primary form of information display, as discussed in Section 3.2.3, this may require automatic translation between haptics and other modalities such as audio and visual [Hoggan and Brewster 2007].
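      A minimal sketch of this fallback logic follows. The render functions are illustrative stubs rather than a real platform API; the point is only that the same information is re-expressed in a backup modality instead of being silently dropped.

      def render_haptic(event: str) -> None:
          print(f"[haptic] vibrating for {event}")

      def render_audio_visual(event: str) -> None:
          print(f"[audio/visual] alerting for {event}")

      def deliver(event: str, haptics_available: bool, haptics_muted: bool) -> None:
          """Use haptics when possible; otherwise translate to a backup modality."""
          if haptics_available and not haptics_muted:
              render_haptic(event)
          else:
              render_audio_visual(event)

      deliver("upcoming_turn", haptics_available=True, haptics_muted=True)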

      Several factors make haptic design challenging from a technical standpoint today. Hardware elements are typically able to render just one perceptual haptic submodality: vibration or force, shape, texture, shear, or temperature. These hardware elements are difficult to integrate, resulting in sensations very different from touching in the real world. Hardware also differs greatly in expressive nature and degree, even for a given submodality, and there is a large impact of hardware configuration (weight, materials, etc.) on the resulting sensations.

      As a consequence, haptic effects generally must be designed for a specific hardware element, and cannot easily be transferred to another actuator with a different mechanism, wearing configuration, or performance. Moreover, there is a general dearth of tools and expertise for haptic design in industry, and a shortage of examples and accepted practices to draw on. Tool development is a priority for the field, and we will offer a perspective on the space that tools do, and must, jointly cover in Sections 3.4.3 and 3.5.3.
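      To make the transfer problem concrete, the following minimal sketch represents an actuator by the submodality it renders and the parameter ranges it supports, and checks whether an existing effect falls within those ranges. The descriptors and values are purely illustrative assumptions; even when the check passes, a range match is necessary but not sufficient for perceptual equivalence.

      from dataclasses import dataclass

      @dataclass
      class ActuatorProfile:
          submodality: str             # e.g., "vibration", "force", "temperature"
          freq_range_hz: tuple         # (min, max) renderable frequency
          max_amplitude: float         # normalized maximum drive level

      @dataclass
      class Effect:
          submodality: str
          frequency_hz: float
          amplitude: float

      def can_render(effect: Effect, actuator: ActuatorProfile) -> bool:
          """A transfer is only plausible if submodality and parameter ranges match."""
          lo, hi = actuator.freq_range_hz
          return (effect.submodality == actuator.submodality
                  and lo <= effect.frequency_hz <= hi
                  and effect.amplitude <= actuator.max_amplitude)

      # An effect tuned for a wideband voice-coil tactor may fall outside the
      # narrow resonant band of a simpler vibration motor.
      narrowband = ActuatorProfile("vibration", (150, 200), 1.0)
      effect = Effect("vibration", frequency_hz=80, amplitude=0.6)
      print(can_render(effect, narrowband))   # False: frequency out of range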

      The touch sense is routinely used in close partnership with other modalities, which must be considered at design time. Here we examine multimodal interaction holistically by analyzing several scenarios in terms of their interactive goals and features (Sections 3.2.1 and 3.2.2); zoom in on the roles haptic sensations take alongside other modalities (Section 3.2.3); and examine the contribution of haptics to those interactions (Section 3.2.4).

      We begin by considering how a multimodal interaction can be structured in terms of goals and design element parameters. We will use the scenarios laid out in Table 3.1 to show how their interactive goals and features define interaction requirements, and then build further on these examples throughout the rest of the chapter. These structures are generally not orthogonal or mutually exclusive; they might appear alone or in combination.

      A holistic interaction is often dominated by a particular information display objective. For example, it might provide, notify, and/or guide, deploying a variety of sensor modalities as appropriate. The interaction goal can shift according to the user’s momentary need, and a display can reconfigure its utilities. To illustrate, a common current approach for a navigation interface on a mobile or wearable device is to guide with “push” auditory directives and/or vibrotactile feedback about an upcoming turn; when the user needs more detail, the map is provided on a graphical screen (scenarios [S1] and [S2] in
