The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt. ACM Books.

individual differences, for example through tools that allow users to efficiently customize their interfaces (Section 3.5.1).

       Stimulus Complexity and Vocabulary Composition

      Interpretive facets for haptics are not as developed as for other modalities, either culturally or in research. There is a relative wealth of immediately reliable visual idioms, e.g., a graphical stop-sign icon. Instead, haptic designers typically need to devise custom vocabularies [MacLean 2008b]. These vary by application requirements, which dictate size and complexity of the required set as well as the context of use and the hardware that will deliver it.

      We can start with simple signals. Simple vocabularies are composed of just two or three haptic-meaning pairs; such binary and ternary sets, common in current mobile and wearable notification systems, are easy to learn and adopt. The binary case can indicate the on/off state of a parameter (e.g., a message has/has not arrived). A ternary vocabulary can distinguish three states (such as below/within/above a target zone, three levels of a volume being monitored, or three categories of notification types).
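As a concrete illustration, a ternary vocabulary amounts to a small lookup table from monitored states to distinct vibration patterns. The states, pulse timings, and pattern shapes below are illustrative assumptions, not designs from this chapter:

```python
# Hypothetical sketch: a ternary vocabulary for monitoring a target zone.
# Each pattern is a list of (on_ms, off_ms) pulse pairs an actuator would play.
TERNARY_VOCAB = {
    "below_target":  [(100, 100)],              # one short pulse
    "within_target": [(100, 100), (100, 100)],  # two short pulses
    "above_target":  [(400, 0)],                # one long pulse
}

def pattern_for(state: str) -> list:
    """Look up the haptic pattern paired with a monitored state."""
    return TERNARY_VOCAB[state]
```

A binary vocabulary is the same table with only two entries, e.g., message arrived vs. not.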

      Next, we have complex signals (a.k.a. icon design). More detailed encodings and vocabularies are possible when the hardware and context allow a larger set of distinct stimuli and the user can learn and process a larger mapping [MacLean 2008b]. One design approach is to map information elements to design and engineering parameters of the haptic sensation [Brewster and Brown 2004, Enriquez et al. 2006, Ternes and MacLean 2008]. For example, vibrotactile technologies allow control of frequency, amplitude, and waveform, plus temporal sequencing such as rhythm. In a vibrotactile message notification, amplitude can be mapped to urgency while rhythm encodes the sender group (family/friends vs. work). This approach has the hierarchical structure of a natural language (e.g., letters, words, sentences) [Enriquez et al. 2006].

      An alternative approach uses metaphors to design individual signals and sets of them in a haptic vocabulary. Here, the whole signal carries a meaning but its individual components may not encode information; instead, the design exploits users’ interpretive frameworks to yield more intuitive vocabularies. In [Chan et al. 2008], for example, a heartbeat sensation indicates that a remote connection is live.

      In both approaches, designers can use perceptual techniques such as Multi-Dimensional Scaling (MDS) or psychophysical studies to prune and refine an initial stimulus set for salience and recognizability [MacLean and Enriquez 2003, Lederman and Klatzky 2009], both prior to encoding and afterwards, to adjust the final set for maximum distinguishability [Chan et al. 2008].
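One simple form of such pruning can be sketched directly from a pairwise perceptual dissimilarity matrix (as might be gathered in an MDS or psychophysical study): repeatedly drop the more confusable member of the most confusable pair until all remaining pairs are distinguishable. The threshold and greedy strategy are illustrative assumptions, not a method from the cited studies:

```python
# Hypothetical sketch of perceptual pruning. dissim[i][j] is a user-reported
# pairwise dissimilarity in [0, 1] (larger = easier to tell apart).
def prune_stimuli(names, dissim, threshold):
    """Keep a subset of stimuli whose pairwise dissimilarity >= threshold."""
    keep = list(range(len(names)))
    while True:
        # Find the most confusable remaining pair.
        worst = min(
            ((dissim[i][j], i, j)
             for a, i in enumerate(keep) for j in keep[a + 1:]),
            default=None,
        )
        if worst is None or worst[0] >= threshold:
            break
        # Drop whichever member of the pair is more confusable overall.
        _, i, j = worst
        total = lambda k: sum(dissim[k][m] for m in keep if m != k)
        keep.remove(i if total(i) < total(j) else j)
    return [names[k] for k in keep]
```

In practice the surviving set would then be re-tested with users before meanings are assigned.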

      More complex vocabularies must be learned. Haptic-meaning pairs composed into vocabularies can utilize users’ interpretive frameworks or rely on learning through practice and memory. In the former case, the user should be able to recognize the associated meaning with no or minimal practice (e.g., an accelerating pulse sequence signifies “speed up”) whereas in the latter, sensations are arbitrarily assigned, necessitating prior exposure and memorization. In Figure 3.3, directions can be presented with two types of patterns, spatial and temporal: this particular spatial arrangement has a direct and recognizable perceptual association to the meaning, while the second pattern is arbitrary and will have to be learned.

      Past studies suggest that users can learn large abstract vocabularies (56 pairs) with practice, but the learning rate and performance can vary considerably across individuals [Swerdfeger 2009]. Users’ performance on large vocabularies with intuitive meaning assignment has yet to be fully studied, in part because of the difficulty of designing them.

      Figure 3.3 Intuitive vs. abstract encoding of a direction vocabulary for a vibrotactile seat. (a) The user easily interprets an intuitive encoding of direction with spatial parameters. (b) The user learns an abstract encoding of direction through temporal parameters.
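The contrast in Figure 3.3 can be made concrete as two encodings for the same direction vocabulary. The 3x3 actuator grid, pulse counts, and direction set below are illustrative assumptions about such a seat, not the actual design in the figure:

```python
# Spatial (intuitive) encoding: "left" fires the left column of a 3x3
# actuator grid, so the felt location directly matches the meaning.
SPATIAL = {
    "left":    [(0, 0), (1, 0), (2, 0)],  # (row, col) actuator indices
    "right":   [(0, 2), (1, 2), (2, 2)],
    "forward": [(0, 0), (0, 1), (0, 2)],  # front row
}

# Temporal (abstract) encoding: every direction plays on the same central
# actuator; only an arbitrary pulse count differs, so the mapping between
# rhythm and direction must be memorized through practice.
TEMPORAL = {
    "left":    {"actuator": (1, 1), "pulses": 1},
    "right":   {"actuator": (1, 1), "pulses": 2},
    "forward": {"actuator": (1, 1), "pulses": 3},
}
```

The spatial table needs no learning because its structure mirrors the meaning; the temporal table carries no such perceptual cue.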

      How do we translate knowledge of the physical and semantic haptic design space into compelling, coherent, and learnable haptic media, given the many and particular challenges it presents? The answer is a robust and flexible process. We draw upon a design thinking approach, often described as a funnel of idea candidates wherein the designer iteratively generates, refines, and narrows down multiple ideas in parallel until a final, well-developed, and trusted design concept remains (Figure 3.4).

      We look now at how generic forms of design thinking must be adapted when applied to haptics, and offer several different schemas for approaching haptic design (including those introduced earlier for the user’s view of haptic sensations; see Section 3.3.3). We close with an inventory of current haptic design tools and techniques.

      Figure 3.4 Incorporating haptics into the design process. We adapt the classic design funnel, where multiple initial ideas are iteratively developed, then add four design activities we have found useful when supporting design: browsing, sketching, refining and sharing. (Based on Buxton [2007])

       3.4.1 Design Process

      Understanding how best to support design and creativity has long been an important research topic. There is increasing evidence that designers’ environment and tools shape their output, especially their exposure to previous designs, flexible and precise tools, and collaborators [Herring et al. 2009, Schneider and MacLean 2014, Kulkarni et al. 2012, Dow et al. 2011]. We will look at how four design activities—browsing, sketching, refining, and sharing—look in the context of a principled haptic media design process; and where these activities differ from designing in other modalities.

      Alongside these activities, designers are constantly engaged in other tasks, such as devising effective haptic-meaning mappings (encoding, Section 3.3.3) and evaluating designs, often with rating scales or qualitative feedback, against the criteria described in Sections 3.1.5 and 3.2.4. These tasks sequence and bind design activities in specific ways that help accomplish a design goal.

       Browse

      No idea is born in isolation. Individual designers have a repertoire of previous experiences they have encountered while learning or through practice [Schön 1982]. In addition, design often starts with a “gather” step [Warr and O’Neill 2005]: viewing examples for inspiration and problem definition. Gathering often occurs explicitly at the start of a design process, and can reoccur during iteration. Tangible examples are corkboards and mood boards, which allow ideas to “bake in” to the background [Buxton 2007]. Software tools like d.tour [Ritchie et al. 2011] and Bricolage [Kumar et al. 2011] recommend websites for inspiration and can automatically generate new ideas by combining sites. Haptic designers, however, encounter modality-specific barriers when gathering, managing, and searching for examples.

      First, we require a way to represent sensations, singly and in collections. How do we store, view, and organize haptic experiences? Haptic technologies are often inherently interactive, part of a multimodal experience with visual and audio feedback, and can take a variety of physical forms depending on the output (and input) device. This last point is particularly bothersome should the user not have access to the original device type—imagine trying
