The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt

Publisher: ACM Books


      Figure 6.2 Based on: Wagner, P., Malisz, Z., & Kopp, S. (2014). Gesture and Speech in Interaction: An Overview. Speech Communication, 57(Special Iss.), 209–232.

      Figure 6.3 Based on: Kopp, S., Bergmann, K., & Kahl, S. (2013). A spreading-activation model of the semantic coordination of speech and gesture. Proceedings of the 35th Annual Meeting of the Cognitive Science Society (pp. 823–828). Austin, TX, USA: Cognitive Science Society.

      Figure 6.4 Based on: Kopp, S., Bergmann, K., & Kahl, S. (2013). A spreading-activation model of the semantic coordination of speech and gesture. Proceedings of the 35th Annual Meeting of the Cognitive Science Society (pp. 823–828). Austin, TX, USA: Cognitive Science Society.

      Figure 6.7 From: Bergmann, K., Kahl, S., & Kopp, S. (2013). Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Proceedings of the 13th International Conference on Intelligent Virtual Agents (pp. 203–216). Copyright © 2013, Springer-Verlag Berlin Heidelberg. Used with permission.

      Figure 6.8 From: Bergmann, K., Kahl, S., & Kopp, S. (2013). Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Proceedings of the 13th International Conference on Intelligent Virtual Agents (pp. 203–216). Copyright © 2013, Springer-Verlag Berlin Heidelberg. Used with permission.

      Figure 7.3 (video) From: Wilson, G., Davidson, G., & Brewster, S. (2015). In the Heat of the Moment: Subjective Interpretations of Thermal Feedback During Interaction. Proceedings CHI ’15, 2063–2072. Copyright © 2015 ACM. Used with permission.

      Figure 7.4 From: David K. McGookin and Stephen A. Brewster. 2006. SoundBar: exploiting multiple views in multimodal graph browsing. In Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles (NordiCHI ’06), Anders Mørch, Konrad Morgan, Tone Bratteteig, Gautam Ghosh, and Dag Svanaes (Eds.), 145–154. Copyright © 2006 ACM. Used with permission.

      Figure 7.5 (video) From: Ross McLachlan, Daniel Boland, and Stephen Brewster. 2014. Transient and transitional states: pressure as an auxiliary input modality for bimanual interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). Copyright © 2014 ACM. Used with permission.

      Figure 7.6 (video) Courtesy of David McGookin, Euan Robertson, and Stephen Brewster. Used with permission.

      Figure 7.7 (left) Video courtesy of David McGookin and Stephen Brewster. Used with permission.

      Figure 7.7 (right) Video courtesy of David McGookin and Stephen Brewster. Used with permission.

      Figure 7.8 (left) From: Plimmer, B., Reid, P., Blagojevic, R., Crossan, A., & Brewster, S. (2011). Signing on the tactile line. ACM Transactions on Computer-Human Interaction, 18(3), 1–29. Copyright © 2011 ACM. Used with permission.

      Figure 7.8 (right) From: Yu, W., & Brewster, S. (2002). Comparing two haptic interfaces for multimodal graph rendering. Proceedings HAPTICS ’02, 3–9. Copyright © 2002 IEEE. Used with permission.

      Figure 7.9 (video) From: Beryl Plimmer, Peter Reid, Rachel Blagojevic, Andrew Crossan, and Stephen Brewster. 2011. Signing on the tactile line: A multimodal system for teaching handwriting to blind children. ACM Transactions on Computer-Human Interaction 18, 3, Article 17 (August 2011), 29 pages. Copyright © 2011 ACM. Used with permission.

      Figure 7.10 (right) From: Euan Freeman, Stephen Brewster, and Vuokko Lantz. 2014. Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions. In Proceedings of the 16th International Conference on Multimodal Interaction (ICMI ’14), 419–426. Copyright © 2014 ACM. Used with permission.

      Figure 7.11 (left) Video courtesy of Euan Freeman. Used with permission.

      Figure 7.11 (right) Video courtesy of Euan Freeman. Used with permission.

      Figure 7.12 (video) From: Ioannis Politis, Stephen A. Brewster, and Frank Pollick. 2014. Evaluating multimodal driver displays under varying situational urgency. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). New York, NY, USA, 4067–4076. Copyright © 2014 ACM. Used with permission.

      Figure 8.1 Based on: A. H. Maslow. 1954. Motivation and personality. Harper and Row.

      Figure 8.2 (left) From: D. McColl, W.-Y. G. Louie, and G. Nejat. 2013. Brian 2.1: A socially assistive robot for the elderly and cognitively impaired. IEEE Robotics & Automation Magazine, 20(1): 74–83. Copyright © 2013 IEEE. Used with permission.

      Figure 8.2 (right) From: P. Bovbel and G. Nejat, 2014. Casper: An Assistive Kitchen Robot to Promote Aging in Place. Journal of Medical Devices, 8(3), p.030945. Copyright © 2014 ASME. Used with permission.

      Figure 8.2 (video) Courtesy of the Autonomous Systems and Biomechtronics Laboratory (ASBLab) at the University of Toronto. Used with permission.

      Figure 8.3 From: M. Nilsson, J. Ingvast, J. Wikander, and H. von Holst. 2012. The soft extra muscle system for improving the grasping capability in neurological rehabilitation. In Proceedings of the 2012 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES), pp. 412–417. Copyright © 2012 IEEE. Used with permission.

      Figure 8.4 From: T. Visser, M. Vastenburg, and D. Keyson. 2010. Snowglobe: the development of a prototype awareness system for longitudinal field studies. In Proc. 8th ACM Conference on Designing Interactive Systems, pp. 426–429. Copyright © 2010 ACM. Used with permission.

      Figure 8.5 (right) Video courtesy of Cosmin Munteanu & Albert Ali Salah. Used with permission.

      Figure 8.6 From: C. G. Pires, F. Pinto, V. D. Teixeira, J. Freitas, and M. S. Dias. 2012. Living Home Center: a personal assistant with multimodal interaction for elderly and mobility impaired e-inclusion. In Computational Processing of the Portuguese Language: 10th International Conference, PROPOR 2012, Coimbra, Portugal, April 17–20, 2012, Proceedings. Copyright © 2012 Springer-Verlag Berlin Heidelberg. Used with permission.

      Figure 8.7 From: Morency, L.-P., Stratou, G., DeVault, D., Hartholt, A., Lhommet, M., Lucas, G. M., Morbini, F., Georgila, K., Scherer, S., Gratch, J., and Marsella, S. 2015. SimSensei Demonstration: A Perceptive Virtual Human Interviewer for Healthcare Applications. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 4307–4308). Copyright © 2015 AAAI Press. Used with permission.

      Figure 8.7 (video) Courtesy of USC Institute for Creative Technologies. Principal Investigators: Albert (Skip) Rizzo and Louis-Philippe Morency.

      Figure 8.8 From: F. Ferreira, N. Almeida, A. F. Rosa, A. Oliveira, J. Casimiro, S. Silva, and A. Teixeira. 2014. Elderly centered design for interaction–the case of the s4s medication assistant. Procedia Computer Science, 27: 398–408. Copyright © 2014 Elsevier. Used with permission.

      Figure 8.9 Courtesy of Jocelyn Ford.

      Figure 8.11
