The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt. ACM Books.

      Figure 8.12 Courtesy of ANSA. Copyright © 2016 ANSA.

      Figure 8.12 (video) Courtesy of Robot-Era Project, The BioRobotics Institute, Scuola Superiore Sant’Anna, Italy.

      Figure 8.13 From: R. Shilkrot, J. Huber, W. Meng Ee, P. Maes, and S. C. Nanayakkara. 2015. FingerReader: a wearable device to explore printed text on the go. In ACM Transactions on Computer-Human Interaction, pp. 2363–2372. Copyright © 2015 ACM. Used with permission.

      Figure 8.14 Adapted from: B. Görer, A. A. Salah, and H. L. Akin. 2016. An autonomous robotic exercise tutor for elderly people. Autonomous Robots.

      Figure 8.14 (video) Courtesy of Binnur Görer.

      Figure 8.15 From: B. Görer, A. A. Salah, and H. L. Akin. 2016. An autonomous robotic exercise tutor for elderly people. Autonomous Robots. Copyright © 2016 Springer Science+Business Media New York. Used with permission.

      Figure 9.3 From: P. Qvarfordt and S. Zhai. 2009. Gaze-aided human-computer and human-human dialogue. In B. Whitworth and A. de Moor, eds., Handbook of Research on Socio-Technical Design and Social Networking Systems, chapter 35, pp. 529–543. Copyright © 2009 IGI Global. Reprinted by permission of the copyright holder.

      Figure 9.4 From: P. Qvarfordt and S. Zhai. 2009. Gaze-aided human-computer and human-human dialogue. In B. Whitworth and A. de Moor, eds., Handbook of Research on Socio-Technical Design and Social Networking Systems, chapter 35, pp. 529–543. Copyright © 2009 IGI Global. Reprinted by permission of the copyright holder.

      Figure 9.7 From: P. Qvarfordt and S. Zhai. 2005. Conversing with the user based on eye-gaze patterns. In Proc. of the SIGCHI Conf. on Human Factors in Computing Systems (CHI ’05), pp. 221–230. Copyright © 2005 ACM. Used with permission.

      Figure 10.1 From: S. Oviatt, R. Lunsford, and R. Coulston. 2005. Individual differences in multimodal integration patterns: What are they and why do they exist? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 241–249. Copyright © 2005 ACM. Used with permission.

      Figure 10.5 (left) From: P. R. Cohen, M. Johnston, D. R. McGee, S. L. Oviatt, J. Pittman, I. Smith, L. Chen, and J. Clow. 1997. QuickSet: Multimodal interaction for distributed applications. In Proceedings of the Fifth ACM International Conference on Multimedia, pp. 31–40. Copyright © 1997 ACM. Used with permission.

      Figure 10.6 Video courtesy of Phil Cohen. Used with permission.

      Figure 10.7 From: P. R. Cohen, D. R. McGee, and J. Clow. 2000. The efficiency of multimodal interaction for a map-based task. In Proceedings of the Sixth Conference on Applied Natural Language Processing, Association for Computational Linguistics, pp. 331–338. Copyright © 2000 Association for Computational Linguistics. Used with permission.

      Figure 10.7 (video) Video courtesy of Phil Cohen. Used with permission.

      Figure 10.8 Video courtesy of Phil Cohen. Used with permission.

      Figure 10.9 From: D. R. McGee, P. R. Cohen, M. Wesson, and S. Horman. 2002. Comparing paper and tangible, multimodal tools. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 407–414. Copyright © 2002 ACM. Used with permission.

      Figure 10.10 Video courtesy of Phil Cohen. Used with permission.

      Figure 10.11 From: P. Ehlen and M. Johnston. 2012. Multimodal interaction patterns in mobile local search. In Proceedings of ACM Conference on Intelligent User Interfaces, pp. 21–24. Copyright © 2012 ACM. Used with permission.

      Figure 10.12 Based on: M. Johnston, J. Chen, P. Ehlen, H. Jung, J. Lieske, A. Reddy, E. Selfridge, S. Stoyanchev, B. Vasilieff, and J. Wilpon. 2014. MVA: The Multimodal Virtual Assistant. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Association for Computational Linguistics, pp. 257–259.

      Figure 10.13 Based on: W. Wahlster. 2002. SmartKom: Fusion and fission of speech, gestures, and facial expressions. In Proc. of the 1st International Workshop on Man-Machine Symbiotic Systems, Kyoto, Japan, pp. 213–225. Used with permission.

      Figure 10.14 Courtesy of openstream.com. Used with permission.

      Figure 10.15 (right) From: P. R. Cohen, E. C. Kaiser, M. C. Buchanan, S. Lind, M. J. Corrigan, and R. M. Wesson. 2015. Sketch-thru-plan: a multimodal interface for command and control. Communications of the ACM, 58(4):56–65. Copyright © 2015 ACM. Used with permission.

      Figure 10.16 From: P. R. Cohen, E. C. Kaiser, M. C. Buchanan, S. Lind, M. J. Corrigan, and R. M. Wesson. 2015. Sketch-thru-plan: a multimodal interface for command and control. Communications of the ACM, 58(4):56–65. Copyright © 2015 ACM. Used with permission.

      Figure 11.1 From: R. A. Bolt. 1980. Put-that-there: Voice and gesture at the graphics interface. ACM SIGGRAPH Computer Graphics, 14(3): 262–270. Copyright © 1980 ACM. Used with permission.

      Figure 11.1 (video) Courtesy of Chris Schmandt, MIT Media Lab Speech Interface group.

      Figure 11.3 From: P. Maragos, V. Pitsikalis, A. Katsamanis, G. Pavlakos, and S. Theodorakis. 2016. On shape recognition and language. In M. Breuss, A. Bruckstein, P. Maragos, and S. Wuhrer, eds., Perspectives in Shape Analysis. Springer. Copyright © 2016 Springer International Publishing Switzerland. Used with permission.

      Figure 11.4a (video) Courtesy of Botsquare.

      Figure 11.4b (video) Courtesy of Leap Motion.

      Figure 11.5 Based on: N. Krahnstoever, S. Kettebekov, M. Yeasin, and R. Sharma. 2002. A real-time framework for natural multimodal interaction with large screen displays. In Proceedings of the International Conference on Multimodal Interfaces, p. 349.

      Figure 11.6 Based on: L. Pigou, S. Dieleman, P.-J. Kindermans, and B. Schrauwen. 2015. Sign language recognition using convolutional neural networks. In L. Agapito, M. M. Bronstein, and C. Rother, eds., Computer Vision—ECCV 2014 Workshops, volume LNCS 8925, pp. 572–578.

      Figure 11.7 Based on: N. Neverova, C. Wolf, G. W. Taylor, and F. Nebout. 2015. Multi-scale deep learning for gesture detection and localization. In L. Agapito, M. M. Bronstein, and C. Rother, editors, Computer Vision—ECCV 2014 Workshops, volume LNCS 8925, pp. 474–490.

      Figure 11.8 Based on: D. Yu and L. Deng. 2011. Deep learning and its applications to signal and information processing [exploratory DSP]. IEEE Signal Processing Magazine, 28(1): 145–154.

      Figure 11.11 From: G. Pavlakos, S. Theodorakis, V. Pitsikalis, A. Katsamanis, and P. Maragos. 2014. Kinect-based multimodal gesture recognition using a two-pass fusion scheme. In Proceedings of the International Conference on Image Processing, pp. 1495–1499. Copyright © 2014 IEEE. Used with permission.

      Figure 11.11 (video) Courtesy of Stavros Theodorakis. Used with permission.
