The Handbook of Multimodal-Multisensor Interfaces, Volume 1. Sharon Oviatt

a more in-depth collection of handbook chapters on language and dialogue processing (e.g., 2 weeks) and (2) conducting the hands-on project (e.g., 4 weeks).

      For more tailored versions of a course on multimodal-multisensor interfaces, another approach would be to have students read the handbook chapters in relevant sections, and then follow up with more targeted and in-depth technical papers. For example, a course intended for a cognitive science audience might start by reading The Paradigm Shift to Multimodality in Contemporary Interfaces, followed by assigning chapters from the handbook sections on: (1) theory, user modeling, and common modality combinations; (2) multimodal processing of social and emotional information; and (3) multimodal processing of cognition and mental health status. Afterward, the course could teach students different computational and statistical analysis techniques related to these chapters, ideally through demonstration. Students then might be asked to conduct a hands-on project in which they apply one or more analysis methods to multimodal data to build user models or predict mental states. As a second example, a course intended for a computer science audience might also start by reading The Paradigm Shift to Multimodality in Contemporary Interfaces, followed by assigning chapters on: (1) prototyping and software tools; (2) multimodal signal processing and architectures; and (3) language and dialogue processing. Afterward, students might engage in a hands-on project in which they design, build, and evaluate the performance of a multimodal system.

      In all of these teaching scenarios, we anticipate that professors will find this handbook to be a particularly comprehensive and valuable current resource for teaching about multimodal-multisensor interfaces.

       Acknowledgments

      In the present age, reviewers are one of the most precious commodities on earth. First and foremost, we’d like to thank our dedicated expert reviewers, who provided insightful comments on the chapters and their revisions, sometimes on short notice. This select group included Antonis Argyros (University of Crete, Greece), Vassilis Athitsos (University of Texas at Arlington, USA), Randall Davis (MIT, USA), Anthony Jameson (DFKI, Germany), Michael Johnston (Interactions Corp., USA), Elsa Andrea Kirchner (DFKI, Germany), Stefan Kopp (Bielefeld University, Germany), Marieke Longcamp (Laboratoire de Neurosciences Cognitives, France), Diane Pawluk (Virginia Commonwealth University, USA), Hesam Sagha (University of Passau, Germany), Gabriel Skantze (KTH Royal Institute of Technology, Sweden), and the handbook’s main editors.

      We’d also like to thank the handbook’s eminent advisory board, 13 people who provided valuable guidance throughout the project, including suggestions for chapter topics, assistance with expert reviewing, participation on the panel of experts in our challenge topic discussions, and other valuable advice. Advisory board members included Samy Bengio (Google, USA), James Crowley (INRIA, France), Marc Ernst (Bielefeld University, Germany), Anthony Jameson (DFKI, Germany), Stefan Kopp (Bielefeld University, Germany), András Lőrincz (ELTE, Hungary), Kenji Mase (Nagoya University, Japan), Fabio Pianesi (FBK, Italy), Steve Renals (University of Edinburgh, UK), Arun Ross (Michigan State University, USA), David Traum (USC, USA), Wolfgang Wahlster (DFKI, Germany), and Alex Waibel (CMU, USA).

      We all know that publishing is a rapidly changing field, and in many cases authors and editors no longer receive the generous support they once did. We’d like to warmly thank Diane Cerra, our Morgan & Claypool publications manager, for her amazing skillfulness, flexibility, and delightful good nature throughout all stages of this project. It’s hard to imagine having a more experienced publications advisor and friend, and for a large project like this one her support was invaluable. Thanks also to Mike Morgan, President of Morgan & Claypool, for his support on all aspects of this project. Finally, thanks to Tamer Özsu and Michel Beaudouin-Lafon of ACM Books for their advice and support.

      Many colleagues around the world graciously provided assistance in large and small ways—content insights, copies of graphics, critical references, and other valuable information used to document and illustrate this book. Thanks to all who offered their assistance, which greatly enriched this multi-volume handbook. For financial and professional support, we’d like to thank DFKI in Germany and Incaa Designs, an independent 501(c)(3) nonprofit organization in the US. In addition, Björn Schuller would like to acknowledge support from the European Horizon 2020 Research & Innovation Action ARIA-VALUSPA (agreement no. 645378).

       Figure Credits

      Figure 1.1 From: S. L. Oviatt, R. Lunsford, and R. Coulston. 2005. Individual differences in multimodal integration patterns: What are they and why do they exist? In Proc. of the Conference on Human Factors in Computing Systems (CHI ’05), CHI Letters, pp. 241–249. Copyright © 2005 ACM. Used with permission.

      Figure 1.2 From: S. Oviatt and P. Cohen. 2015. The Paradigm Shift to Multimodality in Contemporary Computer Interfaces. Morgan & Claypool Synthesis Series, San Rafael, CA. Copyright © 2015 Morgan & Claypool Publishers. Used with permission.

      Figure 1.3 From: M. Ernst and H. Bülthoff. 2004. Merging the senses into a robust percept. Trends in Cognitive Sciences, 8(4):162–169. Copyright © 2004 Elsevier Ltd. Used with permission.

      Figure 1.4 (left) From: S. Oviatt, A. Cohen, A. Miller, K. Hodge, and A. Mann. 2012b. The impact of interface affordances on human ideation, problem solving and inferential reasoning. ACM Transactions on Computer-Human Interaction. Copyright © 2012 ACM. Used with permission.

      Figure 2.1 From: B. E. Stein, T. R. Stanford, and B. A. Rowland. 2014. Development of multisensory integration from the perspective of the individual neuron. Nature Reviews Neuroscience, 15(8):520–535. Copyright © 2014 Macmillan Publishers Ltd. Used with permission.

      Figure 2.2 From: K. H. James, G. K. Humphrey, and M. A. Goodale. 2001. Manipulating and recognizing virtual objects: Where the action is. Canadian Journal of Experimental Psychology, 55(2):111–120. Copyright © 2001 Canadian Psychological Association. Used with permission.

      Figure 2.3 From: The Richard D. Walk papers, courtesy Drs. Nicholas and Dorothy Cummings Center for the History of Psychology, The University of Akron.

      Figure 2.4 From: A. F. Pereira, K. H. James, S. S. Jones, and L. B. Smith. 2010. Early biases and developmental changes in self-generated object views. Journal of Vision, 10(11):22:1–13. Copyright © 2010 Association for Research in Vision and Ophthalmology. Used with permission.

      Figure 2.6 From: A. J. Butler and K. H. James. 2013. Active learning of novel sound-producing objects: motor reactivation and enhancement of visuo-motor connectivity. Journal of Cognitive Neuroscience, 25(2):203–218. Copyright © 2013 Massachusetts Institute of Technology.

      Figure 2.7 From: A. J. Butler and K. H. James. 2013. Active learning of novel sound-producing objects: motor reactivation and enhancement of visuo-motor connectivity. Journal of Cognitive Neuroscience, 25(2):203–218. Copyright © 2013 Massachusetts Institute of Technology.

      Figure 2.8 From: K. H. James and I. Gauthier. 2006. Letter processing automatically recruits a sensory-motor brain network. Neuropsychologia, 44(14):2937–2949. Copyright © 2006 Elsevier Ltd. Used with permission.

      Figure 2.9 From: K. H. James and T. Atwood. 2009. The role of sensorimotor learning in the perception of letter-like forms: Tracking the causes of neural specialization for letters. Cognitive Neuropsychology, 26(1):91–110. Copyright © 2009 Taylor & Francis. Used with permission.

      Figure 2.10 From: K. H. James and S. N. Swain. 2011. Only self-generated actions create sensori-motor systems in the developing brain. Developmental Science, 14(4):673–687.
