The Handbook of Speech Perception

and he coauthored the textbook Auditory Neuroscience.

      Diana Van Lancker Sidtis (formerly Van Lancker) is Professor Emeritus of Communicative Sciences and Disorders at New York University, where she served as Chair from 1999 to 2002; Associate Director of the Brain and Behavior Laboratory at the Nathan Kline Institute, Orangeburg, NY; and a certified and licensed speech‐language pathologist (from Cal State LA). Her education includes an MA from the University of Chicago, a PhD from Brown University, and an NIH postdoctoral fellowship at Northwestern University. Dr. Sidtis continues to mentor students and conduct research in speech science, voice studies, and neurolinguistics. She is the author of over 100 scientific papers and review chapters, and coauthor, with Jody Kreiman, of Foundations of Voice Studies (Wiley‐Blackwell). Her second book, Foundations of Familiar Language, is scheduled to appear in 2021.

      Matthias J. Sjerps received his Ph.D. in Cognitive Psychology from the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands. He has held postdoctoral positions at the Max Planck Institute, Radboud University Nijmegen, and the University of California, Berkeley. His research centers on the perception of speech sounds, with a specific focus on how listeners resolve variability in speech sounds. His work has been supported by grants from the European Commission (a Marie Curie grant) and the Max Planck Gesellschaft. Recent publications of this work have appeared in Nature Communications, Journal of Phonetics, and Journal of Experimental Psychology: Human Perception and Performance. Since 2019 he has worked as a researcher for the Dutch Inspectorate of Education, focusing on methods of risk assessment for schools and school boards.

      Rajka Smiljanic is Professor of Linguistics and Director of the UT Sound Lab at the University of Texas at Austin. She received her Ph.D. from the Linguistics Department at the University of Illinois Urbana‐Champaign, after which she worked as a Research Associate in the Linguistics Department at Northwestern University. Her work concentrates on experimental phonetics, cross‐language and second‐language speech production and perception, clear speech, and intelligibility variation. Her recent work has appeared in the Journal of the Acoustical Society of America, Journal of Speech, Language, and Hearing Research, and Journal of Phonetics. She was elected Fellow of the Acoustical Society of America in 2018 and currently serves as Chair of the Speech Communication Technical Committee.

      Mitchell S. Sommers is Professor of Psychological and Brain Sciences at Washington University in St. Louis. He received his PhD in Psychology from the University of Michigan and was a postdoctoral fellow at Indiana University. His work focuses on changes in hearing and speech perception in older adults and in individuals with dementia of the Alzheimer’s type. His work has been published in Ear & Hearing, Journal of the Acoustical Society of America, and Journal of Memory and Language, among others. He received a career development award from the Brookdale Foundation, and his work has been supported by NIH, NSF, and the Pfeifer Foundation.

      Christina Y. Tzeng is Assistant Professor of Psychology at San José State University. She received her Ph.D. in Psychology from Emory University in 2016. Her research explores the cognitive mechanisms that underlie perceptual learning of variation in spoken language and has been supported by the American Psychological Association. She has published her research in journals such as Cognitive Science, Journal of Experimental Psychology: Human Perception and Performance, and Psychonomic Bulletin & Review.

      Michael S. Vitevitch is Professor of Psychology and Director of the Spoken Language Laboratory at the University of Kansas. He received his Ph.D. in Cognitive Psychology from the University at Buffalo in 1997, and was an NIH postdoctoral trainee at Indiana University before taking an academic position at the University of Kansas in 2001. His research uses speech errors, auditory illusions, and the mathematical tools of network science to examine the processes and representations that are involved in the perception and production of spoken language. His work has been supported by grants from the National Institutes of Health, and has been published in psychology journals such as Journal of Experimental Psychology: General, Cognitive Psychology, and Psychological Science, as well as in journals in other disciplines, such as Journal of Speech, Language, and Hearing Research and Entropy.

      Seung Yun Yang, Ph.D., CCC‐SLP, is an Assistant Professor in the Department of Communication Arts, Sciences, and Disorders. She is also a member of the Brain and Behavior Laboratory at the Nathan Kline Institute for Psychiatric Research in Orangeburg, New York. She received her doctorate from the Department of Communicative Sciences and Disorders at New York University. Her research focuses primarily on understanding the neural bases of nonliteral language and on understanding how prosody is conveyed and understood in the context of spoken language. Her research has been published in peer‐reviewed journals such as Journal of Speech, Language, and Hearing Research and Clinical Linguistics & Phonetics.

      Romi Zäske is a researcher at the University Hospital Jena and the Friedrich Schiller University of Jena, Germany. She received her Ph.D. from the Friedrich Schiller University of Jena in 2010, and has conducted research projects at Glasgow University, UK, and at the University of Bern, Switzerland. Her research centers on the cognitive and neuronal mechanisms subserving human voice perception and memory, including individual differences, and has been supported by grants from the Deutsche Forschungsgemeinschaft (DFG). Her recent work has appeared in Royal Society Open Science, Behavior Research Methods, Attention, Perception, & Psychophysics, Cortex, and Journal of Neuroscience.

      Two remarkable developments have taken hold since the publication of the first edition of The Handbook of Speech Perception in 2006. The first is directly connected to the study of speech perception and stands as a testament to the maturity and vitality of this relatively new field of research. The second, though removed from the study of speech perception, provides a timely pointer to the central theme of this book. Both of these developments loom so large that they simply cannot go without notice as I write this preface in the last quarter of 2020. They also help us to see how the complex landscape of speech perception research intersects with some of the most challenging and exciting scientific frontiers of our time.

      The first of these developments is the appearance of virtual assistants such as Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana. While it is a well‐worn cliché to mark time by technological developments, the rapid adoption of these speech technologies over the past decade is hard to ignore when thinking about speech communication. The domain of speech perception now includes machines as well as humans, as both talkers and listeners. What exactly does machine speech recognition have to do with the body of research presented in the chapters of this handbook, all of which address human speech perception? These speaker‐hearer machines certainly do not perceive speech in a human‐like way: Siri, Alexa, and Cortana do not sense speech as human ears and eyes do, their machine learning algorithms do not result in neurocognitive representations of linguistic properties, and they are not participants in the relationships and social meanings encoded in the indexical properties of speech. In his preface to the first edition of this handbook, Michael Studdert‐Kennedy noted that “alphabetic writing and reading have no independent biological base; they are, at least in origin, parasitic on spoken language.” Studdert‐Kennedy went on to suggest that “speech production and perception, writing and reading, form an intricate biocultural nexus” (my italics). With the invention of virtual assistants, spoken language once again participates in a symbiotic relationship with a new medium of verbal communication. Within this complex and evolving ecology of spoken–written–digital language, the study of human speech perception continues to reveal, in increasing detail, the contours of this biocultural nexus. Immersion into
