4 Perceptual Control of Speech
K. G. MUNHALL1, ANJA‐XIAOXING CUI2, ELLEN O’DONOGHUE3, STEVEN LAMONTAGNE1, AND DAVID LUTES1
1 Queen’s University, Canada
2 University of British Columbia, Canada
3 University of Iowa, United States
There is broad agreement that the American socialite Florence Foster Jenkins was a terrible singer. Her voice was frequently off‐key and her vocal range did not match the pieces she performed. The mystery is how she could not have known this. Many – a view reflected in her depiction in the eponymous film directed by Stephen Frears – think it likely that she was genuinely unaware of how poorly she sang. The American mezzo‐soprano Marilyn Horne offered this explanation: “I would say that she maybe didn’t know. First of all, we can’t hear ourselves as others hear us. We have to go by a series of sensations. We have to feel where it is” (Huizenga, 2016). This story about Jenkins raises many of the key questions addressed in this chapter on the perceptual control of speech. Like singing, speech is governed by a control system that requires sensory information about the effects of its actions, and the major source of this sensory feedback is the auditory system. However, the speech we hear ourselves produce is not what others hear, and yet we are able to control our speech motor system so as to produce what others need or expect to hear. For both speech and singing, much remains unknown about the auditory‐motor control system that accomplishes this. What role does hearing your own voice play in error detection and correction? How does this auditory feedback processing differ from how others hear you? What role does hearing your voice play in learning to speak?
Human spoken language has traditionally been studied by two separate communities (Meyer, Huettig, &amp; Levelt, 2016): those who study the perception of speech signals produced by others – including the majority of contributors to this volume – and those who study the production of the speech signal itself. It is the latter that is the focus of this chapter. More specifically, the chapter focuses on the processing of the rich sensory input that accompanies talking, particularly hearing your own voice. As Marilyn Horne suggests, perceiving this auditory feedback is not the same as hearing others. Airborne speech sound certainly arrives at the speaker’s ear as it does at the ears of others, but for the speaker it is mixed with sound transmitted through the body (e.g. Békésy, 1949). A second difference between hearing yourself and hearing others is neural rather than physical. The generation of action in speech and other movements is accompanied by information about the motor commands, which is transmitted from the motor system to other parts of the brain that might need to know about the movement. One consequence of this distribution of copies of motor commands is that the sensory processing of the effects of a movement differs from the processing of externally generated sensory information (see Bridgeman, 2007,