The Handbook of Speech Perception

              airplane   boat   celery   strawberry
airplane          1      0.94    0.44       0.44
boat              –       1      0.41       0.41
celery            –       –       1         0.67
strawberry        –       –       –          1
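The matrix above reads as pairwise similarities between word representations: the two vehicles (airplane, boat) and the two foods (celery, strawberry) are most similar to each other, with low similarity across categories. As a minimal sketch of how such a table can be produced under the vector‐representation view discussed below, the following Python snippet computes a cosine‐similarity matrix over word embeddings. The 300‐dimensional random vectors here are placeholders standing in for real distributional embeddings (e.g. word2vec or GloVe), so the printed numbers will not reproduce the values above:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 for identical directions."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder 300-dimensional embeddings; real distributional vectors
# (e.g. from word2vec or GloVe) would reproduce the category structure
# in the table above, whereas these random stand-ins will not.
rng = np.random.default_rng(0)
words = ["airplane", "boat", "celery", "strawberry"]
embeddings = {w: rng.standard_normal(300) for w in words}

# Print the upper-triangular similarity matrix, as in the table above.
print(" " * 12 + "".join(f"{w:>12}" for w in words))
for i, w1 in enumerate(words):
    cells = [
        f"{cosine_similarity(embeddings[w1], embeddings[w2]):.2f}" if j >= i else "-"
        for j, w2 in enumerate(words)
    ]
    print(f"{w1:<12}" + "".join(f"{c:>12}" for c in cells))
```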

      To summarize, we have only begun to scratch the surface of how linguistic meaning is represented in the brain. But figuring out what the brain is doing when it is interpreting speech is so important, and mysterious, that we have tried to illustrate a few recent innovations in enough detail that the reader may begin to imagine how to go further. Embodied meaning, vector representations, and encoding models are not the only ways to study semantics in the brain. They do, however, benefit from engaging with other areas of neuroscience, touching for example on the homunculus map in the somatosensory cortex (Penfield & Boldrey, 1937). It is less clear, at the moment, how to extend these results from lexical to compositional semantics. A more complete neural understanding of pragmatics will also be needed. Much work remains to be done. Because spoken language combines both sound and meaning, a full account of speech comprehension should explain how meaning is coded by the brain. We hope that readers will feel inspired to contribute the next exciting chapters in this endeavor.
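To make the encoding‐model idea concrete: voxelwise encoding models typically fit a regularized linear mapping from stimulus features (for speech, often the semantic vector of the word being heard) to each voxel’s measured response, and are judged by how well the fitted weights predict responses to held‐out stimuli. The following is a minimal sketch under assumed conditions, with simulated data standing in for real fMRI recordings; the array shapes, the ridge penalty, and the correlation metric are illustrative choices, not the exact pipeline of any study cited in this chapter:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Simulated stimulus features: one 300-dim semantic vector per time point
# (in a real experiment, the embedding of the word being heard, lagged to
# account for the slow hemodynamic response).
n_timepoints, n_features, n_voxels = 1000, 300, 50
X = rng.standard_normal((n_timepoints, n_features))

# Simulated voxel responses: a linear function of the features plus noise,
# standing in for BOLD signal from semantically selective cortex.
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 0.5 * rng.standard_normal((n_timepoints, n_voxels))

# Fit one ridge-regularized linear model per voxel on training data ...
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2,
                                                    random_state=0)
model = Ridge(alpha=10.0).fit(X_train, Y_train)

# ... then score it by correlating predicted and actual held-out responses.
Y_pred = model.predict(X_test)
r = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out prediction correlation: {np.median(r):.2f}")
```

In practice the ridge penalty would be chosen by cross‐validation and the feature matrix built from the actual stimulus transcript; a voxel whose held‐out responses are well predicted from semantic features is taken as evidence that it carries semantic information.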

      By the time they reach these meaning‐representing levels of the brain, the waves of neural activity racing up the auditory pathway will have passed through at least a dozen anatomical processing stations, each composed of anywhere from a few hundred thousand to hundreds of millions of neurons, and each richly and reciprocally interconnected, both internally and with the stations above and below it in the processing hierarchy. We hope readers will share our sense of awe when we consider that it takes a spoken word only a modest fraction of a second to travel through this entire stunningly intricate network and be transformed from sound wave to meaning.

      Remember that the picture painted here of a feed‐forward hierarchical network that transforms acoustics to phonetics to semantics is a highly simplified one. It is well grounded in scientific evidence, but it is necessarily a selective telling of the story as we understand it to date. Recent years have been a particularly productive time in auditory neuroscience, as insights from animal research, human brain imaging, data from human patients, ECoG studies, and artificial intelligence have begun to come together to provide the framework of understanding we have attempted to outline here. But many important details remain unknown, and, while we feel fairly confident that the insights presented here will stand the test of time, we must be aware that future work may not just complement and refine, but even overturn, some of the ideas we currently put forward as our best approximations to the truth. One thing we are absolutely certain of, though, is that studying how human brains speak to each other will remain a profoundly rewarding intellectual pursuit for many years to come.

