The Handbook of Speech Perception

      112 Miller, B. T., & D’Esposito, M. (2005). Searching for “the top” in top‐down control. Neuron, 48(4), 535–538.

      113 Miller, R. M., Sanchez, K., & Rosenblum, L. D. (2010). Alignment to visual speech information. Attention, Perception, & Psychophysics, 72(6), 1614–1625.

      114 Mitterer, H., & Reinisch, E. (2017). Visual speech influences speech perception immediately but not automatically. Attention, Perception, & Psychophysics, 79(2), 660–678.

      115 Moradi, S., Lidestam, B., Ng, E. H. N., et al. (2019). Perceptual doping: An audiovisual facilitation effect on auditory speech processing, from phonetic feature extraction to sentence identification in noise. Ear and Hearing, 40(2), 312–327.

      116 Munhall, K. G., Ten Hove, M. W., Brammer, M., & Paré, M. (2009). Audiovisual integration of speech in a bistable illusion. Current Biology, 19(9), 735–739.

      117 Munhall, K. G., & Vatikiotis‐Bateson, E. (2004). Spatial and temporal constraints on audiovisual speech perception. In G. A. Calvert, C. Spence, & B. E. Stein (Eds), The handbook of multisensory processes (pp. 177–188). Cambridge, MA: MIT Press.

      118 Münte, T. F., Stadler, J., Tempelmann, C., & Szycik, G. R. (2012). Examining the McGurk illusion using high‐field 7 Tesla functional MRI. Frontiers in Human Neuroscience, 6, 95.

      119 Musacchia, G., Sams, M., Nicol, T., & Kraus, N. (2006). Seeing speech affects acoustic information processing in the human brainstem. Experimental Brain Research, 168(1–2), 1–10.

      120 Namasivayam, A. K., Wong, W. Y. S., Sharma, D., & van Lieshout, P. (2015). Visual speech gestures modulate efferent auditory system. Journal of Integrative Neuroscience, 14(1), 73–83.

      121 Nath, A. R., & Beauchamp, M. S. (2012). A neural basis for interindividual differences in the McGurk effect: A multisensory speech illusion. NeuroImage, 59(1), 781–787.

      122 Navarra, J., & Soto‐Faraco, S. (2007). Hearing lips in a second language: Visual articulatory information enables the perception of second language sounds. Psychological Research, 71, 4–12.

      123 Nishitani, N., & Hari, R. (2002). Viewing lip forms: Cortical dynamics. Neuron, 36(6), 1211–1220.

      124 Nygaard, L. C. (2005). The integration of linguistic and non‐linguistic properties of speech. In D. Pisoni & R. Remez (Eds), Handbook of speech perception (pp. 390–414). Oxford: Blackwell.

      125 Olson, I. R., Gatenby, J., & Gore, J. C. (2002). A comparison of bound and unbound audio–visual information processing in the human cerebral cortex. Cognitive Brain Research, 14, 129–138.

      126 Ostrand, R., Blumstein, S. E., Ferreira, V. S., & Morgan, J. L. (2016). What you see isn’t always what you get: Auditory word signals trump consciously perceived words in lexical access. Cognition, 151, 96–107.

      127 Palmer, T. D., & Ramsey, A. K. (2012). The function of consciousness in multisensory integration. Cognition, 125(3), 353–364.

      128 Papale, P., Chiesi, L., Rampinini, A. C., et al. (2016). When neuroscience “touches” architecture: From hapticity to a supramodal functioning of the human brain. Frontiers in Psychology, 7, 866.

      129 Pardo, J. S. (2006). On phonetic convergence during conversational interaction. Journal of the Acoustical Society of America, 119(4), 2382–2393.

      130 Pardo, J. S., Gibbons, R., Suppes, A., & Krauss, R. M. (2012). Phonetic convergence in college roommates. Journal of Phonetics, 40(1), 190–197.

      131 Pardo, J. S., Jordan, K., Mallari, R., et al. (2013). Phonetic convergence in shadowed speech: The relation between acoustic and perceptual measures. Journal of Memory and Language, 69(3), 183–195.

      132 Pardo, J. S., Urmanche, A., Wilman, S., & Wiener, J. (2017). Phonetic convergence across multiple measures and model talkers. Attention, Perception, & Psychophysics, 79(2), 637–659.

      133 Pascual‐Leone, A., & Hamilton, R. (2001). The metamodal organization of the brain. Progress in Brain Research, 134, 427–445.

      134 Paulesu, E., Perani, D., Blasi, V., et al. (2003). A functional‐anatomical model for lipreading. Journal of Neurophysiology, 90(3), 2005–2013.

      135 Pekkola, J., Ojanen, V., Autti, T., et al. (2005). Primary auditory cortex activation by visual speech: An fMRI study at 3 T. Neuroreport, 16(2), 125–128.

      136 Pilling, M., & Thomas, S. (2011). Audiovisual cues and perceptual learning of spectrally distorted speech. Language and Speech, 54(4), 487–497.

      137 Plass, J., Guzman‐Martinez, E., Ortega, L., et al. (2014). Lip reading without awareness. Psychological Science, 25(9), 1835–1837.

      138 Reich, L., Maidenbaum, S., & Amedi, A. (2012). The brain as a flexible task machine: Implications for visual rehabilitation using noninvasive vs. invasive approaches. Current Opinion in Neurobiology, 25(1), 86–95.

      139 Reisberg, D., McLean, J., & Goldfield, A. (1987). Easy to hear but hard to understand: A lip‐reading advantage with intact auditory stimuli. In B. Dodd & R. Campbell (Eds), Hearing by eye: The psychology of lip‐reading (pp. 97–113). Hillsdale, NJ: Lawrence Erlbaum.

      140 Remez, R. E., Beltrone, L. H., & Willimetz, A. A. (2017). Effects of intrinsic temporal distortion on the multimodal perceptual organization of speech. Paper presented at the 58th Annual Meeting of the Psychonomic Society, Vancouver, British Columbia, November.

      141 Remez, R. E., Fellowes, J. M., & Rubin, P. E. (1997). Talker identification based on phonetic information. Journal of Experimental Psychology: Human Perception and Performance, 23(3), 651–666.

      142 Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77.

      143 Riedel, P., Ragert, P., Schelinski, S., et al. (2015). Visual face‐movement sensitive cortex is relevant for auditory‐only speech recognition. Cortex, 68, 86–99.

      144 Rosen, S. M., Fourcin, A. J., & Moore, B. C. (1981). Voice pitch as an aid to lipreading. Nature, 291(5811), 150.

      145 Rosenblum, L. D. (2005). Primacy of multimodal speech perception. In D. Pisoni & R. Remez (Eds), Handbook of speech perception (pp. 51–78). Oxford: Blackwell.

      146 Rosenblum, L. D. (2008). Speech perception as a multimodal phenomenon. Current Directions in Psychological Science, 17(6), 405–409.

      147 Rosenblum, L. D. (2013). A confederacy of senses. Scientific American, 308, 72–75.

      148 Rosenblum, L. D. (2019). Audiovisual speech perception and the McGurk effect. In Oxford research encyclopedia of linguistics. https://oxfordre.com/linguistics/view/10.1093/acrefore/9780199384655.001.0001/acrefore‐9780199384655‐e‐420?rskey=L7JvON&result=1

      149 Rosenblum, L. D., Dias, J. W., & Dorsi, J. (2017). The supramodal brain: Implications for auditory perception. Journal of Cognitive Psychology, 29(1), 65–87.

      150 Rosenblum, L. D., Dorsi, J., & Dias, J. W. (2016). The impact and status of Carol Fowler’s supramodal theory of multisensory speech perception. Ecological Psychology, 28(4), 262–294.
