The Handbook of Speech Perception

Human Perception and Performance, 21, 1409–1426.

      74 Green, K. P., & Kuhl, P. K. (1989). The role of visual information in the processing of place and manner features in speech perception. Perception & Psychophysics, 45(1), 34–42.

      75 Green, K. P., & Kuhl, P. K. (1991). Integral processing of visual place and auditory voicing information during phonetic perception. Journal of Experimental Psychology: Human Perception and Performance, 17, 278–288.

      76 Green, K. P., Kuhl, P. K., Meltzoff, A. N., & Stevens, E. B. (1991). Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception & Psychophysics, 50(6), 524–536.

      77 Green, K. P., & Miller, J. L. (1985). On the role of visual rate information in phonetic perception. Perception & Psychophysics, 38(3), 269–276.

      78 Green, K. P., & Norrix, L. W. (2001). Perception of /r/ and /l/ in a stop cluster: Evidence of cross‐modal context effects. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 166–177.

      79 Hall, D. A., Fussell, C., & Summerfield, A. Q. (2005). Reading fluent speech from talking faces: Typical brain networks and individual differences. Journal of Cognitive Neuroscience, 17(6), 939–953.

      80 Han, Y., Goudbeek, M., Mos, M., & Swerts, M. (2018). Effects of modality and speaking style on Mandarin tone identification by non‐native listeners. Phonetica, 76(4), 263–286.

      81 Hardison, D. M. (2005). Variability in bimodal spoken language processing by native and nonnative speakers of English: A closer look at effects of speech style. Speech Communication, 46, 73–93.

      82 Hazan, V., Sennema, A., Iba, M., & Faulkner, A. (2005). Effect of audiovisual perceptual training on the perception and production of consonants by Japanese learners of English. Speech Communication, 47(3), 360–378.

      83 Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. (2009). Time course of early audiovisual interactions during speech and nonspeech central auditory processing: A magnetoencephalography study. Journal of Cognitive Neuroscience, 21(2), 259–274.

      84 Hessler, D., Jonkers, R., Stowe, L., & Bastiaanse, R. (2013). The whole is more than the sum of its parts: Audiovisual processing of phonemes investigated with ERPs. Brain and Language, 124, 213–224.

      85 Hickok, G. (2009). Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience, 21(7), 1229–1243.

      86 Irwin, J., & DiBlasi, L. (2017). Audiovisual speech perception: A new approach and implications for clinical populations. Language and Linguistics Compass, 11(3), 77–91.

      87 Irwin, J. R., Frost, S. J., Mencl, W. E., et al. (2011). Functional activation for imitation of seen and heard speech. Journal of Neurolinguistics, 24(6), 611–618.

      88 Ito, T., Tiede, M., & Ostry, D. J. (2009). Somatosensory function in speech perception. Proceedings of the National Academy of Sciences of the United States of America, 106(4), 1245–1248.

      89 Jerger, S., Damian, M. F., Tye‐Murray, N., & Abdi, H. (2014). Children use visual speech to compensate for non‐intact auditory speech. Journal of Experimental Child Psychology, 126, 295–312.

      90 Jerger, S., Damian, M. F., Tye‐Murray, N., & Abdi, H. (2017). Children perceive speech onsets by ear and eye. Journal of Child Language, 44(1), 185–215.

      91 Jesse, A., & Bartoli, M. (2018). Learning to recognize unfamiliar talkers: Listeners rapidly form representations of facial dynamic signatures. Cognition, 176, 195–208.

      92 Jiang, J., Alwan, A., Keating, P., et al. (2002). On the relationship between facial movements, tongue movements, and speech acoustics. EURASIP Journal on Applied Signal Processing, 11, 1174–1178.

      93 Jiang, J., Auer, E. T., Alwan, A., et al. (2007). Similarity structure in visual speech perception and optical phonetic signals. Perception & Psychophysics, 69(7), 1070–1083.

      94 Katz, W. F., & Mehta, S. (2015). Visual feedback of tongue movement for novel speech sound learning. Frontiers in Human Neuroscience, 9, 612.

      95 Kawase, T., Sakamoto, S., Hori, Y., et al. (2009). Bimodal audio–visual training enhances auditory adaptation process. NeuroReport, 20, 1231–1234.

      96 Kim, J., & Davis, C. (2004). Investigating the audio–visual speech detection advantage. Speech Communication, 44(1), 19–30.

      97 Lachs, L., & Pisoni, D. B. (2004). Specification of cross‐modal source information in isolated kinematic displays of speech. Journal of the Acoustical Society of America, 116(1), 507–518.

      98 Lander, K., & Davies, R. (2008). Does face familiarity influence speechreadability? Quarterly Journal of Experimental Psychology, 61, 961–967.

      99 Lidestam, B., Moradi, S., Pettersson, R., & Ricklefs, T. (2014). Audiovisual training is better than auditory‐only training for auditory‐only speech‐in‐noise identification. Journal of the Acoustical Society of America, 136(2), EL142–EL147.

      100 Ma, W. J., Zhou, X., Ross, L. A., et al. (2009). Lip‐reading aids word recognition most in moderate noise: A Bayesian explanation using high‐dimensional feature space. PLOS ONE, 4(3), 1–14.

      101 Magnotti, J. F., & Beauchamp, M. S. (2017). A causal inference model explains perception of the McGurk effect and other incongruent audiovisual speech. PLOS Computational Biology, 13(2), e1005229.

      102 Massaro, D. W. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ: Lawrence Erlbaum.

      103 Massaro, D. W., Cohen, M. M., Gesi, A., et al. (1993). Bimodal speech perception: An examination across languages. Journal of Phonetics, 21, 445–478.

      104 Massaro, D. W., & Ferguson, E. L. (1993). Cognitive style and perception: The relationship between category width and speech perception, categorization, and discrimination. American Journal of Psychology, 106(1), 25–49.

      105 Massaro, D. W., Thompson, L. A., Barron, B., & Laren, E. (1986). Developmental changes in visual and auditory contributions to speech perception. Journal of Experimental Child Psychology, 41, 93–113.

      106 Matchin, W., Groulx, K., & Hickok, G. (2014). Audiovisual speech integration does not rely on the motor system: Evidence from articulatory suppression, the McGurk effect, and fMRI. Journal of Cognitive Neuroscience, 26(3), 606–620.

      107 McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.

      108 Ménard, L., Cathiard, M. A., Troille, E., & Giroux, M. (2015). Effects of congenital visual deprivation on the auditory perception of anticipatory labial coarticulation. Folia Phoniatrica et Logopaedica, 67(2), 83–89.

      109 Ménard, L., Dupont, S., Baum, S. R., & Aubin, J. (2009). Production and perception of French vowels by congenitally blind adults and sighted adults. Journal of the Acoustical Society of America, 126(3), 1406–1414.

      110 Ménard, L., Leclerc, A., & Tiede, M. (2014). Articulatory and acoustic correlates of contrastive focus in congenitally blind adults and sighted adults. Journal of Speech, Language, and Hearing Research, 57(3), 793–804.

      111 Ménard, L., Toupin, C., Baum, S. R., et al. (2013). Acoustic and articulatory analysis of French vowels produced
