The Handbook of Speech Perception. Collective authorship


reflect noise at some level of the experiment or analysis, but it raises the intriguing possibility that the STG actually groups /j ɪ i ʉ v ð/ together, and thus does not strictly follow established phonetic conventions. Therefore, in addition to articulatory, acoustic, and auditory phonetics, studies such as this on the cortical response to speech may pave the way to innovative neural feature analyses. However, we would like to emphasize that these are early results in the field. The use of discrete segmental phonemes may, for example, be considered a useful first approximation to analyses using more complex, overlapping feature representations.

       Auditory phonetic representations in the sensorimotor cortex

      From the STG, we turn now to a second cortical area. The ventral sensorimotor cortex (vSMC) is better known for its role in speech production than in speech comprehension (Bouchard et al., 2013). This part of the cortex, near the ventral end of the SMC (see Figure 3.6), contains the primary motor and somatosensory areas, which send motor commands to and receive touch and proprioceptive information from the face, lips, jaw, tongue, velum, and pharynx. The vSMC plays a key role in controlling the muscles associated with these articulators, and is further involved in monitoring feedback from the sensory nerves in these areas when we speak. Less widely known is that the vSMC also plays a role in speech perception. We know, for example, that a network including frontal areas becomes more active when the conditions for perceiving speech become more difficult (Davis & Johnsrude, 2003), such as when there is background noise or the sound of multiple speakers overlaps (in contrast to easy listening conditions, in which such distractions are absent). This context‐specific recruitment of speech‐production areas may signal that they play an auxiliary role in speech perception, by providing additional computational resources when the STG is overburdened. We might ask how the vSMC, as an auxiliary auditory system that is primarily dedicated to coordinating the articulation of speech, represents heard speech. Does the vSMC represent the modalities of overt and heard speech similarly or differently? Is the representation of heard speech in the vSMC similar to, or different from, that in the STG?

      When Cheung et al. (2016) examined neural response patterns in the vSMC while subjects listened to recordings of speech, they found that, as in the STG, it was the manner‐of‐articulation features that took precedence. In other words, representations in vSMC were conditioned by task: during speech production the vSMC favored place‐of‐articulation features (Bouchard et al., 2013; Cheung et al., 2016), but during speech comprehension the vSMC favored manner‐of‐articulation features (Cheung et al., 2016). As we discussed earlier, the STG is also organized according to manner‐of‐articulation features when subjects listen to speech (Mesgarani et al., 2014). Therefore the representations in these two areas, STG and vSMC, appear to use a similar type of code when they represent heard speech.

Schematic illustration of feature-based representations in the human sensorimotor cortex.

      Source: Cheung et al., 2016. Licensed under CC BY 4.0.
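The logic of such a feature‐based neural code can be made concrete with a small sketch. The idea, as reported for listening conditions in both STG and vSMC, is that phonemes sharing a manner‐of‐articulation class should evoke similar response patterns, so the feature table predicts which phonemes cluster together. The code below is purely illustrative: the phoneme‐to‐manner table is a simplified assignment based on standard phonetic conventions, not the stimulus set or analysis code of Mesgarani et al. (2014) or Cheung et al. (2016).

```python
# Illustrative sketch of a manner-of-articulation feature code.
# The MANNER table is a simplified, conventional phonetic classification
# used here only for illustration; it is not taken from the cited studies.
from collections import defaultdict

MANNER = {
    "p": "plosive", "b": "plosive", "t": "plosive", "d": "plosive",
    "f": "fricative", "v": "fricative", "s": "fricative", "z": "fricative",
    "m": "nasal", "n": "nasal",
    "i": "vowel", "u": "vowel", "a": "vowel",
}

def group_by_manner(phonemes):
    """Predicted response clusters under a manner-based neural code:
    phonemes sharing a manner class should evoke similar cortical responses."""
    groups = defaultdict(list)
    for ph in phonemes:
        groups[MANNER[ph]].append(ph)
    return dict(groups)

print(group_by_manner(["p", "b", "s", "z", "m", "i", "u"]))
# Groups plosives /p b/ together, fricatives /s z/ together, and so on.
```

A hierarchical clustering of actual electrode response vectors, compared against a partition like this one, is the kind of analysis that could reveal departures from conventional feature classes, such as the unexpected grouping of /j ɪ i ʉ v ð/ discussed above.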
