Figure 4.4 The Triangle model, showing full interactions between three types of information: orthography, phonology, and semantics
(Harm & Seidenberg, 2004 / With permission of American Psychological Association).
The orthographic layer in Harm and Seidenberg’s (2004) model required 111 units to represent the various graphemes that can occur at different positions in monosyllabic English words. The phonological layer included 200 units to represent the phoneme features at various positions in words. Finally, the semantic layer comprised 1,989 units, each coding a semantic feature (e.g., is a person, is a piece of furniture, involves movement). If the meaning of a word contained the feature, the unit was set to 1; otherwise it remained at 0.
The model was trained in two stages. First, training was limited to the correspondences between phonemes and semantic features, akin to native spoken language acquisition. In the second stage, orthography was added, so that the connections from print to phonology and from print to meaning could be learned, just as children learn to read after several years of spoken language experience.
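To make this division of labor concrete, here is a minimal sketch in Python of a triangle-style network with the layer sizes reported above and the two-stage training schedule. Only the unit counts (111, 200, 1,989) come from the text; the random toy vocabulary, the single-layer delta-rule mappings, and the training loop are hypothetical simplifications for illustration. The published model was considerably more complex (hidden layers, interactive dynamics, and a realistic corpus of monosyllabic words).

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes reported by Harm and Seidenberg (2004); everything else in
# this sketch is a toy simplification, not their actual network.
N_ORTH, N_PHON, N_SEM = 111, 200, 1989

# Hypothetical toy vocabulary: random binary orthographic, phonological,
# and (sparse) semantic patterns standing in for real monosyllabic words.
N_WORDS = 50
orth = rng.integers(0, 2, size=(N_WORDS, N_ORTH)).astype(float)
phon = rng.integers(0, 2, size=(N_WORDS, N_PHON)).astype(float)
sem = (rng.random((N_WORDS, N_SEM)) < 0.02).astype(float)  # 1 = meaning contains the feature

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mapping(x, y, epochs=300, lr=0.5):
    """Single-layer delta-rule mapping from x to y (a stand-in for the
    multi-layer, interactive pathways of the published model)."""
    w = rng.normal(0.0, 0.01, size=(x.shape[1], y.shape[1]))
    for _ in range(epochs):
        pred = sigmoid(x @ w)
        w += lr * x.T @ (y - pred) / len(x)
    return w

# Stage 1: pre-literacy learning between phonology and semantics only,
# mimicking spoken language acquisition before reading instruction.
w_ps = train_mapping(phon, sem)   # phonology -> semantics
w_sp = train_mapping(sem, phon)   # semantics -> phonology

# Stage 2: orthography is added; the network now learns connections from
# print to phonology and from print to meaning.
w_op = train_mapping(orth, phon)  # print -> sound (direct route)
w_os = train_mapping(orth, sem)   # print -> meaning

def name_word(o):
    """Naming combines the direct route with the semantically mediated
    route (print -> meaning -> sound), as in the Triangle model."""
    direct = sigmoid(o @ w_op)
    mediated = sigmoid(sigmoid(o @ w_os) @ w_sp)
    return (direct + mediated) / 2

accuracy = np.mean((name_word(orth) > 0.5) == phon)
print(f"Proportion of phoneme features produced correctly: {accuracy:.2f}")
```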
Following training, Harm and Seidenberg (2004) reported that their model produced the correct pronunciation of 99.2% of the words without requiring a route with word nodes (addressed phonology). The model further simulated the effects in visual word recognition captured by DRC (and later by CDP+). Chang et al. (2020) recently used the model successfully to investigate the effects of spoken word knowledge and of different types of reading instruction on word reading.
The Triangle model makes another interesting prediction. Because the three types of representation (orthography, phonology, semantics) fully interact, orthography activates phonology, but phonology also activates orthography. Learning to read should therefore bring about changes in how phonology is represented. This is consistent with evidence from illiterate populations. For example, Morais et al. (1987) reported that illiterate people perform less well than literate peers on phonological awareness tasks, such as taking away the first sound of a spoken word. This finding and other data indicate that knowledge of the form of spoken words is less detailed and less stable in illiterate people and in people with little reading practice (Huettig et al., 2018). In proficient readers, how a word is spelled influences how its spoken form is perceived (e.g., Ziegler et al., 2008), and brain areas active in visual word processing are also active in speech perception (Dehaene et al., 2015; Perre et al., 2009).
Because the Triangle model has direct connections between orthography and meaning, in addition to connections mediated by phonology, it is consistent with a weak phonological theory (Frost, 1998). The direct connections between orthography and phonology embody assembled phonology in a way that is sensitive to the distributional properties of the writing system. The semantically mediated connections are the counterpart of the addressed route, although they differ from DRC and CDP+ in that this route in the Triangle model always involves meaning. To explain the observation that patients with semantic dementia can still name words with inconsistent grapheme‐phoneme correspondences, Woollams et al. (2016) argued that the semantic network in these patients is not completely lost, merely deficient. The combined activation through the direct orthography‐phonology connections and the deficient semantically mediated connections can still result in the correct naming of inconsistent words even when the meaning is no longer fully understood (Woollams et al., this volume).
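The logic of this argument can be illustrated with a small toy calculation. The numbers below are invented for illustration and are not taken from Woollams et al.; the point is simply that an attenuated semantic contribution can still tip the balance towards the correct pronunciation of an inconsistent word, whereas near-complete loss lets the regularized pronunciation favored by the direct route win (a surface-dyslexia-like error).

```python
def name_inconsistent_word(semantic_integrity):
    """Toy competition between two candidate pronunciations of an
    inconsistent word such as 'pint'. The support values are invented:
    the direct orthography-phonology route, reflecting spelling-sound
    statistics, slightly favors the regularized form (rhyming with
    'mint'), while the semantically mediated route supports the correct,
    word-specific form. semantic_integrity scales the damaged pathway
    between 0 (lost) and 1 (intact)."""
    direct_support = {"correct": 0.45, "regularized": 0.55}
    semantic_support = {"correct": 0.90, "regularized": 0.10}
    total = {
        p: direct_support[p] + semantic_integrity * semantic_support[p]
        for p in direct_support
    }
    return max(total, key=total.get), total

for integrity in (1.0, 0.4, 0.05):
    winner, totals = name_inconsistent_word(integrity)
    print(f"semantic integrity {integrity:.2f} -> {winner} pronunciation {totals}")
```

In this sketch, a deficient but not abolished semantic pathway (the middle case) still yields the correct form; only severe loss produces the regularization error.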
Phonology, Reading, and Neuroscientific Findings
A further way to understand print‐phonology connections comes from neuroimaging (see Yeatman, this volume). Following a meta‐analysis of the relevant literature, Taylor et al. (2013) described two routes from print to sound. Visual word processing starts in an area on the border of the occipital and temporal lobes (named posterior fusiform and occipitotemporal cortex in Figure 4.5). This region extracts (abstract) letter information from the visual stimulus. From this area, the assembled phonology pathway goes upward to the parietal lobe (angular gyrus, inferior parietal cortex) and from there to the inferior frontal gyrus, which is involved in speech perception and speech production. This pathway is called the dorsal route. The pathway for addressed phonology goes forward into the temporal lobe. It includes brain regions that can be linked to an orthographic lexicon (anterior fusiform gyrus) and regions linked to extracting the meaning from words (anterior fusiform gyrus, middle temporal gyrus, and angular gyrus). This pathway is called the ventral route.
Figure 4.5 Brain areas involved in the activation of addressed and assembled phonology in reading
(Taylor et al., 2013 / With permission of American Psychological Association).
Figure 4.6 gives a wider summary of the brain regions involved in visual and auditory language processing. It makes a distinction between the area involved in orthographic processing (the posterior fusiform gyrus), the areas involved in phonological processing (the superior temporal gyrus, part of the angular gyrus, the supramarginal gyrus, the precentral area, part of the inferior prefrontal gyrus, and part of the insula), the areas involved in meaning (anterior fusiform gyrus, middle temporal gyrus, part of the angular gyrus, part of the inferior frontal gyrus, the middle frontal gyrus, and part of the insula), and an area involved in directing attention to the relevant information (superior parietal lobule). It also shows the involvement of subcortical and cerebellar structures (basal ganglia, hippocampus, right hemisphere of the cerebellum) and the major white matter tracts between the cortical areas. All connections are bidirectional, operating both bottom‐up and top‐down.
Tan et al. (2005) compared brain activation during word naming in Chinese and in alphabetic languages. In line with the fact that Chinese is a logographic language with less scope for assembled phonology, the authors reported that different brain regions were active in the dorsal route during Chinese word reading; in particular, the middle frontal gyrus seemed to be heavily involved. Further research will need to confirm these differences, especially because it is difficult to fully match stimuli and tasks across languages (Liu et al., 2020; Zhao et al., 2017).
A further neuroscientific finding is that the reading system is largely lateralized to the hemisphere controlling speech production. For the majority of people this is the left hemisphere, although for some 10% of left-handers it is the right hemisphere (Gerrits et al., 2019; van der Haegen et al., 2012). The likely reason for this organization is that the many interactions between orthography and phonology are hindered when the language centers are distributed over the two hemispheres of the brain (Cai et al., 2008).
Conclusions
In this chapter, I have reviewed extensive evidence that phonology plays a central role in skilled reading. This is the case even in groups with suboptimal access to the phonological forms of spoken language, such as people born deaf and students learning a second language at school. Notably, deficits in phonological processing are associated with reading problems (dyslexia).
Alphabetic languages use letters to represent the sounds of spoken words, and this provides two ways of deriving phonology. First, letters can be recoded into sounds, a process seen when readers name new or meaningless letter strings, such as teel. This has been called assembled phonology. Second, addressed phonology captures the observation that a visual word can be recognized as a familiar visual stimulus associated with