Coltheart et al.’s (2001) treatment of the Weekes (1997) study is problematic. The DRC model did not accurately simulate the results of that study or of others like it. The characterization of the findings as showing that length affects nonwords but not words was inaccurate, and an explanation of a word‐nonword difference does not explain the difference between the two types of words. The narrow focus on Weekes’ (1997) experiment did not yield a deeper understanding of the relationship between lexicality and length, and Coltheart et al.’s (2001) claim that the DRC model accounts for this relation is therefore unwarranted.
Semantic effects on word naming
Finally, we consider an important aspect of reading aloud that dual‐route models do not address at all: the role of semantic information. Although semantics plays no role in the dual‐route account of normal reading, behavioral and neuroimaging studies of skilled readers in English and other languages (e.g., Japanese, Chinese) indicate that semantic information is utilized in word naming (Evans et al., 2012; Taylor et al., 2015). In the triangle framework, the learner’s task is to find ways to generate pronunciations quickly and accurately. The solution involves developing an efficient division of labor between different parts of the triangle, orthography➔phonology and orthography➔semantics➔phonology (O➔S➔P). The exact contributions from these components are affected by characteristics of words, writing systems, and readers. In general, there is greater input from O➔S➔P for words that are difficult to pronounce via the orthography➔phonology computation. In English, those are lower frequency words with atypical spelling‐sound correspondences (Strain et al., 1995; Strain & Herdman, 1999). There is also greater semantic involvement in reading aloud in writing systems that are relatively deep, such as Chinese (Yang et al., 2009) and Japanese Kanji (Shibahara et al., 2003; Smith et al., 2021).
These results follow from properties of the mappings between codes and their impact on learning in connectionist networks (Plaut et al., 1996; Harm & Seidenberg, 2004). The division of labor account has provided the basis for investigations of the development of reading in typical and dyslexic readers (e.g., Siegelman et al., 2020; Snowling & Hayiou‐Thomas, 2006; Harm & Seidenberg, 1999), the brain bases of the orthography➔phonology and orthography➔semantics➔phonology computations (Frost et al., 2005), and individual differences in reliance on the pathways in skilled readers (Graves et al., 2014; Woollams, 2005). Harm and Seidenberg (2004) developed a complementary model of the division of labor in computing meaning from print.
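The division‐of‐labor idea can be made concrete with a toy sketch. The Python fragment below is our own illustration, not a reimplementation of Plaut et al. (1996) or Harm and Seidenberg (2004): phonological output is simply the summed contribution of a direct orthography➔phonology pathway and a mediated orthography➔semantics➔phonology pathway, with a gain term standing in for how much semantic support a given word receives. All dimensions, weights, and codings are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensionalities, invented for this sketch
N_ORTH, N_SEM, N_PHON = 8, 6, 5

# Fixed random weights stand in for weights that a real model would learn
W_op = rng.normal(scale=0.5, size=(N_PHON, N_ORTH))  # orthography -> phonology
W_os = rng.normal(scale=0.5, size=(N_SEM, N_ORTH))   # orthography -> semantics
W_sp = rng.normal(scale=0.5, size=(N_PHON, N_SEM))   # semantics -> phonology

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def name_word(orth, semantic_gain=1.0):
    """Phonological activation as the combined input from both pathways.

    semantic_gain scales the O->S->P contribution; in the division-of-labor
    account that contribution matters most for lower-frequency words with
    atypical spelling-sound correspondences.
    """
    direct = W_op @ orth                    # O -> P
    mediated = W_sp @ sigmoid(W_os @ orth)  # O -> S -> P
    return sigmoid(direct + semantic_gain * mediated)

# A made-up orthographic pattern for one word
orth = rng.integers(0, 2, size=N_ORTH).astype(float)
print(name_word(orth, semantic_gain=0.0))  # direct pathway alone
print(name_word(orth, semantic_gain=1.0))  # with semantic support added
```

In a trained model the balance between the two contributions is not a free parameter but falls out of learning; the sketch only shows where the two pathways combine.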
Every finding that semantics is used in normal reading aloud is a disconfirmation of the dual‐route model. In that architecture, semantics cannot be accessed until after a word is recognized (i.e., its entry in the orthographic or phonological lexicon is contacted). Accounting for semantic effects that arise in generating pronunciations would require rethinking basic tenets of the approach.
Summary
Claims that the dual‐route model correctly reproduced basic behavioral phenomena were overstated, undermined by the inaccuracy of many simulations, the file‐drawer problem, and anomalous effects that could not be eliminated. Researchers were unable to implement the two routes in a manner consistent with behavior. That is cause to focus on other approaches.
Coltheart et al. (2001) utilized a modeling methodology they termed “Old Cognitivism,” in which the number of phenomena that can be simulated is the major criterion for success. In a chapter for the first edition of this Handbook, Coltheart (2005) listed numerous empirical findings that DRC was said to simulate correctly. This accounting was overgenerous. In fact, the model did not correctly simulate basic phenomena concerning word and nonword pronunciation that motivated the approach or ones such as consistency effects that challenged it.
Hybrid Models
The DRC models were succeeded by a series of hybrid, “connectionist dual process” (CDP) models. These models replaced the grapheme‐phoneme correspondence rules with connectionist networks, but retained a separate lexical route (Perry et al., 2007, 2010; Ziegler et al., 2014). The work was guided by several precepts (Perry et al., 2007), including:
Continuity with previous dual‐route models. The authors asserted the value of maintaining the approach while improving it.
Nested, incremental theorizing. Each model should account for the same phenomena as the previous one as well as new ones. Each model should bear a transparent relation to the ones it supersedes.
Emphasis on critical findings that can discriminate between theories.
Evaluating performance using a single “gold standard” study of a phenomenon.
Taking the number of phenomena a model simulates as the main criterion for success.
Regarding continuity, Perry et al. (2007) correctly noted the historical importance of the dual‐route theory and expressed the goal of extending the lineage. However, theories are mainly judged by criteria such as whether they address important questions, explain empirical phenomena in principled ways, and yield insights that advance understanding. Elements of previous models are worth retaining if they advance these goals, not merely out of allegiance. As we have observed, the dual‐route model’s defining assumptions are incompatible with basic behavioral phenomena. The value of maintaining continuity with the approach is therefore unclear.
In practice, the hybrid models are mainly noteworthy for replacing the GPC nonlexical route with networks that are variants of the connectionist models. The orthography➔phonology pathway includes layers of units with weights on connections between them adjusted by a connectionist learning procedure. As in the SM89 and later models, the orthography➔phonology network in Perry et al. (2007) generated correct pronunciations for almost all words (about 90% of those tested), including regular and exception words, as well as nonwords. It is therefore categorically unlike the nonlexical route in dual‐route models, which could not pronounce exceptions to the rules, necessitating a second, lexical route. With the extirpation of these rules, and an orthography➔phonology network that correctly reads about 90% of the words tested, that rationale no longer applies. In fact, the lexical route plays little role in the hybrid models. Its main function is to fulfill the a priori commitment to continuity with DRC.
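To see why learned weights make a separate rule route unnecessary, consider the toy delta‐rule sketch below. It is our own construction, vastly simpler than the Perry et al. (2007) network: three made‐up “regular” items and one “exception” share the same body features, and a single weight matrix learns all four. The codings, learning rate, and number of training sweeps are illustrative assumptions only.

```python
import numpy as np

# Invented spelling -> sound codings.  The last two features encode a shared
# "-ave" body; the first four distinguish onsets.  Three items map the body
# to the dominant ("long a") output pattern; one is coded as an exception.
X = np.array([
    [1, 0, 0, 0, 1, 1],   # "gave"  (regular)
    [0, 1, 0, 0, 1, 1],   # "save"  (regular)
    [0, 0, 1, 0, 1, 1],   # "wave"  (regular)
    [0, 0, 0, 1, 1, 1],   # "have"  (exception)
], dtype=float)
Y = np.array([
    [1, 0],               # dominant pronunciation of the body
    [1, 0],
    [1, 0],
    [0, 1],               # exceptional pronunciation
], dtype=float)

# One weight matrix, trained with a simple delta rule: no GPC rules and no
# lexical lookup, yet the same weights come to produce both the regular
# items and the exception.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(X.shape[1], Y.shape[1]))
for _ in range(5000):
    for x, y in zip(X, Y):
        W += 0.05 * np.outer(x, y - x @ W)  # LMS weight update

print(np.round(X @ W, 2))  # close to Y for regular and exception items alike
```

The point is not that a linear associator is an adequate model of reading aloud, only that nothing in a learned orthography➔phonology mapping requires exceptions to be shunted to a second route.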
A better approach would be to determine if any empirical phenomena demand the inclusion of a lexical route in the hybrid model. In Perry et al.’s (2007) model, the lexical route is used to store the frequency of each word (estimated from norms), as a parameter on each lexical node. However, this is unnecessary. Word frequency effects arise in connectionist networks employing distributed representations because the number of times a word is presented affects the settings of the weights. The effects arise from learning and using words and are modulated by similarities across words. Thus, the effects are dynamical rather than reflecting a fixed parameter. Similarly, continuity between models can be assessed by examining whether they account for phenomena in the same way. In the DRC models, regularity and consistency effects were attributed to conflicting output from the lexical and nonlexical routes. The CDP models retain the lexical route, but not its role in producing these effects, which arise wholly within their connectionist network. The lexical route is a vestigial organ whose removal has little impact on performance.
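The contrast between a stored frequency parameter and frequency effects that emerge from learning can also be illustrated with a minimal sketch, again our own construction with invented codings and parameters. Two toy words receive the same update rule, but one appears ten times as often in the training stream; after brief training it is computed more accurately, although no node anywhere carries a frequency value.

```python
import numpy as np

# Two made-up words with orthogonal toy codings; word A appears ten times
# as often as word B in the training stream.  Nothing stores frequency:
# the only difference is how often each item drives a weight update.
x_a, y_a = np.array([1., 1., 0., 0.]), np.array([1., 0.])
x_b, y_b = np.array([0., 0., 1., 1.]), np.array([0., 1.])
stream = [(x_a, y_a)] * 10 + [(x_b, y_b)]      # 10:1 exposure ratio

W = np.zeros((4, 2))
lr = 0.02
for _ in range(5):                             # brief training
    for x, y in stream:
        W += lr * np.outer(x, y - x @ W)       # delta-rule update

# The frequent word is computed more accurately; the effect lives entirely
# in the weights rather than in a stored frequency parameter.
print("squared error, word A:", np.sum((y_a - x_a @ W) ** 2))
print("squared error, word B:", np.sum((y_b - x_b @ W) ** 2))
```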
The “nested incremental” modeling claim is puzzling. The idea is that each model should account for the same phenomena as a previous model, plus additional ones, with clear explanations for how the model was changed. However, the DRC models did not produce correct simulations of basic phenomena to build on. Moreover,