Designing & Teaching Learning Goals & Objectives. Robert J. Marzano

roots in organizational psychology. In their 1990 book A Theory of Goal Setting and Task Performance, Edwin Locke and Gary Latham provide an extensive history of goal-setting practice in the context of organizational theory. Although their research focus is exclusively on goal setting and performance in work settings, they note that much of the work-related goal theory can and should be extended to the field of education.

      Table 1.1 (page 5) displays much of the research on which the recommendations in this book are based.

      From the research reported in table 1.1, one can conclude that two important characteristics of learning goals are goal specificity and goal difficulty. Goal specificity refers to the degree to which goals are defined in terms of clear and distinct outcomes. Goal difficulty refers to the degree to which goals provide a challenge to students.

      Goal Specificity

      Learning goals provide a set of shared expectations among students, teachers, administrators, and the general public. As discussed previously, they can range from the very specific (for example, “Students will be able to list the Great Lakes”) to the very general (“Students will be able to write a well-formed essay”). The research strongly implies that the more specific the goals are, the better they are. That is, goals that are specific in nature are more strongly related to student achievement than goals that are not. For example, Mark Tubbs (1986) examined goal specificity in a meta-analysis of 48 studies in mostly organizational settings. He found an overall effect size of .50 for goal specificity, which supports the notion that more specific goals lead to higher achievement (see table 1.1).

      The terms meta-analysis and effect size might be familiar to some readers and unfamiliar to others. (These terms and their relationship are described in some depth in appendix B on page 119.) Briefly, meta-analysis is a research technique for quantitatively synthesizing a series of studies on the same topic. In this case, Tubbs (1986) synthesized the findings of forty-eight studies on goal specificity. Typically, meta-analytic studies report their findings in terms of effect sizes (see the ES column in table 1.1). An effect size tells you how many standard deviations the average score of a group of students who were exposed to a given strategy (in this case, highly specific goals) lies above or below the average score of a group of students who were not exposed to that strategy (in this case, students given nonspecific goals).
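The computation behind that definition can be sketched in a few lines. This is an illustrative standardized-mean-difference calculation, not the exact procedure Tubbs (1986) used; the score lists are hypothetical, and standardizing by the control group's standard deviation is one common convention (pooled standard deviations are another).

```python
from statistics import mean, stdev

def effect_size(treatment_scores, control_scores):
    """Standardized mean difference: how many (control-group) standard
    deviations the treatment-group mean lies above the control-group mean."""
    return (mean(treatment_scores) - mean(control_scores)) / stdev(control_scores)

# Hypothetical test scores, for illustration only:
with_specific_goals = [72, 78, 81, 85, 90]     # taught with specific goals
with_general_goals = [65, 70, 74, 77, 84]      # taught with general goals

print(round(effect_size(with_specific_goals, with_general_goals), 2))
```

A positive result means the strategy group outscored the comparison group; a negative result would mean the reverse.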

      In short, an effect size tells you how powerful a strategy is; the larger the effect size, the more the strategy will increase student learning. Effect sizes are typically small numbers. In fact, the average effect size of most classroom strategies is .4 (Hattie, 2009). However, small effect sizes can translate into big percentage gains. For example, a strategy with an effect size of .4 translates into a 16 percentile point gain. This means that a student scoring at the 50th percentile in a class that did not use that strategy would be predicted to rise to the 66th percentile after the strategy had been introduced. (See appendix B, page 119, for a detailed description of effect sizes and a chart that translates effect size numbers into percentile gains.)

      One of the more useful aspects of effect sizes is that they can be transformed into an expected percentile point gain (or loss) for the strategy under investigation. The effect size reported by Tubbs (1986) of .50 is associated with a 19 percentile point gain. Thus, taking the findings at face value, one could infer that an average student in a group of students who were provided with specific learning goals would be at the 69th percentile of a group of students who were exposed to very general learning goals. Another way of saying this is that a student at the 50th percentile in a class that used nonspecific goals (an average student in that group) would be predicted to rise to the 69th percentile if he or she were provided very specific learning goals. In short, goal specificity is an important element to consider when trying to enhance student achievement.
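The conversion described above follows from assuming normally distributed scores: a student at the treatment-group mean sits the effect size's number of standard deviations above the comparison-group mean, and the normal cumulative distribution function gives that student's percentile in the comparison group. A minimal sketch, under that normality assumption:

```python
from statistics import NormalDist

def percentile_gain(effect_size):
    """Expected percentile-point gain for an average (50th percentile) student,
    assuming normally distributed scores: the treatment-group mean lies
    `effect_size` standard deviations above the comparison-group mean, so its
    percentile in the comparison distribution is the normal CDF at that point."""
    return round(NormalDist().cdf(effect_size) * 100 - 50)

# The conversions cited in this chapter:
for es in (0.40, 0.50, 0.70, 0.82):
    print(f"ES {es:.2f} -> {percentile_gain(es)} percentile points")
```

Running this reproduces the figures in the text: an effect size of .40 yields a 16-point gain, .50 yields 19, .70 yields 26, and .82 yields 29.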

      In their 1990 meta-analysis of organizational studies, Locke and Latham found effect sizes ranging from .42 to .80 for specific as opposed to general goals (translating to a 16 to 29 percentile point gain). They argued that specific goals provide concrete guidance for achievement that more general goals lack. Without that concrete guidance, goals are ambiguous, and students in school and workers on the job have trouble translating them into specific expected behaviors. Specific goals provide a clear direction for behavior and a clear indication of desired performance, and as such they serve as motivators.

      More recently, Steve Graham and Dolores Perin (2007) conducted a meta-analysis of achievement in writing. They found five studies relating to goal specificity. Examples of goal specificity used in their study included a clearly established purpose in a writing assignment and the specification of product expectations. They found an average effect size of .70 for goal specificity, which translates to a 26 percentile point gain. Accordingly, Graham and Perin (2007) concluded that “assigning product goals had a strong impact on writing quality” (p. 464), but warned that although their conclusion was based on high-quality studies, their findings were drawn from only five studies and so should be interpreted cautiously.

      Goal Difficulty

      Students will perceive learning goals as more or less difficult depending on their current state of knowledge, their beliefs about what causes achievement, and their perceptions of their own abilities. Studies indicate that students are most motivated by goals they perceive as difficult but not too difficult. For example, Tubbs (1986) found an average effect size of .82 for difficult versus easy goals (translating to a 29 percentile point gain). The Locke and Latham (1990) meta-analysis found effect sizes of .52–.82 for difficult goals (a 20–29 percentile point gain), noting that “performance leveled off or decreased only when the limits of ability were reached or when commitment to a highly difficult goal lapsed” (p. 706). Goal difficulty may also moderate or change the effect of feedback on student achievement. For example, Avraham Kluger and Angelo DeNisi (1996) found that feedback as an instructional strategy is more effective when learning goals are at the right level of difficulty—challenging, but not too difficult.

      In addition to their specificity and difficulty, learning goals vary in terms of their purposes and functions. Learning goals that emphasize mastery of content, or mastery goals, might enhance learning more than goals that specify attainment of a specific score, or performance goals. Noncognitive goals that involve students in cooperative tasks might have a unique effect of their own.

      Mastery vs. Performance Goals

      One well-investigated distinction regarding learning goals involves their overarching purpose; namely, mastery or performance. The first type, mastery goals, focuses on developing competence. The second type, performance goals, focuses on demonstrating competence by obtaining a specific score or grade (Kaplan, Middleton, Urdan, & Midgley, 2001).

      This distinction between mastery goals and performance goals is subtle but profound in its implications. Performance goals will typically include a desired score or grade. For example, the following would be considered performance goals:

      Students will obtain a grade of B or higher by the end of the grading period.

      All students will be determined
