Formative Assessment & Standards-Based Grading. Robert J. Marzano

to specific topics within each subject area is growing in popularity. This is called standards-based grading, and many consider it the most appropriate method of grading (for a discussion, see Brookhart & Nitko, 2007, p. 219). Where there is interest in this system, however, there is also a good deal of poor practice compounded by considerable confusion about its defining characteristics.

      As described in Marzano (2006), the origins of standards-based reporting can be traced to the concept of a performance standard. The term was popularized in a 1993 report commonly referred to as the Malcom Report in deference to Shirley M. Malcom, chair of the planning group. The report defined a “performance standard” as “how good is good enough” (National Education Goals Panel, 1993, pp. ii–iii). Since then, a popular practice has been to define student performance in terms of four categories: advanced, proficient, basic, and below basic. The scheme has its roots in the work of the National Assessment of Educational Progress. As Popham (2003) noted:

      Increasingly, U.S. educators are building performance standards along the lines of the descriptive categories used in the National Assessment of Educational Progress (NAEP), a test administered periodically under the auspices of the federal government. NAEP results permit students’ performances in participating states to be compared…. Since 1990, NAEP results have been described in four performance categories: advanced, proficient, basic, and below basic. Most of the 50 states now use those four categories or labels quite similar to them. (p. 39)

      The actual practice of standards-based reporting requires the identification of what we have referred to as reporting topics or measurement topics (Marzano, 2006; Marzano & Haystead, 2008). For example, consider the following common measurement topics for language arts at the fourth grade:

       Reading
          Word recognition and vocabulary
          Reading comprehension
          Literary analysis

       Writing
          Spelling
          Language mechanics and conventions
          Research and technology
          Evaluation and revision

       Listening and Speaking
          Listening comprehension
          Analysis and evaluation of oral media
          Speaking applications

      Here, ten measurement topics are organized under three categories (or strands, as some districts call them): reading, writing, and listening and speaking. For reporting purposes, each student would receive a score of advanced, proficient, basic, or below basic on each of the ten measurement topics. Typically, some type of rubric or scale that describes these levels is constructed for each measurement topic (we discuss this in depth in chapters 3, 5, and 6).

      While this system seems like good practice, standards-based reporting can be highly inaccurate unless teachers receive guidance and support on how to collect and interpret the assessment data on which scores of advanced, proficient, basic, and below basic are based. Indeed, at the time of this writing, no major study (that we are aware of) has demonstrated that simply grading in a standards-based manner enhances student achievement. However, as the previous discussion illustrates, a fairly strong case can be made that student achievement will be positively affected if standards-based reporting is rooted in a clear-cut system of formative assessments.

      Another problem that plagues standards-based reporting is the lack of distinction between standards-referenced systems and standards-based systems. Grant Wiggins (1993, 1996) was perhaps the first modern-day educator to highlight the differences between a standards-based system and a standards-referenced system. In a standards-based system, a student does not move to the next level until he or she can demonstrate competence at the current level. In a standards-referenced system, a student’s status is reported (or referenced) relative to the performance standard for each area of knowledge and skill on the report card; however, even if the student does not meet the performance standard for each topic, he or she moves to the next level. Thus, the vast majority of schools and districts that claim to have standards-based systems in fact have standards-referenced systems. As we shall see in chapter 6, both systems are viable, but they are quite different in their underlying philosophies. Understanding the distinctions between standards-based and standards-referenced systems helps schools and districts design a grading system that meets their needs.

      In subsequent chapters, we draw from the research and theory in this chapter and from sources such as Classroom Assessment and Grading That Work (Marzano, 2006) and Designing and Teaching Learning Goals and Objectives (Marzano, 2009) to discuss how formative assessment can be effectively implemented in the classroom. We also outline a system of grading that, when used uniformly and consistently, can yield much more valid and reliable information than that provided by traditional grading systems.

      As mentioned in the introduction, as you progress through the remaining chapters, you will encounter exercises that ask you to examine the content presented. Some of these exercises ask you to answer specific questions. Answer these questions and check your answers with those provided in the back of the book. Other exercises are more open-ended and ask you to generate applications of what you have read.

      Chapter 2

       THE ANATOMY OF FORMATIVE ASSESSMENT

      The discussion in chapter 1 highlights both the interest in and the confusion about formative assessment and its use in K–12 classrooms. An obvious question one might ask is, Why the confusion? To answer this question, it is useful to understand some history about the term formative assessment. Initially, it was used in the field of evaluation. In an American Educational Research Association monograph series published in 1967, Michael Scriven pointed out the distinction between evaluating projects that were being formulated and evaluating those that had evolved to their final state. The former were referred to as formative evaluations and the latter were referred to as summative evaluations.

      In the world of projects, the distinction between formative evaluation and summative evaluation makes perfect sense. Consider a project in which a new curriculum for elementary school mathematics is being developed. There is a clear beginning point at which the authors of the program start putting their ideas on paper. There are benchmarks along the way, such as completing a first draft, gathering feedback on that draft, and making revisions based on the feedback. Finally, there is a clear ending point when the new curriculum has been published and is being distributed to schools.

      According to Popham (2008), Benjamin Bloom tried in 1969 to transplant the formative/summative evaluation distinction directly into assessment, but “few educators were interested in investigating this idea further because it seemed to possess few practical implications for the day-to-day world of schooling” (p. 4). As described in chapter 1, it would take until the Black and Wiliam (1998a) synthesis for the idea to catch on. At that time, they offered the following definition of formative assessment:

      Formative assessment … is to be interpreted as all of those activities undertaken by teachers and/or by students which provide information to be used as feedback to modify the teaching
