The Handbook for Collaborative Common Assessments. Cassandra Erkens

then individual teachers must give the assessment in a relatively short time frame so that they can collaboratively respond in a timely fashion.

      Imagine that a team has designed an assessment task that requires students to use the school’s only computer lab, so the team members’ students take turns using it (for example, teacher A’s students use the lab and complete the task in September, teacher B’s in October, and teacher C’s in November). This is the same assessment, but it does not function as a common assessment should. The team members provide the exact same task with the same criteria and grade-level content. However, the team members are on their own for strategizing how to intervene or extend the learning for their individual classrooms. They miss the power of the collective wisdom and creativity of their peers in addressing the challenges that emerge from their individual results. In a case where teachers do not give the same assessment in the same time frame, teams can only look at the data in hindsight and then produce program-level improvements that answer the following questions.

      • “Was the assessment appropriate and engaging?”

      • “Were the scoring criteria accurate, consistently applied, and sufficient?”

      • “Did the curriculum support the learners in accomplishing the task?”

      • “Were the instructional strategies successful overall? Do we need to make any changes moving forward?”

      The pace of data collection in this case cannot support instructional agility. The learners in September will not benefit from the team’s findings in November, when all the learners have finished the task.

      The collaborative common assessment process requires teamwork to help ensure accurate data; timely re-engagement; consistent scoring; and alignment between standards, instruction, and assessment so all students learn. Collaboration is central to the process as teams examine results, plan instructionally agile responses, analyze errors, and explore areas for program improvement.

      Collaboratively Examined Results

      Using a common assessment does not guarantee that it will generate common results. The notion of common data implies a high degree of inter-rater reliability, meaning the data generated are scored similarly from one rater to the next. Even when using test questions that have clear right and wrong answers, teachers can generate uncommon results. For example, teachers may interpret student responses differently, or some teachers may offer partial credit for reasoning while others offer credit only for right answers. Many variables impact the scoring process, and many perceptions lead teachers to different conclusions, which can create data inconsistency from classroom to classroom. Whatever the assessment method, teachers must practice scoring together on a consistent basis so that they can build confidence that they have inter-rater reliability and accurate data.

      Instructionally Agile Responses

      The purpose of using collaborative common assessments is to impact learning in positive, responsive, and immediate ways, for both students as learners and teachers as learners. When teachers analyze assessment data to inform real-time modifications within the context of the expected learning, they improve their instructional agility and maximize the assessment’s impact on learning. It seems logical that teams of high-quality instructors will have more instructional agility than an individual teacher for the following reasons.

      • More accurate inferences: Teams have more reviewers to examine the results, conduct an error analysis regarding misconceptions, and collaboratively validate their inferences.

      • Better targeted instructional responses: Teams have more instructors to problem solve and plan high-quality extension opportunities for those who have established mastery, as well as appropriate corrective instruction for those who have various misconceptions, errors, or gaps in their knowledge and skills.

      • Increased opportunities for learners: Teams simply have more classroom teachers surrounding the learner who can provide informed interventions and skilled monitoring for continued follow-up.

      This is not to suggest that teams will always develop better solutions than individual teachers might, especially if an individual teacher has reached mastery in his or her craft, knowledge, and skill. Rather, it is to suggest that educators can increase the likelihood of accuracy, consistency, and responsiveness over time if they collaboratively solve complex problems with the intention to increase their shared expertise and efficacy.

      Error Analysis

      There is no such thing as a perfect test; all tests will have some margin of error. So typically, before teachers employ a measurement tool (such as a scale, rubric, or scoring guide) or an assessment (such as a test, an essay, or a performance task), the designers must attempt to find, label, and address the potential errors in the measurement tool, the assessment, or the administration process itself, noting that a margin of error could exist in the findings. This practice helps trained test designers review the results for any potential flaws in the inferences drawn about students. By using a similar error-analysis process, classroom teachers—not trained as assessment experts—can identify potential mistakes and misconceptions in their classroom assessments. Error analysis involves examining various students’ responses to an individual task, prompt, or test item and then identifying and classifying the types of errors found. Identifying the learners’ errors is critical to generating instructionally agile responses that guide the learners’ next steps, as the type of error dictates the appropriate instructional response.

      Program Improvements

      A benefit of engaging in collaborative common assessments involves gathering local program improvement data. When teachers do not create, use, and analyze assessments collaboratively and commonly, they have only isolated data to offer. Such data are filled with more questions than answers: What happened in that classroom? Was it an anomaly? Or did the instruction, the chosen curricular resources, the pacing, the use of formative assessments, or the student engagement practices cause it? The data from one classroom to the next will have too many variables to provide valid and reliable schoolwide improvement data. When data are common and teams assemble them in comparative ways, however, patterns, themes, and compelling questions emerge. These allow teams to make more informed, strategic decisions and establish inquiry-based efforts to address complex problems. Using common data, teams may focus their program improvements in the following areas.

      • Curriculum alignment and modifications: Teams make certain that they have selected a rigorous curriculum that aligns with the standards. For example, using collaborative common assessment data, team members might discover they need to increase their focus on nonfiction texts, which alters their future curricular choices.

      • Instructional strategies and models: Having teams analyze instructional strategies and models or programs does not mean teachers must teach in the exact same ways. It does mean, however, that teachers must isolate the strategies (which they can deliver with their own creative style) that work best with rigorous content, complex processes, or types of challenges that learners may be experiencing.

      • Assessment modifications: When assessment results go awry, teams will often engage in improving the assessment before they examine curriculum or instructional implications. But by doing so, teams can accidentally lower the assessment’s rigor to help learners meet the target when the assessment may not have caused the issue. For this reason, teams should explore needed assessment modifications after they explore curriculum alignment and instructional implications. But it is always important that teams examine the assessment itself. Sometimes, weak directions or confusing questions or prompts are the variables