An Educator's Guide to Schoolwide Positive Behavioral Interventions and Supports. Jason E. Harlacher
Data: Supporting Decision Making
Once school teams identify the outcomes that they want to achieve, they then identify the data needed to measure progress toward those outcomes. School teams identify specific sources of data, which we discuss in subsequent chapters, and use those data to answer two questions for all aspects of SWPBIS: (1) Are practices implemented with fidelity? and (2) What is the impact of those practices? Implementation is the act of applying a certain practice, whereas implementation fidelity is the extent to which a practice is implemented as intended (also referred to simply as fidelity; Hosp, 2008; Wolery, 2011). Impact (synonyms include outcome or effect) is the benefit of that practice. To have the necessary data to answer questions about implementation and impact, school teams gather four types of data—(1) fidelity, (2) screening, (3) diagnostic, and (4) progress monitoring.
Are Practices Implemented With Fidelity?
Fidelity data gauge the extent to which practices are being implemented as intended. There are a variety of methods for measuring fidelity, but observations of the practices, questionnaires about the practices, or checklists of the components of a practice often help to document and check fidelity (Kovaleski, Marco-Fies, & Boneshefski, n.d.; Newton, Horner, et al., 2009; Newton, Todd, et al., 2009). By measuring (and ensuring a high degree of) fidelity, educators can be confident that a lack of desired outcomes is the result of an ineffective practice (in other words, even though the practice was implemented accurately, it still didn't reach the desired outcome). Accordingly, they can also be confident that when a desired outcome is reached, it is because educators implemented the practice with fidelity. If educators do not measure fidelity, they lack information about the extent to which the practice was used correctly, and they may misattribute failure to reach the outcome to the practice itself when low fidelity is actually the culprit (Harlacher et al., 2014). Additionally, low fidelity sometimes tells a team that a practice is not a good fit for a particular context, and the team can discuss whether certain modifications to the practice (retraining, providing additional resources, and so on) will achieve desired outcomes or whether it should consider a new practice or intervention. Fidelity measures are used for each solution implemented and for each level of support (Tier One, Tier Two, Tier Three); they gauge both the overall implementation of the tiers of SWPBIS and the implementation of individual interventions.
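As an illustration, a component checklist can be summarized as the percentage of components implemented. The following Python sketch is hypothetical; the component names are invented for illustration and are not drawn from any particular SWPBIS checklist.

```python
# A minimal sketch of scoring a fidelity checklist. The practice components
# listed here are hypothetical; real checklists come from the practice itself.

components = {
    "expectations posted in classroom": True,
    "expectations explicitly taught": True,
    "appropriate behavior acknowledged": False,
    "behavioral errors corrected calmly": True,
}

# Fidelity is the share of components implemented as intended.
fidelity_pct = 100 * sum(components.values()) / len(components)
print(f"Fidelity: {fidelity_pct:.0f}% of components implemented")  # 75%
```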
To provide an analogy for the importance of measuring fidelity, consider a person who wants to lose weight. This person sets a goal to lose eight pounds in one month by attending yoga four times per week. After one month, this person has lost four pounds. Without knowing if the person followed the exercise plan, it’s difficult to determine which is at fault—the fidelity or the plan. If the person did yoga four times per week and still did not reach the goal, we can assume the plan was not effective. However, if the person did yoga only two times per week, then we can’t know if the exercise plan would have worked or not—it wasn’t followed. Conversely, if the person met the goal but did not measure fidelity, we can’t be sure what led to the weight loss. Was the person lucky, or did the plan actually work?
When fidelity isn’t met and the goal isn’t met, we must adjust fidelity and then try again. If fidelity is met but the goal is not, we can conclude the plan didn’t work. Figure 1.3 illustrates the logical conclusions when examining a goal and fidelity. By ensuring that practices are implemented with fidelity, decision makers can determine the extent to which practices are effective in achieving goals.
Figure 1.3: Logic of fidelity and decision making.
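To make the logic in figure 1.3 concrete, here is one way the four goal-by-fidelity combinations could be encoded. This Python sketch and its conclusion strings paraphrase the reasoning above; they are illustrative, not a reproduction of the figure.

```python
# A sketch of the decision logic described above, assuming simple yes/no
# judgments about whether fidelity and the goal were met.

def fidelity_decision(fidelity_met: bool, goal_met: bool) -> str:
    """Return the logical conclusion for one goal-by-fidelity combination."""
    if goal_met and fidelity_met:
        return "The plan worked; continue or fade the practice."
    if goal_met and not fidelity_met:
        return "Goal met, but we can't credit the plan; verify what caused it."
    if not goal_met and fidelity_met:
        return "The plan was followed but didn't work; adjust or replace it."
    return "Fidelity wasn't met; improve fidelity and try the plan again."

# Example: the yoga plan was followed but the weight-loss goal was missed.
print(fidelity_decision(fidelity_met=True, goal_met=False))
```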
Screening data identify those students who are at risk (Hosp, 2008). Office discipline referrals (ODRs) are commonly used within SWPBIS (Irvin, Tobin, Sprague, Sugai, & Vincent, 2004), but schools may also screen using social and behavioral assessments (Anderson & Borgmeier, 2010; Hawken et al., 2009). School teams also use screening data to understand the extent to which the overall system is healthy: at least 80 percent of the student population responds to Tier One universal supports and is at low risk for chronic problem behaviors, 10 to 15 percent have some risk and respond to Tier Two interventions, and about 5 percent need additional individualized supports. If the SWPBIS system is not healthy, the screeners can help teams identify where to target additional Tier One supports for all students. If the system appears healthy, the screeners can help determine which students may need additional support (Hawken et al., 2009).
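As a sketch, this health check could be expressed as simple threshold comparisons. The function name and exact cut points below are illustrative assumptions based on the 80 percent, 10 to 15 percent, and 5 percent figures cited in this paragraph.

```python
# A hypothetical check of tiered-system health using the benchmarks cited
# above: roughly 80 / 10-15 / 5 percent across the three tiers.

def system_is_healthy(pct_tier_one: float, pct_tier_two: float,
                      pct_tier_three: float) -> bool:
    """Return True when screening results match the cited benchmarks."""
    return (pct_tier_one >= 80.0        # responding to universal supports
            and pct_tier_two <= 15.0    # needing targeted interventions
            and pct_tier_three <= 5.0)  # needing individualized supports

# Example: 82% at Tier One, 13% at Tier Two, 5% at Tier Three.
print(system_is_healthy(82, 13, 5))  # True -> focus on students needing support
```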
Once a team identifies a problem (for example, too many referrals on the playground) or determines that students need additional support, it uses diagnostic data to determine why the problem is occurring. Whereas screeners are brief measures of general outcomes, diagnostic tools take longer to administer; they dig into the context of the problem and provide extensive data on why it is occurring. For example, XYZ Elementary examined additional detailed office discipline referral data on specific behavior types, when the problems occurred, who received the referrals, and why numerous referrals were occurring on the playground. From this additional information, the team could identify a reasonable solution. For individual students, teachers gather information on the purpose or function of a behavior so that school staff can determine the reason behind it. Schools commonly use ODRs to provide more detailed information on a student’s behavior, but staff may also use request-assistance forms or brief interviews with staff or students to help identify the functions of behavior (Hawken et al., 2009). For some students, staff may conduct a functional behavior assessment, an extensive assessment process designed to ascertain why a problem behavior is occurring and determine the environmental triggers and responses to that behavior (Crone & Horner, 2003).
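To illustrate how a team might dig into referral data, the following sketch counts hypothetical ODR records by location and by apparent function of behavior. The record fields and values are invented for illustration, not taken from XYZ Elementary's data.

```python
# A hypothetical drill-down into ODR records, counting referrals by location
# and by the apparent function of the behavior. All records are invented.

from collections import Counter

odrs = [
    {"location": "playground", "behavior": "cutting in line", "function": "attention"},
    {"location": "playground", "behavior": "pushing", "function": "attention"},
    {"location": "hallway", "behavior": "running", "function": "escape"},
]

print(Counter(record["location"] for record in odrs))  # where problems cluster
print(Counter(record["function"] for record in odrs))  # why they may be occurring
```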
What Is the Impact of These Practices?
Following the use of screening and diagnostic tools, teachers monitor the impact of solutions to the problem to ensure they are meeting the desired outcome. In progress monitoring, staff collect data while support is occurring to determine if it is effective and to make formative decisions (Hosp, 2008). Schools use an array of methods and sources to monitor solutions, such as permanent products, daily behavior tracking cards, attendance, and ODRs (Rodriguez, Loman, & Borgmeier, 2016). The previous example (where XYZ set a goal to reduce playground lining-up referrals by 50 percent) demonstrates progress monitoring. The team set a goal to reach within two months, but it reviewed data every two weeks to determine the impact of its solution—providing tickets for lining up and the free recess intervention—and to modify it if needed. For progress monitoring the impact of supports for individual students, teachers can often use a screening tool as a progress-monitoring tool; for example, teachers use ODRs to screen students and to examine progress. However, the nature of the behavior sometimes determines the exact method used to monitor it. For example, a student with aggressive behavior may be monitored using methods that are more explicit and detailed than ODRs. Additionally, the intensity of monitoring for individual students depends on which level of support they are receiving. All students are essentially monitored using screening tools throughout the year.
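As an illustration, a biweekly progress check against XYZ Elementary's 50 percent reduction goal might look like the sketch below. The baseline and the biweekly counts are hypothetical numbers chosen for the example, not data from the book.

```python
# A hypothetical biweekly review against the goal of cutting playground
# lining-up referrals by 50 percent. Baseline and counts are invented.

baseline = 20                      # assumed referrals per two weeks before the plan
goal = baseline * 0.5              # a 50 percent reduction

biweekly_counts = [18, 14, 11, 9]  # hypothetical counts at each two-week review

for review, count in enumerate(biweekly_counts, start=1):
    status = "goal met" if count <= goal else "continue or modify the solution"
    print(f"Review {review}: {count} referrals ({status})")
```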