Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin

cognitive-behavioral treatment to the first 10 new clients referred to him with anxiety disorders. To measure outcome, a graduate student who does not know what the study is about is hired to interview clients briefly before and after treatment and ask them to rate their average daily anxiety level (from 0 to 100) during the previous seven days. Regardless of the findings, we can see that this study is more credible than the previous one. It has flaws, but its flaws are neither egregious nor fatal. Maybe, for example, there are some differences in the types of clients referred to the two therapists, making one group more likely to improve than the other. Maybe all the clients in both groups exaggerated the improvements in their anxiety levels because they wanted to believe the treatment helped them or wanted the study's findings to please their therapist.

      While these flaws may not be fatal, they are important. If you can find studies less flawed than that one, you'd probably want to put more stock in their findings. But if that study is the best one you can find, you might want to be guided by its findings. That is, it would offer somewhat credible – albeit quite tentative – evidence about the comparative effectiveness of the two treatment approaches. Lacking any better evidence, you might want – for the time being – to employ the seemingly more effective approach until better evidence supporting a different approach emerges or until you see for yourself that it is not helping your particular client(s).
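      To make the group comparison in this fictitious example concrete, here is a minimal sketch in Python. Every rating below is a hypothetical placeholder (the example reports no actual scores); the sketch only shows how each therapist's average pre-to-post drop on the 0-to-100 anxiety scale would be computed.

```python
# A minimal sketch of the group comparison described above.
# All ratings are hypothetical placeholders, not data from any study.

def mean_improvement(pre, post):
    """Average pre-to-post drop in self-rated anxiety (0-100 scale)."""
    return sum(p - q for p, q in zip(pre, post)) / len(pre)

# Hypothetical ratings for each therapist's first 10 referred clients.
therapist_a_pre  = [80, 75, 90, 70, 85, 60, 95, 65, 88, 72]
therapist_a_post = [40, 50, 45, 35, 60, 30, 55, 40, 50, 38]

therapist_b_pre  = [82, 78, 88, 68, 84, 62, 92, 66, 86, 74]
therapist_b_post = [60, 65, 70, 55, 68, 50, 75, 52, 66, 58]

print(f"Therapist A mean improvement: "
      f"{mean_improvement(therapist_a_pre, therapist_a_post):.1f}")
print(f"Therapist B mean improvement: "
      f"{mean_improvement(therapist_b_pre, therapist_b_post):.1f}")
```

      A difference in group means like this is exactly the kind of somewhat credible, albeit quite tentative, evidence described above; by itself it says nothing about why the groups differ or how any single client fared.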

      Unlike in these fictitious examples, it is not always so easy to differentiate between reasonable limitations and fatal flaws; that is, “to judge whether the problems are serious enough to jeopardize the results or should simply be interpreted with a modicum of caution” (Mullen & Streiner, 2004, p. 118). What you learn in the rest of this book, however, will help you make that differentiation, and thus help you judge the degree of caution warranted in considering whether the conclusions of an individual study or a review of studies merit guiding your practice decisions.

      As discussed earlier in this and the preceding chapter, a common misinterpretation of EIP is that you should automatically select and implement the intervention that is supported by the best research evidence, regardless of your practice expertise, your knowledge of idiosyncratic client circumstances and preferences, and your own practice context. No matter how scientifically rigorous a study might be and no matter how dramatic its findings might be in supporting a particular intervention, there always will be some clients for whom the intervention is ineffective or inapplicable. When studies declare a particular intervention a “success,” this is most often determined by group-level statistics. In other words, the group of clients who received the successful intervention had better outcomes, on average, than those who did not receive the intervention. This doesn't give us much information about how any given individual might have responded to the intervention. In practice, we are interested in successfully treating each and every client, not just the average.
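      A small numeric sketch may help clarify the gap between group-level success and individual response. All improvement scores below are invented solely for illustration; note how the treated group wins decisively on average even though several treated clients improve not at all, or get worse.

```python
# Hypothetical improvement scores (higher is better), invented only to
# illustrate how a group-level "success" can hide individual non-response.

treated    = [30, 25, 0, 40, -5, 35, 20, 0, 28, 15]
comparison = [10, 5, 0, 15, -10, 12, 8, 2, 6, 4]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Treated mean improvement:    {mean(treated):.1f}")     # 18.8
print(f"Comparison mean improvement: {mean(comparison):.1f}")  # 5.2

# On average, treatment wins decisively, yet three treated clients
# showed no improvement or got slightly worse.
nonresponders = sum(1 for x in treated if x <= 0)
print(f"Treated clients who did not improve: {nonresponders} of {len(treated)}")
```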

      Moreover, we often don't know why some clients don't benefit from our most effective interventions. Suppose an innovative dropout prevention program is initiated in one high school, and 100 high-risk students participate in it. Suppose a comparable high school provides routine counseling services to a similar group of 100 high-risk students. Finally, suppose only 20 (20%) of the recipients of the innovative program drop out, as compared to 40 (40%) of the recipients of routine counseling. By cutting the dropout rate in half, the innovative program would be deemed very effective. Yet it failed to prevent 20 dropouts.
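      Because this example supplies the numbers, the arithmetic can be spelled out. The following sketch simply recomputes the two dropout rates, the relative reduction, and the absolute number of students whom the innovative program still failed to reach.

```python
# The dropout arithmetic from the example above, made explicit.

program_dropouts, program_n = 20, 100   # innovative program
routine_dropouts, routine_n = 40, 100   # routine counseling

program_rate = program_dropouts / program_n   # 0.20
routine_rate = routine_dropouts / routine_n   # 0.40

# Relative terms: the program cut the dropout rate in half.
print(f"Relative reduction: {1 - program_rate / routine_rate:.0%}")  # 50%

# Absolute terms: it nonetheless failed to prevent 20 dropouts.
print(f"Dropouts despite the program: {program_dropouts}")
```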

      During the 1970s and 1980s, assertive case management came to be seen as a panacea for helping severely mentally ill patients dumped from state hospitals into communities in the midst of the deinstitutionalization movement. Studies supporting the effectiveness of assertive case management typically were carried out in states and communities that provided an adequate community-based service system for these patients. Likewise, ample funding enabled the case managers to have relatively low caseloads, sometimes fewer than 10 (Rubin, 1992). One study assigned only two cases at a time to its case managers and provided them with discretionary funds that they could use to purchase resources for their two clients (Bush et al., 1990). These were high-quality studies, and their results certainly supported the effectiveness of assertive case management when provided under such relatively ideal conditions.

      Rubin had recently moved from New York to Texas at the time those studies were emerging. His teaching and research in those days focused on the plight of the deinstitutionalized mentally ill, including the promise of, as well as the issues in, case management. His work brought him into contact with various case managers and mental health administrators in Texas. They pointed out some huge discrepancies between the conditions in Texas and the conditions under which case management had been found effective in other (northern) states. Compared to other states, and especially those where the studies were conducted, public funding in Texas for mental health services was quite meager. Case managers in Texas were less able to link their clients to needed community services because such services were in short supply. Moreover, the Texas case managers lamented their caseloads, which they reported to be well in excess of 100 at that time. One case manager claimed to have a caseload of about 250! To these case managers, the studies supporting the effectiveness of assertive case management elsewhere were actually causing harm in Texas: those studies were being exploited by state politicians and bureaucrats to justify cutting costlier direct services, on the rationale that such services were not needed given the effectiveness of (supposedly cheaper) case management services.

      In light of the influence of practice context, deciding which intervention to implement involves a judgment call based in part on the best evidence; in part on your practice expertise; in part on your practice context; and in part on the idiosyncratic characteristics, values, and preferences of your clients. While you should not underestimate the importance of your judgment and expertise in making the decision, neither should you interpret this flexibility as carte blanche to let your practice predilections overrule the evidence. The fact that you are well trained in, and enjoy providing, an intervention that solid research has shown to be ineffective or much less effective than some alternative is not a sufficient rationale for automatically eschewing the alternatives on the basis of your expertise. Likewise, you should not let your practice preferences influence your appraisal of which studies offer the best evidence.
