Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin
The current and more comprehensive definition of EIP – one that is more consistent with definitions that are prominent in the current human service professions literature – views EIP as a process, as follows: EIP is a process for making practice decisions in which practitioners integrate the best research evidence available with their practice expertise and with client attributes, values, preferences, and circumstances. In other words, practice decisions should be informed by, and not necessarily based on, research evidence. Thus, opposing EIP essentially means opposing being informed by scientific evidence!
In the EIP process, practitioners locate and appraise credible evidence as an essential part, but not the only basis, for practice decisions. The evidence does not dictate the practice. Practitioner expertise such as knowledge of the local service context, agency capacity, and available resources, as well as experience with the communities and populations served, must be considered. In addition, clients are integral parts of the decision-making process in collaboration with the practitioner. Indeed, it's hard to imagine an intervention that would work if the client refuses to participate!
Moreover, although these decisions often pertain to choosing interventions and how to provide them, they also pertain to practice questions that do not directly address interventions. Practitioners might want to seek evidence to answer many other types of practice questions as well. For example, they might seek evidence about client needs, which measures to use in assessment and diagnosis, when inpatient treatment or discharge is appropriate, how culture influences clients, and whether a child should be placed in foster care. They might even want to seek evidence about which social justice causes to support. In that connection, there are six broad categories of EIP questions, as follows:
1 What factors best predict desirable or undesirable outcomes?
2 What can I learn about clients, service delivery, and targets of intervention from the experiences of others?
3 What assessment tool should be used?
4 Which intervention, program, or policy has the best effects?
5 What are the costs of interventions, policies, and tools?
6 What are the potential harmful effects of interventions, policies, and tools?
1.3 Types of EIP Questions
Let's now examine each of the preceding six types of questions. We'll be returning to these questions throughout this book.
1.3.1 What Factors Best Predict Desirable or Undesirable Outcomes?
Suppose you work in a Big Brother/Big Sister agency and are concerned about the high rate of mentor-youth matches that end prematurely. A helpful study might analyze case-record data from a large sample of Big Brother/Big Sister agencies and assess the relationships between the duration of mentor-youth matches and the following mentor characteristics: age, race, ethnicity, socioeconomic status, family obligations, residential mobility, reasons for volunteering, benefits expected from volunteering, amount and type of volunteer orientation received, and so on. Knowing which factors are most strongly related to the duration of a match (whether long or short) can inform your decisions about how to improve the duration of matches. For example, suppose you find that, when many different factors are taken into account, the longest matches are those in which the youth and mentor are of the same race and ethnicity. Based on what you learn, you may decide that more volunteers who share the ethnicity of the youth being served are needed, that efforts should be made to match existing volunteers and youth by race and ethnicity, or that (evidence-informed) training in culturally sensitive mentoring should be provided to mentors.
Suppose you are a child welfare administrator or caseworker and want to minimize the odds of unsuccessful foster-care placements, such as placements that are short-lived, that subject children to further abuse, or that exacerbate their attachment problems. Your EIP question might be: “What factors best distinguish between successful and unsuccessful foster-care placements?” The type of research evidence you would seek to answer your question (and thus inform practice decisions about placing children in foster care) would likely come from case-control studies and other forms of correlational studies that are discussed in Chapter 9 of this book.
A child welfare administrator might also be concerned about the high rate of turnover among direct-service practitioners in the agency, and thus might pose the following EIP question: “What factors best predict turnover among child welfare direct-care providers?” For example, is it best to hire providers who have completed specialized training programs in child welfare or taken electives in it? Or will such employees hold such idealistic expectations that they will be more likely to burn out and leave when they confront the disparity between their ideals and the service realities of the bureaucracy? Quite a few studies have addressed these questions, and as an evidence-informed practitioner, you would want to know about them.
1.3.2 What Can I Learn about Clients, Service Delivery, and Targets of Intervention from the Experiences of Others?
If you administer a shelter for homeless people, you might want to find out why so many homeless people refuse to use shelter services. You may suspect that the experience of living in a shelter is less attractive than other options. Perhaps your EIP question would be: “What is it like to stay in a shelter?” Perhaps you've noticed that among those who do use your shelter there are almost no females. Your EIP question might therefore be modified as follows: “What is it like for females to stay in a shelter?” To answer those questions, you might read various qualitative studies that employed in-depth, open-ended interviews of homeless people that include questions about shelter utilization. Equally valuable might be qualitative studies in which researchers themselves lived on the streets among the homeless for a while as a way to observe and experience the plight of being homeless, what it's like to sleep in a shelter, and the meanings shelters have to homeless people.
Direct-service practitioners, too, might have EIP questions about their clients' experiences. As mentioned previously, one of the most important factors influencing service effectiveness is the quality of the practitioner-client relationship, and that factor might have more influence on treatment outcome than the choices practitioners make about what particular interventions to employ. We also know that one of the most important aspects of a practitioner's relationship skills is empathy. It seems reasonable to suppose that the better the practitioner's understanding of what it's like to have had the client's experiences – what it's like to have walked in the client's shoes, so to speak – the more empathy the practitioner is likely to convey in relating to the client.
The experiences of others, not just clients, may also drive your EIP questions. For example, imagine that you are an administrator of a child and family program and you are considering choosing and adopting a new parent-training model. Selecting and implementing a new intervention model is a complex process with lots of moving parts and potentially unforeseen consequences. In this case, your EIP question may be: “What is the adoption and implementation process like for different parent-training programs?” Studies that include interviews with administrators and staff about their experience with the implementation process in their agencies could give you information about which model to choose, alert you to unanticipated challenges with the