Practitioner's Guide to Using Research for Evidence-Informed Practice. Allen Rubin


What intervention, program, or policy has the best effects?

      You should keep in mind that this is not an exhaustive list of types of research studies you might encounter and that this table assumes that these study designs are executed with a high level of quality. As you read on in this book, you'll learn a lot more about these and other study designs and how to judge the quality of the research evidence related to specific EIP questions.

      Several decades ago, it started to become fashionable among some academics to raise philosophical objections to the traditional scientific method and the pursuit of logic and objectivity in trying to depict social reality. Among other things, they dismissed the value of using experimental design logic and unbiased, validated measures as ways to assess the effects of interventions.

      Although various writings have debunked their arguments as “fashionable nonsense” (Sokal & Bricmont, 1998), some writers continue to espouse them. You might still encounter these arguments, which depict the foregoing hierarchy as obsolete. Our feeling is that these controversies have been largely laid to rest, so we only briefly describe some of them here.

      Using such terms as postmodernism and social constructivism to label their philosophy, some have argued that social reality is unknowable and that objectivity is impossible and not worth pursuing. They correctly point out that each individual has his or her own subjective take on social reality. But from that well-known fact they leap to the conclusion that because we have multiple subjective realities, that's all we have, and that because each of us differs to some degree in our perception of social reality, an objective social reality does not exist. Critics have depicted their philosophy as relativistic because it holds that truth is in the eyes of the beholder and therefore unknowable, and that all ways of knowing are therefore equally valid.

      We agree that we can never be completely objective and value free, but it does not follow that, just because perfect objectivity is an unattainable ideal, we should not even try to minimize the extent to which our biases influence our findings. Furthermore, if relativists believe it is impossible to assess social reality objectively and that anyone's subjective take on external reality is just as valid as anyone else's, then how can they proclaim their view of social reality to be the correct one? In other words, relativists argue that all views about social reality are equally valid, yet our view that it is knowable is somehow not as good as their view that it is unknowable. Say what?!

      Some have argued that an emphasis on objectivity and logic in research is just a way for the powers that be to keep down those who are less fortunate, and that if a research study aims to produce findings that support noble aims, then it does not matter how biased its design or data collection methods might be. However, critics point out that the idea that there is no objective truth actually works against the aim of empowering the disenfranchised. If no take on social reality is better than any other, then on what grounds can advocates of social change criticize the views of the power elite? For example, during the Trump administration, its spokespeople at times dismissed facts that they didn't like by proclaiming to have their own “alternative facts.” Those spokespeople were dismissing objective facts as just someone else's version of reality that had no more credence than the Trump version of reality.

       A prominent misconception is that EIP implies an overly restrictive hierarchy of evidence – one that only values evidence produced by tightly controlled quantitative studies employing experimental designs.

       EIP does not imply a black-and-white evidentiary standard in which evidence has no value unless it is based on experiments.

       Not all EIP questions imply the need to make causal inferences about intervention effects.

       Different research hierarchies are needed for different types of EIP questions.

       Qualitative studies tend to employ flexible designs and subjective methods – often with small samples of research participants – in seeking to generate tentative new insights, deep understandings, and theoretically rich observations.

       Quantitative studies put more emphasis on producing precise and objective statistical findings that can be generalized to populations or on designs with logical arrangements that are geared to testing hypotheses about whether predicted causes really produce predicted effects.

       Although some scholars who favor qualitative inquiry misperceive EIP as devaluing qualitative research, many specific kinds of EIP questions call for a hierarchy in which qualitative studies reside at the top.

       Correlational and qualitative studies can be useful in identifying factors that predict desirable or undesirable outcomes.

       Qualitative studies would reside at the top of a research hierarchy for EIP questions that ask: “What can I learn about clients, service delivery, and targets of intervention from the experiences of others?”

       Various kinds of studies can be used to answer the question: “What assessment tool should be used?”

       When seeking evidence about whether a particular intervention – and not some alternative explanation – is the real cause of a particular outcome, experiments are near the top of the hierarchy of research designs, followed by quasi-experiments with relatively low vulnerabilities to selectivity biases.

       Because of the importance of replication, systematic reviews and meta-analyses – which attempt to synthesize and develop conclusions from the diverse studies and their disparate findings – reside above experiments on the evidentiary hierarchy for EIP questions.
