Collaborative Approaches to Evaluation (Evaluation in Practice Series), page 7

claims through the use of plausible, evidence-based performance stories. While the accountability agenda is, and is likely to remain, essential and necessary, many have observed that reliance on the associated single-minded evaluation approaches serves to diminish, even marginalize, the interests of a much broader array of stakeholders (e.g., Carden, 2010; Chouinard, 2013; Hay, 2010).

      If we take into account, indeed embrace, the legitimate information needs of a very broad array of program and evaluation stakeholders, traditional mainstream evaluation designs are not likely to be particularly effective in meeting those needs. What good, for example, is a black box approach to evaluation (e.g., a randomized controlled trial) to program managers whose main concern is to improve program performance, thereby making it more effective and cost-efficient? How could such an evaluation possibly help program developers truly appreciate the contextual exigencies and complex circumstances within which the focal program is expected to function, and how to design interventions in ways that suit those circumstances? And what about program consumers? It is easy to imagine that their concerns would center on their experience with the program and their sense of the extent to which it is making a difference for them. Evaluations single-mindedly focused on demonstrating program impact are likely to be of minimal value to such people, if any at all.

      Single-minded impact evaluations are likely to be best suited to what Mark (2009) has called fork-in-the-road decisions. When decisions to continue funding or to terminate a program define the information needs giving impetus to the evaluation, the evaluation will be exclusively summative in nature and orientation. But such decisions, as a basis for guiding evaluation, are relatively rare. More often, formative, improvement-oriented evaluation interests are commingled with summative questions about the extent to which programs are meeting their objectives and demonstrating effectiveness (Mark, 2009).

      To the extent that formative interests are prevalent in guiding the impetus for evaluation, the learning function of evaluation carries weight, and CAE would be a viable evaluation option to consider. In formative evaluations, program community members, particularly primary users who are well-positioned to leverage change on the basis of evaluation findings (Alkin, 1991; Patton, 1978), stand to learn a great deal about the focal program or intervention as well as the context within which it is being implemented. Creating the opportunity for such learning, some would argue, is a hallmark of CAE (e.g., Cousins & Chouinard, 2012; Dahler-Larsen, 2009).

      When It’s Developmental

      In addition to, and quite apart from, summative and formative evaluation designs is developmental evaluation (DE; Patton, 1994, 2011). Unlike contexts where a specific intervention already exists and is being implemented, in DE evaluators work alongside organizational and program community members to identify and develop innovative interventions through the provision of evidence-based insights. With evaluators at the decision-making table, DE is by definition collaborative and therefore a member of the CAE family.

      Despite the argument that DE is distinct from summative and formative approaches, accountability and learning functions remain paramount. DE is all about creating innovative interventions through evidence-based learning, sometimes through trial and error, but accountability considerations factor in as well. For example, one of us (Shulha) is currently involved in a multisite DE in the Ontario education sector where accountability is defined as taking snapshots over time, each picture describing what the team is doing; why the team is doing it; the evidence (stories) that can confirm that the logic is sound and that the appropriate needs are being addressed; and next-step planning.

      Most certainly in developmental contexts, actors stand to benefit from the use of evaluation findings, be they instrumental or conceptual. But they also stand to benefit from their proximity to, or even participation in, evaluative activities. Patton (1997) dubbed learning of this sort process use, a phenomenon which has been actively studied and integrated into contemporary thinking about evaluation consequences (Cousins, 2007; Shulha & Cousins, 1997). Process use is a very powerful benefit of CAE and indeed can factor directly into decisions to use such approaches.

      When Transformation Is Intentional

      Given its evident connection to evaluation-related learning, process use is very much implicated in ECB and therefore highly relevant in evaluations that are intended to be transformational in form and function (Mertens, 2009; Whitmore, 1998b). In transformational approaches, interest is less about generating evaluation findings that will be acted upon to leverage change and more about the experience. Through participation in the cocreation of evaluation knowledge, members of the program community, particularly intended beneficiaries of interventions, stand to profit. Much of this benefit will be cognitive or conceptual, which is to say, members stand to learn not only about the program and its functions but also about the historical, political, social, and educational aspects of the context in which it is situated. But of course, the idea is that when people critically analyze and learn about their situation, they will use this learning to push for change (Freire, 1970). It is through the deepening of understanding by virtue of engaging with evaluation that transformation and/or empowerment is likely to occur (Mertens, 2009).

      Previously we discussed tensions between accountability and learning, which are often acknowledged as fundamental functions of evaluation, and we hinted that transformation may provide a third perspective. In a recent chapter, Cousins, Hay, and Chouinard (2015) argued that learning is often juxtaposed with compliance-oriented accountability, as opposed to accountability as a democratic process, and that this is the root source of tension between the two. The authors went on to argue that

      when rooted in transformative participatory evaluation approaches and motivated by political, social-justice interests, accountability and learning approaches are no longer in opposition … [they are] essential, necessary, and supplementary, to be most appealing and indeed, necessary if evaluation is to be relevant to addressing issues of poverty, inequity and injustice. (p. 107)

      Transformational interests provide a natural fit for CAE.

      Why CAE?

      The Three P’s of CAE Justification

      For some time, we have tried to capture justifications for CAE as a blend of three specific categories: pragmatic, political, and philosophical (Cousins & Chouinard, 2012; Cousins, Donohue, & Bloom, 1996; Cousins & Whitmore, 1998). These categories, to our way of thinking, are not mutually exclusive; the justification for any CAE will draw on two or more of them depending on interests and, perhaps more importantly, whose interests are being served.

      Pragmatic interests driving CAE are all about leveraging change through the use of evaluative evidence; in other words, using evaluation for practical problem solving. Of primary concern would be instrumental (discrete decision making about interventions) and conceptual (learning) uses of evaluation findings. Program community members working with evaluators learn how to change programs to improve them or make them more effective.

      Historically, we have considered political interests driving CAE to be largely sociopolitical and focused on empowerment and the amelioration of social inequity. Through participation in the evaluation knowledge production function, intended program beneficiaries (often from marginalized populations) and other program community members learn to see their circumstances differently and to recognize oppressive forces at play. Such engagement may lead to the development of an ethos of self-determination.

      Finally, philosophical justifications for CAE are grounded in a quest for deeper understanding of the complexities associated with the program and the context within which it operates. With evaluators working hand-in-hand with program community members, the joint production of knowledge is grounded in historical, sociopolitical, economic, and educational context. Thanks to the insider insights of participating program community members, deeper meaning of evaluative