Collaborative Approaches to Evaluation (Evaluation in Practice Series)
Indigenous Evaluation Framework (LaFrance & Nichols, 2008)
Most Significant Change Technique (Davies & Dart, 2005; Serrat, 2009)
Rapid Rural Appraisal (Chambers, 1981)
Participatory Action Research (Fals-Borda & Anisur-Rahman, 1991; Wadsworth, 1998)
Participatory Evaluation (Practical, Transformative) (Cousins & Whitmore, 1998; King, 2005; Whitmore, 1998a)
Principle-focused Evaluation (Patton, 2017)
Stakeholder-based Evaluation (Bryk, 1983)
Transformative Research and Evaluation (Mertens, 2009)
Utilization-Focused Evaluation (Patton, 1978, 2008)
Other contributors have chosen to use different terms to describe the genre or subsets of it. For example, King considers “participatory evaluation [to be] an overarching term for any evaluation approach that involves program staff or participants actively participating in decision-making and other activities related to the planning and implementation of evaluation studies” (2005, p. 291, emphasis in the original). As mentioned above, in our own work, we had often used the terms participatory and collaborative approaches (see Cousins & Chouinard, 2012; Cousins & Whitmore, 1998) in a generic and encompassing way. Yet CAE strikes us as being preferable because all approaches involve collaboration: people working jointly together. In another example, Fetterman and colleagues (2018) use the term stakeholder involvement approaches in an overarching manner. Although they never really define the term, their use of it appears to be limited in scope to collaborative evaluation, participatory evaluation, and empowerment evaluation approaches which, by no coincidence, correspond with the name of the topical interest group (TIG) of the American Evaluation Association (i.e., TIG-CPE).3 The Fetterman et al. (2018) book is devoid of any recognition of the other approaches listed in Box 1. In addition to limited scope, the term stakeholder involvement in evaluation could easily imply program community members acting merely as sources of data for evaluation, as opposed to being cocreators of evaluation knowledge. This potential for confusion is unsatisfactory in our view. The standard dictionary definition of collaboration is “the act of working with another or others on a joint project—often followed by on, with, etc.,”4 and this is precisely how we define CAE: the joint engagement of evaluators working with nonevaluators in planning, implementing, and disseminating evaluation.
3 David Fetterman, Liliana Rodríguez-Campos, and Ann Zukoski are longtime members of the Collaborative, Participatory and Empowerment TIG, one of AEA’s largest, founded by David Fetterman in 1994. Fetterman has served in the role of chair/cochair since the TIG’s inception—a remarkable 24 years and running.
In considering the list of family members appearing in Box 1, it is important to keep in mind that the list is incomplete. But it is also critical to recognize the fluid nature of participation and collaboration, even within a single project. For example, an evaluation might start out highly collaborative but, in response to resource constraints, competing interests, or other emerging and perhaps wholly unforeseen exigencies, become less so. It may even be the case that the evaluation is ultimately completed only by evaluator members of the team. Yet we would still consider such an example to be an instance of CAE because it involved, at some point, members of the program community in the knowledge production process.
Another consideration is that some members of the list, depending on precisely how they are implemented, may or may not be collaborative. Consider, for example, contribution analysis and utilization-focused evaluation. While both approaches are framed as reliant on stakeholder participation and genuine contribution, it may be entirely possible to implement these approaches in such ways that participation is merely performative or symbolic.
From a different perspective, let us consider the case of evaluation and indigenous peoples. There are many examples of cross-cultural evaluations that have been manifestly participatory, qualifying as CAE (see Chouinard & Cousins, 2007). Yet, as we learned from a keynote panel session at the Canadian Evaluation Society 2018 annual meeting,5 it would be a mistake to categorize peoples from indigenous cultures as a homogeneous group; some such cultures may not be particularly collaborative. What would be the contextual appropriateness of CAE in such contexts? But perhaps even more to the point, in the panel Wehipeihana provided a hierarchical profile of growth for considering the interface between evaluation and indigenous peoples, progressing from evaluation to indigenous peoples, to evaluation for, evaluation with, evaluation by, and ultimately evaluation as indigenous peoples. In such a conception, evaluation to and evaluation as would not qualify as CAE if they did not involve authentic participation in evaluation knowledge production by evaluators and program community members.
5 https://evaluationcanada.ca/news/10076
And so, in considering whether a specific evaluation is collaborative or not, it is always important to come back to the essential criterion: Did nonevaluator members of the program community authentically engage with evaluators in the evaluative knowledge production process? This holds regardless of how the approach is labelled.
When Do We Use CAE?
Many would agree that there are two fundamental functions for evaluation. On the one hand, there is the accountability function—the main driver of technocratic approaches favored by public sector governance and bi- or multilateral aid agencies (Chouinard, 2013). On the other hand, there is the learning function, which has appeal to a much broader range of stakeholders (Dahler-Larsen, 2009; Preskill, 2008; Preskill & Torres, 2000). Arguably, another consideration is the transformational function of evaluation (Cousins, Hay, & Chouinard, 2015; Mertens, 2009), which seems particularly relevant to CAE considerations, as we elaborate below. We argue that CAE is most suited to evaluation contexts where learning and/or transformational concerns are paramount, although some aspects of accountability are implicated as well.
When It’s About More Than Impact
The accountability function is essential to the overt demonstration of fiscal responsibility, that is, showing the wise and justifiable use of public and donor funds. It comes as no surprise that in accountability-driven evaluation, the main interests being served are those of senior decision and policy makers on behalf of taxpayers and donors. As such, a premium is placed on impact evaluation, particularly on the impartial demonstration of the propensity for interventions to achieve their stated objectives. Such information needs are generally not well served by CAE, although some approaches are sometimes used to these ends (e.g., contribution analysis, empowerment evaluation, most significant change technique). In fact, contribution analysis seems well suited in this regard (Mayne, 2001, 2012). Contribution analysis is committed to providing an alternative to obsessing about claims of program attribution to outcomes through the use of a statistical counterfactual; instead, it focuses on supporting program