Collaborative Approaches to Evaluation

Evaluation in Practice Series

       Empowerment Evaluation (Fetterman, 1994; Fetterman & Wandersman, 2005)

       Indigenous Evaluation Framework (LaFrance & Nichols, 2008)

       Most Significant Change Technique (Davies & Dart, 2005; Serrat, 2009)

       Rapid Rural Appraisal (Chambers, 1981)

       Participatory Action Research (Fals-Borda & Anisur-Rahman, 1991; Wadsworth, 1998)

       Participatory Evaluation (Practical, Transformative) (Cousins & Whitmore, 1998; King, 2005; Whitmore, 1998a)

       Principle-focused Evaluation (Patton, 2017)

       Stakeholder-based Evaluation (Bryk, 1983)

       Transformative Research and Evaluation (Mertens, 2009)

       Utilization-Focused Evaluation (Patton, 1978, 2008)

      In considering the list of family members appearing in Box 1, it is important to keep in mind that the list is incomplete. It is also critical to recognize the fluid nature of participation and collaboration, even within a single project. For example, an evaluation might begin as highly collaborative but, in response to resource constraints, competing interests, or other emerging and perhaps wholly unforeseen exigencies, become less so. It may even be the case that the evaluation is ultimately completed only by the evaluator members of the team. Yet we would still consider such an example to be an instance of CAE because it involved, at some point, members of the program community in the knowledge production process.

      Another consideration is that some members of the list, depending on precisely how they are implemented, may or may not be collaborative. Consider, for example, contribution analysis and utilization-focused evaluation. While both approaches are framed as reliant on stakeholder participation and genuine contribution, it is entirely possible to implement them in ways that render participation merely performative or symbolic.

      And so, in considering whether a specific evaluation is collaborative, it is always important to come back to the essential criterion: Did nonevaluator members of the program community authentically engage with evaluators in the evaluative knowledge production process? This holds regardless of how the approach is labeled.

      When Do We Use CAE?

      Many would agree that there are two fundamental functions for evaluation. On the one hand, there is the accountability function, the main driver of the technocratic approaches favored by public sector governance and bi- or multilateral aid agencies (Chouinard, 2013). On the other hand, there is the learning function, which appeals to a much broader range of stakeholders (Dahler-Larsen, 2009; Preskill, 2008; Preskill & Torres, 2000). Arguably, a third consideration is the transformational function of evaluation (Cousins, Hay, & Chouinard, 2015; Mertens, 2009), which seems particularly relevant to CAE, as we elaborate below. We argue that CAE is best suited to evaluation contexts where learning and/or transformational concerns are paramount, although some aspects of accountability are implicated as well.

      When It’s About More Than Impact

      The accountability function is essential to the overt demonstration of fiscal responsibility, that is, showing the wise and justifiable use of public and donor funds. It comes as no surprise that in accountability-driven evaluation the main interests being served are those of senior decision and policy makers acting on behalf of taxpayers and donors. As such, a premium is placed on impact evaluation, particularly on the impartial demonstration of the propensity for interventions to achieve their stated objectives. Such information needs are generally not well served by CAE, although some approaches are sometimes used to these ends (e.g., contribution analysis, empowerment evaluation, the most significant change technique). In fact, contribution analysis seems well suited in this regard (Mayne, 2001, 2012). Contribution analysis offers an alternative to the preoccupation with claims of program attribution to outcomes through the use of a statistical counterfactual; instead, it focuses on supporting program
