and knowledge is achieved.

      As mentioned, these categories are understood not to be mutually exclusive; any given CAE will place relative emphasis on one or more of them depending on information needs, contextual exigencies, and circumstances. Cousins and Whitmore (1998) identified two principal streams of participatory evaluation: practical and transformative. The former emphasizes the pragmatic justification, whereas the latter privileges the political justification; both streams, however, draw from all three justifications. For example, in practical participatory evaluation, program community members may find the experience rewarding for their own professional development even though the primary purpose is to generate knowledge supporting program improvement. Such capacity building is an example of process use, even though it is an unintended positive consequence of the evaluation. On the other hand, transformative participatory evaluation, where empowerment and capacity building are central, may also lead to positive changes to interventions as a result of evaluation findings. We observe that Fetterman and colleagues (Fetterman & Wandersman, 2005; Fetterman et al., 2018) have followed this lead in describing two streams of empowerment evaluation.

      Ethical Considerations

      In our current work, we are considering a fourth justification for CAE, which is distinct from but also overlaps with the other three. Cousins and Chouinard (2018, forthcoming) are now seriously exploring an ethical or moral-political justification for CAE, which is rooted in considerations of responsibility, recognition of difference, representation, and rights. In this work, which at least partially arises from prior conversations with our colleague Miri Levin-Rozalis (2016, personal communication), we have come to understand a moral-political justification for CAE to be distinct from, and yet overlapping with, the other categories in obvious ways. For example, while representation is understood to be obligatory in a democratic sense, it may also be thought of in political terms, even though it is not ideological per se (e.g., representative governance). Long ago, Mark and Shotland (1985) made the case for representation as a reason for engaging stakeholders in evaluation. In a different example, we might consider ethical justifications for involving indigenous peoples in evaluations of their own programs from a responsibility and recognition-of-difference perspective. Such considerations are part and parcel of post-colonial discourse in economics and philosophy. But such ethical justification could also overlap with epistemological considerations; for example, CAE could provide a bridge between indigenous and western ways of knowing in the joint production of evaluative knowledge (Chouinard & Cousins, 2007). Justification along these lines would draw from the philosophical category.

      What Does CAE Look Like in Practice?

      Previously we argued that three specific dimensions of form or process are fundamental to CAE in practice (Cousins, Donohue, & Bloom, 1996; Cousins & Whitmore, 1998). These dimensions are i) control of technical decision-making about the evaluation; ii) diversity among the stakeholders selected for participation in the evaluation; and iii) depth of stakeholder participation along a continuum of methodological stages of the evaluation process. We considered each of these dimensions to operate like a semantic differential. That is to say, any given CAE at any given point in time could be rated on a scale of 1 to 5 on each dimension, depending on how the evaluation was taking shape. The three scales appear in Box 2. We also made the claim that the three dimensions are orthogonal, or independent of one another. In other words, in theory, a particular CAE project's rating on each respective dimension is free to vary from 1 to 5, regardless of its scores on the other dimensions.
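
      To make the rating scheme concrete, the brief Python sketch below represents one project's profile on the three dimensions. It is illustrative only: the class name and the anchor labels in the comments (which pole of each scale is scored 1 versus 5) are assumptions for the sake of the example, not the published scale definitions in Box 2.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class CAEProfile:
          """One CAE project's ratings on the three dimensions of form.

          The anchor labels in the comments are illustrative assumptions;
          the published semantic-differential scales appear in Box 2.
          """
          control: int    # 1 = evaluator-controlled ... 5 = stakeholder-controlled
          diversity: int  # 1 = primary users only ... 5 = all legitimate groups
          depth: int      # 1 = consultative only ... 5 = involved in all stages

          def __post_init__(self) -> None:
              # The dimensions are orthogonal: any combination of valid
              # scores is permitted, so each rating is only checked for range.
              for name in ("control", "diversity", "depth"):
                  score = getattr(self, name)
                  if not 1 <= score <= 5:
                      raise ValueError(f"{name} must be rated 1-5, got {score}")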

      Box 2: Dimensions of Form in CAE Practice

      Figure 1 shows how the three dimensions can be used as a device to differentiate among CAE family members by plotting rating scores in three-dimensional space. Hypothetically, in the figure, we can see that the practical and transformative participatory evaluation streams would be located in two different sectors of the device despite being quite similar on two of the three dimensions. The dimension on which they differ is diversity: typically, a wide range of stakeholders are actively involved in transformative participatory evaluations, whereas in practical participatory evaluation, engagement with the knowledge production function is most often limited to primary users, those with a vested interest in the program and its evaluation. We can also see that conventional stakeholder-based evaluation is rated as quite distinct from the other two hypothetical examples. In this approach, originally described by Bryk (1983), participating program community members are essentially in a consultative role: the evaluator tends to control decision-making about the evaluation, and stakeholder participation in the knowledge production function is limited to such activities as helping to set evaluation objectives and perhaps interpreting findings.
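
      The sketch below illustrates how the device separates family members: it places the three approaches at invented coordinates and computes their Euclidean distances in the rating space. The scores are hypothetical and do not reproduce the actual placements in Figure 1.

      import math

      # Hypothetical (control, diversity, depth) scores, each rated 1-5;
      # invented for illustration, not the actual placements in Figure 1.
      practical = (4, 2, 4)          # practical participatory evaluation
      transformative = (4, 5, 4)     # transformative participatory evaluation
      stakeholder_based = (1, 2, 2)  # conventional stakeholder-based evaluation

      # Euclidean distances in the three-dimensional rating space.
      print(math.dist(practical, transformative))     # 3.0: differ only on diversity
      print(math.dist(practical, stakeholder_based))  # ~3.61: differ on control and depth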

      Figure 1 ■ Dimensions of form in CAE (adapted from Cousins & Chouinard, 2012)

      This device can be used to describe what any given CAE family member looks like in practice at any given point in time. It is noteworthy that CAE projects evolve over time and can actually change along one or more of these dimensions of form as the project progresses. For example, in a hypothetical empowerment evaluation where the evaluator starts out in the role of critical friend and/or facilitator, deferring control of decision-making to program community members, he/she may need to take a more directive role if the project bogs down in controversy and/or acrimony among participating stakeholders. Or, in practical participatory evaluation, initial deep engagement with evaluation implementation on the part of stakeholders may wane in the face of competing job demands; ultimately, responsibility for implementing the evaluation may fall back to the evaluator. In retrospective ratings of CAE projects, however, it seems likely that rating scores would be more holistic, representing an aggregate or average for the project.
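
      As a simple illustration of this retrospective point, a holistic score on one dimension might be approximated by averaging ratings taken at successive points in the project; the numbers below are, again, hypothetical.

      from statistics import mean

      # Hypothetical control-dimension ratings at four points in one project
      # (on the illustrative anchors above, 5 = stakeholder-controlled): the
      # evaluator gradually assumes a more directive role.
      control_over_time = [5, 5, 3, 2]

      # A retrospective, holistic rating approximated as the project average.
      print(mean(control_over_time))  # 3.75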

      A while back, we actually challenged the assumption that these three process dimensions were fundamental and toyed with a five-dimensional version of the framework that took into account stakeholders' differential access to power, and manageability (Cousins, 2005; Weaver & Cousins, 2004). Later, however, Daigneault and Jacob (2009) published a logical critique of the framework and concluded that, in fact, the three original dimensions should be considered fundamental. Consequently, we have once again embraced the three-dimensional framework in considering what CAE looks like in practice (Cousins & Chouinard, 2012).

      Why Do We Need Principles to Guide Practice?

      Why and How Are Principles Valuable?

      Effectiveness principles to guide
