developing visual representations in relation to CAE. First, we would argue that the CAE family members are properly thought of as approaches and not necessarily models or theories (see, e.g., Cousins, 2013). Visual representation of practical and transformative participatory approaches runs the risk of unintentionally framing them as prescriptive models or prototypes rather than as the fluid, context-sensitive approaches they are intended to be. We hasten to acknowledge Alkin’s point (2011, personal communication) that evaluation theories represented visually are ideals, and that their application in practice will be very much influenced by context. Alkin also subscribes quite directly to the notion that thorough analysis of the organizational, community, and political context of a program is essential to evaluation practice (e.g., Alkin & Vo, 2018).

      Figure 2 ■ Essential features of CAE (adapted from Cousins et al., 2013)

      The importance of context cannot be overstated, which is why the systematic analysis of contextual exigencies before deciding the purpose and form of CAE is critical. As we have represented in Figure 2, program context is an ever-present filter through which subsequent activities and decisions flow. Essentially, context defines what we do, why we do it, how we do it, and even the methods that we use. Borrowing from Snowden and Boone’s (2007) Cynefin framework, we previously argued that contexts can vary from simple to complicated, to complex, and even to chaotic situations (Cousins et al., 2013). Simple contexts are relatively predictable and controlled, and cause-and-effect relationships are well understood. In such cases, identified best practices may be warranted as solutions to important problems. In complicated contexts, more than one alternative solution may be worthy of consideration, whereas in complex situations, where a high degree of uncertainty and unpredictability exists, cause and effect may be unknowable in advance. In fact, context-specific approaches that emerge in practice may be the best course of action. Finally, in chaotic contexts, uncertainty may be so extreme and turbulent that cause-and-effect relationships are ultimately unknowable. Each of these program contexts is unique in some sense and would require differentiated approaches to program evaluation, particularly CAE. It is imperative, therefore, that contextual exigencies are well understood before deciding what CAE looks like and what it can be expected to accomplish. This being the case, we are heartened by the recent contribution of Vo and Christie (2015), who developed a conceptual framework to support RoE focused on evaluation context.

      Context is at the center of all three of the justifications, described above, for developing principles to guide CAE practice. With the emergence of a wide range of family members and increasing enthusiasm for using CAE around the globe, it is essential to understand the implications of cultural and sociogeographic situations. Although there is some merit in compartmentalizing different approaches to CAE, we must guard against evaluators identifying with specific approaches and therefore being consciously or unconsciously drawn toward implementing them in situations that are not ideal. Finally, will the visual representation of theory inadvertently diminish the centrality and importance of contextual analysis? For all of the foregoing warrants, and on the basis of privileging context, we argue that it is now prudent and necessary to develop a set of effectiveness principles to guide CAE practice. In the next section we describe the systematic, empirical approach to the problem that we took and the initial set of principles that we developed and validated.

      Evidence-based Principles to Guide CAE Practice

      Systematic Approach

      It will come as no surprise to those familiar with our work that the approach we took to developing the CAE principles was empirical. We have long supported the concept of RoE, having identified it as an underdeveloped yet increasingly important area of our field (e.g., Cousins & Chouinard, 2012). Through systematic inquiry, we sought to tap into this domain of evaluation practice to understand what characterizes effective work and differentiates it from practice that is less so. Other approaches to principle development have been heavily grounded in practice, relying either on the experience of renowned experts in the domain (e.g., DE principles, Patton, 2011) or on fairly intensive consultative, deliberative processes (e.g., empowerment evaluation principles, Fetterman & Wandersman, 2005). In both instances, proponents draw heavily from practical wisdom. Our intention was to do the same, but to do so through a rather significant data collection exercise.

      Themes (reasons) emerged through an analysis of the qualitative responses, and these provided the basis for our development of higher-order themes (contributing factors) and, ultimately, draft principles. Some themes we considered to be particularly critical because they represented both a reason why a given project was perceived to have been highly successful and a reason why, in a separate instance, a project was perceived to have been limited. For example, in a hypothetical CAE that had ample resources, this factor may have contributed substantially to success. Conversely, in another project, a lack of resources may have been limiting and intrusive. We called these critical factors. Ultimately, we generated a set of eight principles and then asked 280 volunteer members of our sample to look over the 43-page draft as part of a validation exercise. Given the magnitude of this task (realistically, requiring at least a half day), we greatly appreciated the generosity of the 50 participants who responded.

      Based on the feedback, we made a range of changes to the wording and characteristics of the draft principles and developed the final version of the preliminary set, subsequently published in the American Journal of Evaluation (Shulha, Whitmore, Cousins, Gilbert, & Al Hudib, 2016).

      Description of the CAE Principles

      Figure 3 provides an overview of the set of eight CAE principles resulting from our validation process. There are at least four important considerations to bear in mind in thinking about this set. First, the set is to be thought of as a whole, not as a pick-and-choose menu. This aligns with the point made above that each and every principle in the set, if followed, is expected to contribute toward the desired outcome, that is, a successful CAE project. It is therefore possible for evaluation practitioners to follow each of the principles without risk of confusing or confounding purposes. The extent to which each principle is followed or weighted will depend on context and the presenting information needs. A second consideration is associated with the individual principles being differentially shaded and yet separated
