involved, evaluators should ask what aspects of their normal routine will be removed from their list of responsibilities during the evaluation. This would be one way to set appropriate expectations. Evaluators need to monitor stakeholder engagement, which can be eroded by emerging conditions within the evaluation context, and perhaps develop strategies to motivate staff. Another aspect of interest is the skill set that stakeholder participants bring to the project and the extent to which evaluators can help to match skills and interests to the tasks at hand. Program and organizational stakeholders are also a key resource for program content and contextual knowledge. “The evaluator was not an expert in the program content area and absolutely needed stakeholders to provide clarity about how the data would be used and what the boundary conditions were for asking questions of intended beneficiaries” (study participant).

      Monitor Evaluation Progress and Quality (evaluation design, data collection): Just as program and organizational stakeholders can help evaluators to understand local contextual exigencies that bear upon the program being evaluated, evaluators have a significant role in contributing to the partnership. The principle underscores the critical importance of data quality assurance and the maintenance of professional standards of evaluation practice. One aspect of the role concerns evaluation designs and ensuring that any adjustments preserve design integrity and data quality. Such adjustments may be necessary in the face of changes in the evaluation context. Acknowledging, and sometimes confronting one another about, a deteriorating fit between the intended evaluation design and the capacity of the collaboration to implement it can be productive and critical to salvaging evaluation efforts. Challenges with data collection are particularly salient and critical to ensuring data quality. It is essential for evaluators not to assume that stakeholders appreciate the implications of data quality for findings and outcomes, as the following excerpt suggests: “Front-line staff, who are responsible for collecting the data, did not understand the importance of getting it collected accurately.” Given this instructional role for evaluators, it is worth building funding for such professional development processes into the evaluation. Such attention may reduce the amount of monitoring necessary as the project unfolds and can go a long way toward preserving the integrity of the evaluation.

      Promote Evaluative Thinking (inquiry orientation, focus on learning): The principle inspires the active and conscious development of an organizational culture of appreciation for evaluation and its power to leverage social change. Evaluative thinking is an attitude of inquisitiveness and belief in the value of evidence, and CAE provides a good opportunity to develop it. When evaluative thinking is enhanced through collaboration, evaluation processes and findings become more meaningful to stakeholders, more useful to different decision makers, and more organizationally effective. The development of an inquiry orientation is an organizational culture issue and will not happen overnight, but certainly evaluators can profitably embrace a promotional stance as evaluation unfolds. Significant energy may be well spent helping collaborators to become invested in the learning process and to be prepared for the unexpected. In essence, evaluators would do well to be opportunistic in this respect, as the following excerpts suggest: “Because of the stakeholder commitment, results were used as an opportunity to learn and grow”; “stakeholders were willing to accept negative or contrary results without killing the messenger.” Organizational and program stakeholders who embrace the learning function of evaluation will have greater ownership of the evaluation and will be less likely to view it as something for someone else to do.

      Follow Through to Realize Use (practical outcomes, transformative outcomes): To what extent is the evaluation a valuable learning experience for the stakeholder participants? The principle promotes the conscious consideration of the potential for learning, capacity building, and other practical and transformative consequences of the evaluation. Implicated are evaluation processes and findings, as well as the evaluator’s role in facilitating these desirable outcomes. Practical outcomes at the organizational level influence program, policy, and structural decision-making, and they are seen in changes in disposition toward the program or evaluation and in the development of program skills, including systematic evaluative inquiry. To the extent that stakeholders are directly engaged with knowledge production, the evaluation will have greater success in getting a serious hearing when program decisions are made. Transformative outcomes reflect change in the way organizations and individuals view the construction of knowledge and in the distribution and use of power and control. Enhanced independence and democratic capacities are the sorts of social change that could be labelled transformative. Working collaboratively can deepen the sense of community among stakeholders and enhance their empathy toward intended beneficiaries through the development of their understanding of complex problems. Transformative outcomes are more likely when the facilitating evaluator is skillful in promoting inquiry and has expertise in human and social dynamics. Being prepared to work toward transformative outcomes almost certainly means being prepared to work in contexts where there are differences and even conflict. Given the interplay between practical and transformative outcomes, evaluators working on CAE would be wise to negotiate with stakeholders about (i) the range of possible outcomes given the scope of the evaluation, (ii) the outcomes most worthy of purposeful attention, and (iii) how joint efforts might best facilitate these outcomes.

      The foregoing description of the principles provides a good overview to support the development and implementation of CAE. The principles are grounded in the rich experiences of a significant number of practicing evaluators. Their credibility is enhanced by virtue of the comparative design we used to generate the evidence base as well as the validation exercise described above. In his recent book on principle-based evaluation, Patton (2017) explicitly acknowledged their quality: “For excellence in the systematic and rigorous development of a set of principles, I know of no better example than the principles for use in guiding collaborative approaches to evaluation” (p. 299).

      But in and of themselves, mere descriptions of the principles remain somewhat abstract. In order to enhance their practical value in guiding CAE decision-making and reflection, we developed, for each principle, summary statements of evaluator actions and principle indicators in the form of questions that could be posed as an evaluation project is being planned or implemented. This information is summarized in Table 1 and was included in an indicator document to complement descriptions of the principles and their supportive factors.

      The actions and indicator questions provided in the Table (and in the indicator document) have not been subjected to any formal review or validation. They are the result of our own collective reflections on CAE and are therefore indirectly based on knowledge garnered through working with the base data set. Nevertheless, we offered these processes and indicators as a way for potential users of the CAE principles to apply them in practice. Notably, the suggested actions for evaluators to consider in following or applying the principles call for a range of interpersonal and soft skills, including facilitation, negotiation, promotion, and monitoring. Such skills, we would argue, come through considerable practical experience; they are not likely to be easily picked up in courses or workshops.

      Having provided a summary overview of the set of eight effectiveness principles for CAE, and associated actions and indicators, we now turn to considerations about how these principles may be applied to the benefit of evaluators, program and organizational stakeholders, and the evaluation community at large.

      Envisioned Uses and Applications of CAE Principles

      In our view, a range of possibilities
