2015 in Review – Turmoil in the World – Transition for Planning, Monitoring and Evaluation?
December 22, 2015
2015 was a year of some turmoil on the world stage. It was also the International Year of Evaluation. For many of us in evaluation and performance monitoring practice in Canada, however, there were encouraging signs this year. Here are five inter-related phenomena that bode well for performance planning, monitoring, evaluation and management going forward:
1) This year’s Canadian fall election brought a new government with a clear and direct interest in ‘real change’, transparency and evidence-based policy and programs. The importance of this tone at the top cannot be overstated, and the interest generated at all levels of public and related enterprise is palpable. A door has opened.
2) The notion of using a results chain as a ‘useful’ basis for evaluation has come to the fore. See, most recently, Mayne, J. (2015). “Useful Theory of Change Models.” Canadian Journal of Program Evaluation 30(2): 119-142, and Mayne, J. and N. Johnson (2015). “Using Theories of Change in the Agriculture for Nutrition and Health CGIAR Research Program.” Evaluation 21(4): 407-428. These articles propose an approach we have been advocating for some time and help create the conditions for real progress, and maybe even transformation, in our function.
3) In item 2 above, the notion that results chains apply to the set-up and delivery arrangements for a program every bit as much as to the actual execution of the program has helped many of us explain how and why certain programs continue to fail, or persist in ‘bending’ in implementation. As the old proverb goes, there’s “many a slip ’twixt cup and lip.” In many cases we can see that the delivery arrangements simply don’t fit the value proposition (the theory of change). For example, annual funding renewals and heightened renewal uncertainty in program areas requiring long-term, certain and steady funding wreak havoc on management and often create perverse effects. There are scores of other examples. For now, suffice it to say that there is strong potential for cumulative learning here. If you have time, consider what we said about the Economic Action Plan and whether it actually created economic stimulus through its infrastructure funding in 2009: https://www.pmn.net/wp-content/uploads/Why-Cash-for-Clunkers-Works.pdf Hopefully we can avoid similar folly ahead.
4) Following from the above, the notion that there can be archetypes of results logic, or program theories, for different types of programs is starting to ‘take’. We have seen that many funding programs operate on similar results logic and principles, as do many regulatory and information / advisory programs. With that in mind, it stands to reason that there is much to be gained in learning what works (and to what extent), with whom, under what conditions and why, by starting from a synthesis of what we know about the key contextual factors that have allowed programs or policies with similar patterns to work. This also allows us to develop appropriate monitoring, measurement and evaluation schemes, and to set appropriate benchmarks.
5) Finally, points 1-4, along with our immensely advanced recent ability to communicate quickly and easily with one another, reinforce the emerging view that monitoring and evaluation are part of a learning approach, and that this works best as a team sport. Specialists and generalists, line managers and corporate reviewers, delivery agents and users / clients work best when they work together to understand the need; the relevance of a given policy or program, or set of policies and programs, to a given situation and context; how things are intended to work (and why we think they will); how they actually work in specific sets of conditions, with and for whom, and why; and what can be done about it.