

2016 in Review – and we thought last year was in turmoil!

I guess I am an analyst at heart. I always find it interesting to look back at what one observed at a certain point in time and then to see what has happened since. This year I am doing that explicitly: revisiting the observations we made about last year – observations that were really admonitions about the future – and then reporting on the community’s progress as I have perceived it since then.

1. New Government – Emphasis on Results

2015 Observation:  This year’s Canadian fall election brought a new government with a clear and direct interest in ‘real change’, transparency and evidence-based policy and programs. The importance of this tone at the top cannot be overstated – and the interest generated at all levels of public and related enterprise is palpable. A door has opened.

2016 Update:  There has been significant activity on this file since the new Canadian Government came into power. We may, however, be facing a situation of rushing in some areas where we really need to think things through. The results and delivery methodology introduced to the Federal Government in February 2016 arguably needs a fair bit of adjustment to suit Canadian Federal Government circumstances. Using evidence to support decision-making means much more than developing more scorecards or simple-minded delivery plans. After all, people have been trying to get this right for decades – world-wide. The good news is that Canada may have some of the answers in its results logic-based approach – especially for the hard-to-measure areas. What we need to do is make sure all relevant wisdom is brought to the table – even if this causes some minor delays in implementation.  (See the Community of Practice discussion below.)

2.  Results Chain as a ‘Useful’ Basis for Measurement and Evaluation

2015 Observation:  The notion of using a results chain as a ‘useful’ basis for evaluation has now emerged directly in the literature. See, most recently, Mayne, J. (2015). “Useful Theory of Change Models.” Canadian Journal of Program Evaluation 30(2): 119-142, and Mayne, J. and N. Johnson (2015). “Using Theories of Change in the Agriculture for Nutrition and Health CGIAR Research Program.” Evaluation 21(4): 407-428. These articles propose an approach we have been advocating for some time and create part of the conditions for real progress – and maybe even transformation – in our function.

2016 Update:  Experience continues to suggest that a results chain or results logic approach is key to getting performance planning, measurement and evaluation ‘right’. At PPX we started a community of practice for science-related areas (http://ppx.ca/en/exchange-communities-practice/), and it seems clear that the thoughtful, results logic-based approach developed across several Canadian provinces and some federal agencies over the past seven years (and, de facto, longer) is proving useful in helping to plan, analyze, report on and manage science-based policies, programs and initiatives.  There appears to be potential in a number of other areas as well.
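For readers who, like me, find the idea easier to grasp in concrete form, here is a minimal illustrative sketch – not drawn from any particular project, and with names that are purely my own shorthand – of how a results chain can be captured as structured data, with each link carrying its own assumptions and candidate indicators. It is only one possible way to express the idea:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Link:
    """One step in a results chain, with the assumptions that must hold for it to work."""
    description: str
    assumptions: List[str] = field(default_factory=list)  # critical success factors for this link
    indicators: List[str] = field(default_factory=list)   # what we would measure to test it

@dataclass
class ResultsChain:
    """A simple results logic: activities -> outputs -> outcomes, each link carrying its own theory."""
    activities: List[Link]
    outputs: List[Link]
    immediate_outcomes: List[Link]
    intermediate_outcomes: List[Link]
    ultimate_outcomes: List[Link]

# Hypothetical example: a science grant program (content is illustrative only).
chain = ResultsChain(
    activities=[Link("Fund research teams", ["Stable multi-year funding is in place"])],
    outputs=[Link("Peer-reviewed findings produced",
                  ["Teams can attract qualified staff"],
                  ["Publications per funded project"])],
    immediate_outcomes=[Link("Findings reach policy analysts", ["Knowledge-transfer channels exist"])],
    intermediate_outcomes=[Link("Evidence informs program design")],
    ultimate_outcomes=[Link("Improved health and environmental outcomes")],
)
```

The point of writing it down this way is simply that every link in the chain becomes something you can plan measurement against and test with evidence, rather than a box on a diagram.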

3.  Program Delivery Arrangements Affect Results

2015 Observation:  In item 2 above, the notion that results chains apply to the set-up and delivery arrangements for a program every bit as much as to the actual execution of the program has helped many of us explain how and why certain programs continue to fail or persist in ‘bending’ during implementation. There is “many a slip ’twixt cup and lip”, as the old proverb has it. In many cases we can see that the delivery arrangements just don’t fit the value proposition (Theory of Change). For example, annual funding renewals and heightened renewal uncertainty in program areas requiring long-term, certain and steady funding wreak havoc on management and often create perverse effects. There are scores of other examples. For now, suffice it to say that there is strong potential for cumulative learning here. If you have time, consider what we said about the Economic Action Plan and whether it actually created economic stimulus through its infrastructure funding in 2009:  http://www.pmn.net/wp-content/uploads/Why-Cash-for-Clunkers-Works.pdf.  Hopefully we can avoid similar folly ahead.

2016 Update:  Last year we used an older example – infrastructure spending under the Economic Action Plan (one that may become relevant again) – to show how faulty implementation negated the benefit of a stimulus initiative. This year I have used a case example on Shared Services, derived from a newspaper’s investigative analysis, to suggest the same thing. See http://ottawacitizen.com/news/national/built-to-crash-the-ugly-sputtering-beginning-of-shared-services-and-how-politics-conspired-against-it. (Email me if you would like to see my presentation on this.)

Going forward, we will need to augment results and delivery plans with a recognition that the delivery governance, machinery and design have a ‘theory’ of their own (with their own critical success factors) and that any analysis of performance should include consideration of this. The new climate change agreement signed by federal and provincial actors is a case in point.  Given its flexibility, it is likely to represent an excellent learning opportunity to analyze governance, design and implementation, as well as the relative merits of carbon taxation vs. cap and trade vs. regulatory or voluntary approaches. The fact is that groups such as the Auditor General of Ontario are already second-guessing the results logic in so many words. (See http://www.auditor.on.ca/en/content/annualreports/arreports/en16/v1_302en16.pdf. For many of us the review goes far beyond the bounds of what constitutes an audit.) The report does this without the benefit of what might have been a fuller examination of historical comparative evidence on the various elements of the implementation and theory of change in effect for Ontario’s cap and trade system.  In my view we need to do such analyses in a more systematic and structured fashion – as evaluations.
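To make the point concrete, the earlier sketch can be extended so that the delivery arrangement itself carries its own assumptions alongside the program’s results chain. Again, this is purely illustrative; the arrangement and success factors below are hypothetical, not drawn from any actual program file:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeliveryArrangement:
    """The governance/machinery side of a program, which has a 'theory' of its own."""
    description: str
    critical_success_factors: List[str] = field(default_factory=list)

# Hypothetical example: the delivery theory behind an infrastructure stimulus program.
delivery = DeliveryArrangement(
    description="Cost-shared municipal infrastructure funding, renewed annually",
    critical_success_factors=[
        "Projects can be approved and started within the stimulus window",
        "Funding certainty lasts long enough for partners to plan and hire",
        "Oversight bodies agree on what counts as a ready-to-go project",
    ],
)

# An evaluation would test these delivery assumptions alongside the program's own results chain.
for factor in delivery.critical_success_factors:
    print(f"Evidence needed on: {factor}")
```

The design point is simply that when the delivery assumptions fail, the program can ‘fail’ even where the underlying value proposition was sound – which is exactly what the stimulus and Shared Services examples suggest.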

4. Archetypal Program Theories Inform Analysis

2015 Observation:  Following from the above, the notion that there can be archetypes of results logic or program theories for different types of programs is starting to ‘take’. We have seen that many funding programs operate on similar results logic and principles, as do many regulatory and information/advisory programs. With that in mind, it stands to reason that there is much to be gained in learning what works (to what extent), with whom, under what conditions and why – by starting from a synthesis of what we know about the key contextual factors that have allowed programs or policies showing similar patterns to work. This also allows us to develop appropriate monitoring, measurement and evaluation schemes, as well as to set appropriate benchmarks.

2016 Update:  Stay tuned regarding the work of our science community of practice, as well as other work we have been doing vis-à-vis regulatory, corporate, policy and other instrument types – carrots, sticks, sermons and so on. We are starting to see an extensive amount of common ground across groups using similar archetypes. The results may be transformational in terms of being able to review performance cost-effectively.
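As a rough illustration of what an archetype buys us – and nothing more than that; the content below is hypothetical shorthand rather than an agreed standard – a funding-program archetype can be written down once and then instantiated for a specific program, bringing its results logic, known contextual factors and candidate indicators with it instead of starting each measurement plan from a blank page:

```python
# Hypothetical archetype for grant-and-contribution style funding programs (illustrative only).
FUNDING_PROGRAM_ARCHETYPE = {
    "results_logic": [
        "Funds disbursed to recipients",
        "Recipients carry out eligible activities",
        "Activities produce intended outputs",
        "Outputs contribute to intended outcomes",
    ],
    "common_contextual_factors": [
        "Recipient capacity to deliver",
        "Stability and timing of funding",
        "Fit between eligibility criteria and the target population",
    ],
    "candidate_indicators": [
        "Uptake rate among intended recipients",
        "Share of projects completed on time and on budget",
    ],
}

def instantiate(archetype: dict, program_name: str) -> dict:
    """Start a program-specific measurement plan from the shared archetype."""
    return {"program": program_name, **archetype}

plan = instantiate(FUNDING_PROGRAM_ARCHETYPE, "Hypothetical clean-technology grant program")
```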

5.  Monitoring and Evaluation – a Collective Learning Approach

2015 Observation:  Finally, points 1-4, along with our vastly improved ability to communicate quickly and easily with one another, underscore the emerging view that monitoring and evaluation are part of a learning approach and that this works best as a team sport. Specialists and generalists, line managers and corporate reviewers, delivery agents and users/clients work best when they work together to understand the need; the relevance of a given policy or program (or set of policies and programs) to a given situation and context; how things are intended to work (and why we think they will); how they actually work in specific sets of conditions, with and for whom, and why; and what can be done about it.

2016 Update:  The team sport aspect of review work has begun to take hold in more areas over the past year. In our practice it is now standard to hold workshops on results logic among key stakeholders – at the beginning, during and sometimes after major PM+E projects and exercises. The question “What works with/for whom (to what extent), in what conditions and why?” is emerging as a key to engaging diverse groups in a collective learning journey that can complement, or sometimes even replace, the adversarial ‘accountability’ modus operandi typical of conventional audit and many cost-cutting review approaches. (The realist approach, which focusses on this question, was probably one of the two most prominent approaches showcased at the European Evaluation Society conference in October.)  At the end of the day, our experience has been that acceptance of negative findings improves when we have engaged key groups on a collective learning journey (and when we have substantively reviewed evidence against the results logic of the delivery arrangements and the enabling environment, as well as outcomes).

So the themes expressed last year have evolved – but for me they all still ring true, some with more urgency than others. We look forward to working with many of you to help advance the file in these exciting and yet extremely challenging times in Canada and in our world.