2014 Year End Message: The critical link between ‘what we want’ and ‘how we get there’

S. Montague, December 2014

There’s many a slip ‘twixt the cup and the lip (Old English proverb)

A key theme emerging from our practice and workshops this year – in Canada, Europe and Australia – has been the need to understand the critical link between the results we want from an initiative and the ways and means by which we design, authorize and deliver that initiative. As suggested in presentations this year (see, for example, Does Your Implementation Fit Your Theory of Change?[1]), planning, review and management functions need to consider, categorize, research and then investigate the contexts and conditions that allow certain policy and program designs to work in given situations.

There are rewards from researching theories of change and theories of implementation – and from laying them out before one starts measuring and assessing a program or policy. The benefits we have found include more relevant and streamlined reviews and evaluations, more practical plans, more valid measurement regimes and greater insight in reports. While these are well worth the effort in and of themselves, I believe that the most important benefit of an approach that lays out the basic theory of change and a linked theory of action or implementation is in fact better stakeholder engagement.

What is that, you say? You are going to lay out elegant theories of a program that include design and delivery elements, context and change theory – and you are somehow going to engage regular people in the dialogue? (What? Logic models are not just for nerds anymore?) The answer is yes – and it works. The latest evidence I would offer comes from the work of the students in our Carleton University Diploma in Public Policy and Evaluation this year. They worked closely with client groups including a program assisting community social planning councils, an NGO educating people on how to be more participative citizens, and a residential community group promoting the concept of ‘safe people’. In each case the project teams participatively helped proponents and other stakeholders better understand what they were doing, with whom and why – and then established both key success factors and means to measure progress and evaluate performance. These projects were collective learning journeys – and they were rewarding for all concerned. Better still, they have provided the basis for more generative learning going forward.

So part of the answer is to consider program theory in a participative fashion. The other part is to stop artificially separating so-called ‘impact’ studies and evaluations from so-called ‘process’ or formative evaluations (and, while we are at it, let’s stop creating separate fiefdoms for ‘performance audits’, SROI studies, ‘scorecard and dashboard analyses’, ‘quality reviews’ and other forms of performance reporting). The learning we have been gaining ‘on the ground’ suggests that processes – and, even before processes, broader contextual factors, authorities and governance arrangements – profoundly affect ‘impact’, i.e. the achievement of desired outcomes related to a policy or program objective. In much of our recent work we have been trying to look at performance in terms of systems and the interplay of differing actors (see, for example, Telling the Healthy and Safe Food ‘Systems’ Story). This has profound implications for how people see performance indicators, planning and reporting.

Given the above, a key part of transforming review functions into learning vehicles will be to recognize the linkages between processes and outcomes – or, more directly, the linkages and interplay among system stakeholders (program proponents being just one of them) in achieving objectives. This means looking at context and implementation characteristics (and the system actors involved) as well as at the chain of results for subjects, users and beneficiaries.

2015 is the International Year of Evaluation. Let us make it a goal to move evaluation beyond its current state in many organizations – a somewhat isolated review and accountability function – into a role where it can really promote strategic, tactical and operational learning. An approach that combines carefully researched analysis and synthesis of what we know has worked, for whom, under what conditions and why in the past with originally gathered evidence about the present can be offered in a fashion that fundamentally engages all key stakeholders, provides for accountability and supports collective learning for the future. Let’s help evaluation become a core management function and a main vehicle for evidence-based policy and public administration.

[1] We got almost 200 people from across disciplines out on a Monday night in Melbourne for this session – proving that the word ‘theory’ in the title is not an interest killer everywhere. (For a video of the event, see https://www.youtube.com/watch?v=gKg6NCbD6KM&feature=youtu.be.)