Archive for the ‘Uncategorized’ Category


2016 in Review – and we thought last year was in turmoil!

I guess I am an analyst at heart. I always find it interesting to look back at what one observed at a certain point in time and then to see what has happened since.  This year I am going to do that explicitly by revisiting observations we/I made about last year which were really sort of admonitions about the future.  I am then reporting on my perceived community progress since then.

1. New Government – Emphasis on Results

2015 Observation:  This year’s Canadian Fall election brought a new government with a clear and direct interest in ‘real change’, transparency and evidence-based policy and programs. This tone at the top cannot be overstated – and the interest generated at all levels of public and related enterprise is palpable. A door has opened.

2016 Update:  There has been significant activity on this file since the new Canadian Government came into power. We may, however, be facing a situation of rushing in some areas where we really need to think things through. The results and delivery methodology introduced to the Federal Government in February 2016 arguably needs a fair bit of adjustment to suit Canadian Federal Government circumstances. Using evidence to support decision-making means much more than developing more scorecards or simple-minded delivery plans. After all – people have been trying to get this right for decades, world-wide. The good news is that Canada may have some of the answers in its results logic based approach – especially for the hard-to-measure areas. What we need to do is to make sure all relevant wisdom is brought to the table – even if this causes some minor delays in implementation. (See Community of Practice discussion below.)

2.  Results Chain as a ‘Useful’ Basis for Measurement and Evaluation

2015 Observation:  The notion of using a results chain as a ‘useful’ basis for evaluation has emerged directly.  Most recently see Mayne, J. (2015). “Useful Theory of Change Models.” Canadian Journal of Program Evaluation 30(2): 119-142 and Mayne, J. and N. Johnson (2015). “Using Theories of Change in the Agriculture for Nutrition and Health CGIAR Research Program.” Evaluation 21(4): 407-428. These articles propose an approach we have been advocating for some time and create part of the conditions for real progress and maybe even transformation in our function.

2016 Update:  Experience continues to suggest that a results chain or results logic approach is a key to getting performance planning, measurement and evaluation ‘right’. At PPX we started a community of practice for science related areas (http://ppx.ca/en/exchange-communities-practice/) and it seems clear that the thoughtful, results logic-based approach developed across several Canadian provinces and some federal agencies over the past seven years (and de facto longer) is proving useful in helping to plan, analyze, report and manage science based policies, programs and initiatives.  There appears to be potential in a number of other areas as well.

3.  Program Delivery Arrangements Affect Results

2015 Observation:  In item 2 above, the notion that results chains apply to the set-up and delivery arrangements for a program every bit as much as the actual execution of the program has helped many of us to explain how and why certain programs continue to fail or persist in ‘bending’ in implementation. There is “many a slip ‘twixt the cup and the lip”, as the old proverb goes. In many cases we can see that the delivery arrangements just don’t fit the value proposition (Theory of Change). For example, annual funding renewals and heightened renewal uncertainty in program areas requiring long-term, certain and steady funding wreak havoc on management and often create perverse effects. There are scores of other examples. For now suffice it to say that there is some strong potential for cumulative learning here. If you have time – consider what we said about the Economic Action Plan and whether it actually created economic stimulus through its infrastructure funding in 2009: http://www.pmn.net/wp-content/uploads/Why-Cash-for-Clunkers-Works.pdf. Hopefully we can avoid similar folly ahead.

2016 Update:  Last year we used an old example regarding infrastructure spending (though it may be relevant again) on the Economic Action Plan to show how faulty implementation negated the benefit of a stimulus initiative. This year I have used a case example on Shared Services derived from a newspaper investigative analysis to suggest the same thing. See http://ottawacitizen.com/news/national/built-to-crash-the-ugly-sputtering-beginning-of-shared-services-and-how-politics-conspired-against-it. (Email me if you would like to see my presentation on this.)

Going forward – we will need to augment results and delivery plans with a recognition that the delivery governance, machinery and design have their own ‘theory’ to them (with their own critical success factors) and that we should include consideration of this in any analysis of performance. The new climate change agreement signed by federal and provincial actors is a case in point. Given its flexibility, it is likely to represent an excellent learning opportunity to analyze governance, design and implementation, as well as the relative merits of carbon taxation vs. cap and trade vs. regulatory or voluntary approaches. The fact is that groups such as the Auditor General of Ontario are already second-guessing the results logic in so many words. (See http://www.auditor.on.ca/en/content/annualreports/arreports/en16/v1_302en16.pdf. For many of us the review goes far beyond the bounds of what constitutes an audit.) The report does this without the benefit of what might have been a fuller examination of historical comparative evidence on the various elements of the implementation and theory of change in effect for Ontario’s cap and trade system. In my view we need to do such analyses in a more systematic and structured fashion – as evaluations.

4. Archetypal Program Theories Inform Analysis

2015 Observation:  Following from the above – the notion that there can be archetypes of results logic or program theories for different types of programs is starting to ‘take’. We have seen that many funding programs operate on similar results logic and principles, as do many regulatory and information/advisory programs. With that in mind it stands to reason that there is much to be gained in learning what works (to what extent) with whom under what conditions and why – by starting from a synthesis of what we know about the key contextual factors that have allowed programs or policies showing similar patterns to work. This also allows us to develop appropriate monitoring, measurement and evaluation schemes, as well as to set appropriate benchmarks.

2016 Update:  Stay tuned re: the work of our science community of practice, as well as other work we have been doing vis-à-vis regulatory, corporate, policy and other ‘carrot, stick or sermon’ policy instrument types. We are starting to see an extensive amount of common ground across groups using similar archetypes. The results may be transformational in terms of being able to cost-effectively review performance.

5.  Monitoring and Evaluation – a Collective Learning Approach

2015 Observation:  Finally – points 1-4, along with our immensely advanced recent ability to communicate quickly and easily with one another, underscore the emerging view that monitoring and evaluation are part of a learning approach and that this works best as a team sport. Specialists and generalists, line managers and corporate reviewers, delivery agents and users/clients work best when they work together to understand the need, the relevance of a given policy or program or set of policies and programs to a given situation and context, how things are intended to work (and why we think they will), how they actually work in specific sets of conditions, with/for whom, why and what can be done about it.

2016 Update:  The team sport aspect of review work has begun to take hold in more areas over the past year. In our practice it is now standard to hold workshops on results logic among key stakeholders – at the beginning, during and sometimes after major PM+E projects and exercises. The question of “What works with/for whom (to what extent) in what conditions and why?” is emerging as a key to engaging diverse groups in a collective learning journey which can complement or sometimes even replace the adversarial ‘accountability’ modus operandi typical of conventional audit and many cost-cutting review approaches. (The realist approach, which focusses on this question, was probably one of the two most prominent approaches showcased at the European Evaluation Society conference in October.) At the end of the day, our experience has been that acceptance of negative findings improves if we have engaged key groups on a collective learning journey (and we have substantively reviewed evidence against the results logic of delivery arrangements and the enabling environment, as well as outcomes).

So we have had some evolutions on the themes expressed last year – however for me they all still ring true – some with more urgency than others. We look forward to working with many of you to help advance the file in these exciting and yet extremely challenging times in Canada and in our world.

 

 

How a Configurational Contextual Model ‘Trumped’ the Conventional Forecasters: Four I’s Proved to be Powerful Predictors in the 2016 Presidential Election

by Steve Montague

Groups like Pollyvote and Real Clear have been doing a fair bit of ex post diagnosis of which models and approaches fared worse than others in failing to predict Donald Trump’s victory in the 2016 US Presidential election. This article will not rehash those arguments other than to note that all sources have had to admit that what they have called a ‘configurational’ or ‘threshold’ model developed by history professor Allan Lichtman outperformed the more elegant statistical and econometric models on offer.[1]

This has to be seen as a bit of a coup for analysts and evaluators who believe that causation can be thought of as the product of contextual factors playing on any mechanism or set of mechanisms at a given period of time.

The Lichtman Keys can be represented as statements that favor victory for the incumbent party. According to the model, when five or fewer statements are false, the incumbent party is predicted to win the popular vote; when six or more are false, the challenging party is predicted to win the popular vote.

  1. Party Mandate: After the midterm elections, the incumbent party holds more seats in the U.S. House of Representatives than after the previous midterm elections.
  2. Contest: There is no serious contest for the incumbent party nomination.
  3. Incumbency: The incumbent party candidate is the sitting president.
  4. Third party: There is no significant third party or independent campaign.
  5. Short term economy: The economy is not in recession during the election campaign.
  6. Long term economy: Real per capita economic growth during the term equals or exceeds mean growth during the previous two terms.
  7. Policy change: The incumbent administration effects major changes in national policy.
  8. Social unrest: There is no sustained social unrest during the term.
  9. Scandal: The incumbent administration is untainted by major scandal.
  10. Foreign/military failure: The incumbent administration suffers no major failure in foreign or military affairs.
  11. Foreign/military success: The incumbent administration achieves a major success in foreign or military affairs.
  12. Incumbent charisma: The incumbent party candidate is charismatic or a national hero.
  13. Challenger charisma: The challenging party candidate is not charismatic or a national hero.

For each election, Lichtman constructs what amounts to a ‘truth table’ (see Qualitative Comparative Analysis for an explanation of truth tables) – essentially a set of true or false (1, 0) ratings, along with a certainty level, for each of the 13 factors. The evolution of these ratings over time leading up to the November 8th, 2016 US election is a matter of public record. Lichtman now famously predicted a Trump victory as early as September.
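As a minimal sketch (in Python), the threshold rule described above reduces to a simple count over the truth table. The key names and ratings below are purely illustrative labels of my own, not Lichtman’s actual 2016 ratings:

```python
# The 13 Lichtman keys, paraphrased as short labels (illustrative naming only).
KEYS = [
    "party_mandate", "contest", "incumbency", "third_party",
    "short_term_economy", "long_term_economy", "policy_change",
    "social_unrest", "scandal", "foreign_military_failure",
    "foreign_military_success", "incumbent_charisma", "challenger_charisma",
]

def predict(ratings):
    """Predict the popular-vote winner from a {key: True/False} truth table.

    Each True statement favours the incumbent party. Per the model, six or
    more False keys predict a win for the challenging party; five or fewer
    predict a win for the incumbent party.
    """
    false_count = sum(1 for key in KEYS if not ratings[key])
    return "challenger" if false_count >= 6 else "incumbent"

# Illustrative (hypothetical) truth table: start with every key favouring
# the incumbent, then flip six keys to False - the prediction flips too.
example = {key: True for key in KEYS}
for key in ["party_mandate", "contest", "incumbency",
            "policy_change", "incumbent_charisma", "challenger_charisma"]:
    example[key] = False
print(predict(example))  # prints "challenger"
```

The point of the sketch is the design choice: unlike a regression, no key is weighted – the model is purely configurational, turning on whether a threshold number of contextual conditions holds.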

What may be less obvious is the way that the Lichtman factors seem to cover what Pawson has called the four I’s of context: Infrastructural considerations, Institutional considerations, Inter-relational considerations and Individual considerations.[2] Thinking about these categorizations is useful here because there has been a tendency to simplify Lichtman’s findings into concluding that the fate of an incumbent administration is completely ‘up to them’.[3] When you look at the four I’s categories, you can see that each of the Lichtman ‘keys’ fits into at least one of the four I’s. See below:

1: Party Mandate – Infrastructural
2: Contested Nomination – Institutional
3: Incumbent Status – Infrastructural
4: Third Party Challenge – Infrastructural
5: Short-term Economy – Infrastructural
6: Long-term Economy – Infrastructural
7: National Policy Achievement/Shift – Institutional
8: Social Unrest – Infrastructural
9: Scandal – Inter-relational / Individual
10: Foreign Policy Defeats – Inter-relational
11: Foreign Policy Success – Inter-relational
12: Incumbent Charisma – Individual
13: Challenger Charisma – Individual

 

These ratings are mine, and certainly some could be disputed or rendered into more than one category. The important thing, though, is that the range of contextual factors runs from the broad socio-economic and political (products of history and broad circumstances), through decisions largely made by key institutional groups (like national policy shifts), through inter-relational factors like foreign affairs policy wins or defeats, to individual characteristics like the charisma of the incumbent and challenger leaders. So in fact, many of the factors influencing the selection of a president are out of the hands of either the incumbent or the challenger.
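To make the sorting concrete, the mapping above can be sketched as a simple grouping in Python. These are my own single-category ratings; as noted, key 9 (Scandal) straddles two categories, and I have filed it under Inter-relational for simplicity:

```python
from collections import defaultdict

# Each Lichtman key assigned to one of Pawson's four I's, per the table
# above (the author's own ratings - some assignments are debatable).
FOUR_IS = {
    "Party Mandate": "Infrastructural",
    "Contested Nomination": "Institutional",
    "Incumbent Status": "Infrastructural",
    "Third Party Challenge": "Infrastructural",
    "Short-term Economy": "Infrastructural",
    "Long-term Economy": "Infrastructural",
    "National Policy Achievement/Shift": "Institutional",
    "Social Unrest": "Infrastructural",
    "Scandal": "Inter-relational",
    "Foreign Policy Defeats": "Inter-relational",
    "Foreign Policy Success": "Inter-relational",
    "Incumbent Charisma": "Individual",
    "Challenger Charisma": "Individual",
}

# Group the keys by context category, from the broad to the specific.
by_category = defaultdict(list)
for key, category in FOUR_IS.items():
    by_category[category].append(key)

for category in ["Infrastructural", "Institutional",
                 "Inter-relational", "Individual"]:
    print(f"{category}: {len(by_category[category])} keys")
```

Grouped this way, the broad Infrastructural factors dominate the count, which illustrates the point above: much of what decides the outcome sits outside the direct control of either campaign.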

When it comes to strategy, then, the idea is to focus on things you can most easily influence. If you are a challenger, the implication is that one should focus on the factors which could be most easily interpreted as associated with a person and their relationships. Note that some of the main messages of the Trump campaign focused on factor 9, scandal (Clinton’s use of a non-government-sanctioned email account), factor 10, foreign policy (trade deals being ‘horrible’, in addition to being soft on terror, etc.) and factor 12, incumbent charisma/character (characterized by Trump as ‘Crooked Hillary’ and the accusation that she was somehow ‘low energy’).

So the Lichtman model prediction success should suggest two important things for analysts, researchers and evaluators:

  1. Context is critical to understanding outcomes; and,
  2. A configurational ‘cause-effect’ model (possibly ‘sorted’ by the four I’s as shown here, or by some other kind of contextual levelling ranging from the broad to the specific) can help to explain (and predict) results.

In our recent work we have been working on what might be called checklists for factors that affect success in terms of policy and program implementation. For a related article see https://www.pmn.net/wp-content/uploads/Checklist-for-Context-and-Policy-Instruments.pdf. It appears that this latest development validates the further pursuit of such configurational approaches in various fields of application.

Steve Montague (steve.montague@pmn.net) is a partner with PMN, a Fellow of the Canadian Evaluation Society and an adjunct professor at Carleton University in Ottawa, Canada.

[1] Lichtman, Allan J. (2008). The Keys to the White House: A Surefire Guide to Predicting the Next President (2008 ed.). New York, NY: Rowman and Littlefield Publishers. The approach has predicted every Presidential election since 1984.

[2] Pawson, R. and Tilley, N. (1997) Realistic Evaluation Sage https://us.sagepub.com/en-us/nam/realistic-evaluation/book205276.  Pawson outlines a “four Is” framework as follows: Infrastructure (which refers to the wider social, economic, and cultural setting of a program/intervention); Institutional setting (the characteristics of the institution involved); Interpersonal relations (nature and history of key relationships); and, Individuals (characteristics and capacities of stakeholders).

[3] Lichtman himself says this as follows: “The principal historical lesson to be drawn from the Keys is that the fate of an incumbent administration rests largely in its own hands; there is little that the challenging party can do to affect the outcome of an election.” Lichtman, Allan J., The Keys to the White House, op. cit.

Infrastructure Spending Stimulus: Good in Economic Theory – A Fail in Political Economy Practice?

For every complex problem, there is a solution that is simple, neat, and wrong.
– HL Mencken

‘Canada’s national newspaper’, the Globe and Mail, on January 16th, 2016 (Folio infrastructure A8) quoted a ‘simple’ Finance Canada table showing that every dollar spent on infrastructure increases economic growth by 1.5 dollars. The article quotes the Department as concluding that “infrastructure spending is the most effective form of stimulus compared to the other options, including more generous Employment Insurance benefits or tax cuts.” Private sector economists apparently agree with the idea that infrastructure spending stimulus would be significantly positive – boosting economic growth by an estimated half a percentage point.

The problem is that we have been here before, and the evidence suggests that there is a fly in the ointment. In late 2010 Tim Kiladze of Canada’s national newspaper noted that the Economic Action Plan funding put in place by the Government of Canada in 2009 after the 2008 recession – mostly for so-called ‘shovel-ready’ infrastructure – was spent too late to be effective. In fact, some analysts he quoted in 2010 suggested that some infrastructure funding in 2009 was actually delayed while provinces and regions waited to see if their project qualified for federal assistance. See http://tinyurl.com/Infrbust. So the Economic Action Plan infrastructure spending of 2009-10 appears to have potentially de-stimulated the economy in some areas during the Canadian economy’s time of greatest stimulus need! (Systems thinkers call this a classic ‘fix that fails’.) The real problem from my perspective is that some of us predicted this. (See http://tinyurl.com/ClunkvsInfr.)

How can we use evaluation expertise to help this time? The first step will be to recognize the problem here. I submit that the implementation design did not fit the need and the theory of change in 2008-09. In simple terms – there were too many ‘partners’ and intermediaries who needed to align to spend the cash. Multiple levels of government were involved, and diverse communities, various civil society stakeholders and the media were weighing in on almost each and every investment. By contrast – the US and then Canadian equivalents of cash for clunkers used straightforward authorities (i.e. the programs featured direct federal government funding to intermediaries) and motivated intermediaries who had a track record of speedy transaction promotion (i.e. car salesmen) to push federal stimulus money out the door at record levels. See http://tinyurl.com/ClunkvsInfr for a more detailed discussion and some admitted gloating about being right in my 2009 prediction that the Economic Action Plan infrastructure stimulus would not work.

Hopefully this time we can get our Canadian Federal Government to consider the linkage of the theory of change (i.e. the need to stimulate the economy through some transfers) with the implementation design. The evaluation and research community then needs to step up and synthesize the conditions and contexts that will enable stimulus programs, such as those being contemplated right now, to succeed in the political economy of Canada. We need to help fill the ‘gap’ that frequently exists between the policy theory and the implementation reality. In some of the latest thinking on the use of theories of change from Mayne (https://evaluationcanada.ca/useful-theory-change-models?f=n – see especially pages 120-122) the point is made that program theory should be used for much more than the design of evaluation or measurement regimes. When it includes concepts like whom one needs to reach and what early and intermediate changes need to occur, it can be used to directly inform policy and programming. Isn’t it time to get in front of the big decisions – so evaluators and evaluative thinking can help inform policies, programs and strategic decisions? For me that would be way more fun than toiling away on our studies in obscurity and/or continuously saying “I told you so” after the fact.

2015 in Review – Turmoil in the World – Transition for Planning, Monitoring and Evaluation?

2015 was one of some turmoil in world terms. It was also the international year of evaluation. For many of us in evaluation and performance monitoring practice in Canada however, there were actually some encouraging signs which have emerged this year. Here are five inter-related phenomena which bode well for performance planning, monitoring, evaluation and management going forward:

1)     This year’s Canadian Fall election brought a new government with a clear and direct interest in ‘real change’, transparency and evidence-based policy and programs. This tone at the top cannot be overstated – and the interest generated at all levels of public and related enterprise is palpable. A door has opened.

2)     The notion of using a results chain as a ‘useful’ basis for evaluation has emerged directly.  Most recently see Mayne, J. (2015). “Useful Theory of Change Models.” Canadian Journal of Program Evaluation 30(2): 119-142 and Mayne, J. and N. Johnson (2015). “Using Theories of Change in the Agriculture for Nutrition and Health CGIAR Research Program.” Evaluation 21(4): 407-428. These articles propose an approach we have been advocating for some time and create part of the conditions for real progress and maybe even transformation in our function.

3)     In item 2 above, the notion that results chains apply to the set-up and delivery arrangements for a program every bit as much as the actual execution of the program has helped many of us to explain how and why certain programs continue to fail or persist in ‘bending’ in implementation. There is “many a slip ‘twixt the cup and the lip”, as the old proverb goes. In many cases we can see that the delivery arrangements just don’t fit the value proposition (Theory of Change). For example, annual funding renewals and heightened renewal uncertainty in program areas requiring long-term, certain and steady funding wreak havoc on management and often create perverse effects. There are scores of other examples. For now suffice it to say that there is some strong potential for cumulative learning here. If you have time – consider what we said about the Economic Action Plan and whether it actually created economic stimulus through its infrastructure funding in 2009: https://www.pmn.net/wp-content/uploads/Why-Cash-for-Clunkers-Works.pdf. Hopefully we can avoid similar folly ahead.

4)     Following from the above – the notion that there can be archetypes of results logic or program theories for different types of programs is starting to ‘take’. We have seen that many funding programs operate on similar results logic and principles, as do many regulatory and information / advisory programs. With that in mind it stands to reason that there is much to be gained in learning what works (to what extent) with whom under what conditions and why – by starting from a synthesis of what we know about the key contextual factors that have allowed programs or policies showing similar patterns to work. This also allows us to develop appropriate monitoring, measurement and evaluation schemes, as well as to set appropriate benchmarks.

5)     Finally – points 1-4, along with our immensely advanced recent ability to communicate quickly and easily with one another, underscore the emerging view that monitoring and evaluation are part of a learning approach and that this works best as a team sport. Specialists and generalists, line managers and corporate reviewers, delivery agents and users / clients work best when they work together to understand the need, the relevance of a given policy or program or set of policies and programs to a given situation and context, how things are intended to work (and why we think they will), how they actually work in specific sets of conditions, with / for whom, why and what can be done about it.

Steve Montague Wins Contribution to Evaluation in Canada Award

At the 2015 Canadian Evaluation Society National Conference in Montreal the Contribution to Evaluation in Canada award was presented to PMN partner Steve Montague. This award puts Steve into an elite group of Canadian evaluators who have been recognized through the Contribution to Evaluation Award and have been made a Fellow of the Society. This latest award makes it the fourth time that Steve has been recognized for his contribution to Canadian evaluation. In addition to the National Contribution to Evaluation award in 2015, and the CES fellowship in 2011, in 2003 Steve was awarded the Karl Boudreault Award for Leadership in Evaluation by the CES National Capital Region and while in Government in the 1980s he won a team merit award for a technology center evaluation.

2014 Year End Message: The critical link between ‘what we want’ and ‘how we get there’

The critical link between ‘what we want’ and ‘how we get there’

S. Montague, December 2014

There’s many a slip ‘twixt the cup and the lip (Old English proverb)

A key theme emerging from our practice and workshops this year – in Canada, Europe and Australia – has been the need to understand the critical link between the results we want from an initiative and the ways and means we design, authorize and deliver that initiative. As suggested in presentations this year (see for example Does Your Implementation Fit Your Theory of Change?[1]), planning, review and management functions need to consider, categorize, research and then investigate the contexts and conditions that allow certain policy and program designs to work in given situations.

There are rewards from researching theories of change, theories of implementation – and laying them out before one starts measuring and assessing a program or policy. We have found benefits to include more relevant and streamlined reviews and evaluations, more practical plans, more valid measurement regimes and greater insight in reports. While these are well worth the effort in and of themselves, I believe that the most important benefit of an approach that lays out the basic theory of change and a linked theory of action or implementation is in fact better stakeholder engagement.

What is that you say? You are going to lay out elegant theories of a program that include both design and delivery elements, context and change theory and you are somehow going to engage regular people in the dialogue? (i.e. What? logic models are not just for nerds anymore?) The answer is yes – and it works. The latest evidence I would offer comes from the work of the students in our Carleton University Diploma in Public Policy and Evaluation this year.  They worked closely with clients in groups including a program assisting community social planning councils, an NGO educating people on how to be more participative citizens and a residential community group promoting the concept of ‘safe people’. In each case project teams participatively helped proponents and other stakeholders better understand what it was they were doing with whom and why – and then established both key success factors and means to measure progress and evaluate performance. These projects were collective learning journeys – and they were rewarding for all concerned. Better still – they have provided the basis for more generative learning going forward.

So part of the answer is to consider program theory in a participative fashion. The other part is to stop artificially separating so-called ‘impact’ studies and evaluations from so-called ‘process’ or formative evaluations (and while we are at it let’s stop creating separate fiefdoms for ‘performance audits’, SROI studies, ‘scorecard and dashboard analyses’, ‘quality reviews’ and other forms of performance reporting). The learning we have been gaining ‘on the ground’ suggests that processes (and even before processes, broader contextual factors, authorities and governance arrangements) profoundly affect ‘impact’ (i.e. the achievement of desired outcomes related to a policy or program objective). In much of our recent efforts we have been trying to look at performance in terms of systems and the interplay of differing actors. (See for example Telling the Healthy and Safe Food ‘Systems’ Story.) This has profound implications for how people see performance indicators, planning and reporting.

Given the above, a key part of transforming review functions into learning vehicles will be to recognize the linkages between processes and outcomes – or more directly – recognize the linkages and interplay among system stakeholders (program proponents being just one of them) in achieving objectives. This means looking at context and implementation characteristics (and the systems actors involved) as well as a chain of results for subjects, users and beneficiaries.

2015 is the international year of evaluation. Let us make it a goal to move evaluation beyond its current state in many organizations as a somewhat isolated review and accountability function, into a role where it can really promote strategic, tactical and operational learning. The insights gained by an approach which includes carefully researched analysis and synthesis of what we know has worked for whom under what conditions and why in the past, as well as through originally gathered evidence for the present can be proffered in a fashion which fundamentally engages all key stakeholders and provides for accountability while supporting collective learning for the future. Let’s help evaluation become a core management function and a main vehicle for evidence based policy and public administration.

 

 

 



[1] We got almost 200 people from across disciplines out on a Monday night in Melbourne for this session – proving the word ‘theory’ in the title is not an interest killer everywhere. (For a video of the event see https://www.youtube.com/watch?v=gKg6NCbD6KM&feature=youtu.be.)

2013-14 Key Events

European Union (September 17-18, 2013)

The Canadian Experience Master class with Steve Montague. Steve presented a Master class to the Group of Resource Directors (GDR) of the European Commission Secretariat General on lessons from the implementation of results-based management (RBM) by the Canadian Federal Government. The overall purpose was to inform and inspire the Secretariat General’s thinking on how to improve the Commission’s own performance management and accountability framework (Activity-Based Management and Strategic Planning and Programming) and, in this context, to assess whether some elements of the RBM could be tailored to meet the Commission’s needs while matching its institutional set-up. Emphasis was on creating a shared purpose and learning from others to overcome barriers.


Upcoming 2014 Events

January 16, 2014

“A Systems Approach to Performance Planning, Monitoring and Evaluation in Complex Regulatory Environments: The Case of CFIA” For more information and registration click here.

2013 Seventeenth Annual Performance and Planning Exchange Symposium (May 13-15, 2013)

Training Session – RBM 101

A one-day, intensive course on Results-Based Management (RBM). The course covered the RBM fundamentals – from differentiating between activities, outputs, outcomes and indicators to using results for managing and reporting. Through discussion, presentations, a case study and in-class exercises, this course provided participants with a solid understanding of RBM planning, performance measurement and reporting principles and how to apply them in their environment. Participants gained hands-on experience in building Logic Models / Outcomes Maps and Performance Measurement Frameworks.

Planning, Monitoring and Evaluating Program Impacts and ‘What works’ in Complex Environments: Emerging Canadian and International Practice

This session addressed the seemingly overwhelming challenges of determining public policy and program impacts by suggesting that a non-conventional approach needs to be taken. The presenters drew on guidance from Treasury Board Secretariat, international practice and leading practitioners to suggest and demonstrate alternative approaches to determining program impacts. Analysis of need, context, theories of change and alternative measurement and evaluation approaches were explored. The session was appropriate for all levels of managers and analysts who are required to plan, monitor, evaluate, report and/or manage in complicated or complex management and public policy environments. Some basic principles for establishing expectations, articulating policy or program value propositions, and then monitoring and evaluating initiatives in conditions of complexity were explored in both presentation and case-based work. Participants took away a set of key principles, some knowledge of emerging analytical tools and approaches, and tailored collective learning on how to tell an appropriate performance story for key areas of public policy and administration.

Canadian Evaluation Society National Annual Conference 2013 (June 10, 2013)

A presentation on What works? How evaluation can cross boundaries to influence public policy and management. Evaluation has languished behind other review efforts in influencing public policy and administration, despite the fact that it is designed to address fundamental questions. The session suggested that evaluation efforts need to adopt a realistic and pragmatic approach to help public and NGO decision makers learn and understand what works (to what extent), for whom, in what conditions and why. The session demonstrated and collectively examined what we know about different policy instruments (e.g. carrots, sticks and sermons) applied in different areas (e.g. industrial innovation, food safety, public health and energy) and delivered via different implementation designs (e.g. single-agency delivery, delivery partnerships, contributions to intermediaries). The demonstration not only illustrated findings which participants can use in their practice, but also showed an approach to using theory-based methods and evaluative research to learn generatively about the influence of policies and programs on results, and about the important factors to consider when planning and implementing initiatives.

World Bank (June 28-29, 2013)

A two-day training workshop on logic models in development evaluations at the World Bank–Carleton University International Program for Development Evaluation Training (IPDET) 2013. The workshop offered an opportunity to explore logic models in more detail. It took a practical, step-by-step approach to developing traditional program logic models and offered innovative strategies for depicting the complexity involved in many of today’s development initiatives. Participants worked on a series of individual and small group exercises, had ample time to ask questions, considered plenty of practical examples, and were encouraged to share their own experiences.

American Evaluation Association 2013 Conference (October 19, 2013)

A presentation on Realistic Contribution Analysis. Approaches such as realistic evaluation (Pawson and Tilley 1997) and theory-of-change-based approaches like contribution analysis (Mayne 1999) have typically been seen as distinctly different from each other. Furthermore, theory-based approaches have sometimes been distinguished as a separate and distinct grouping from participative approaches for complicated, complex and ‘small n’ evaluations (White & Phillips 2012). The presentation described the application of what might be called realist contribution analysis, where the context–mechanism–outcome (CMO) framework that represents the essence of realist evaluation has been built into a theory of change contribution analysis approach which is developed, validated and applied as a participative process. Examples illustrated how this approach has been applied in policy initiatives, regulatory functions, codes and standards applications and direct assistance programs. The approach was shown to be a critical front end for cost-effectively scoping evaluations, a valid means of framing study designs and, perhaps most importantly, a way to engage decision-makers in unique dialogue such that evaluation use is greatly enhanced.

Community of Federal Regulators Annual National Workshop 2013 (November 4, 2013)

Steve presented, together with CFIA department representatives, on the CFIA Performance Measurement Reform: A System-Based Approach. This presentation discussed the implementation of changes to the department’s organization and regulations. This “modernization” includes a more unified inspection approach which recognizes that there are many participants in the food safety system who influence the ultimate outcome of protecting Canadians from preventable health risks.

Performance and Planning Exchange (December 3, 2013)

A presentation on the 2010 Public Accounts Committee report on DPRs, providing a perspective on the needs of parliamentarians. Participants were given the tools to carry out the rating process, and had coaching throughout. Rating criteria were drawn from the Guide to Preparing Public Performance Reports developed by the Public Sector Accounting Board. This learning event was a stimulating exercise, providing a renewed appreciation for the art and science of performance reporting.


A systems approach in complex regulatory environments

The reduction or mitigation of risks or harms is a major priority of the Government of Canada in matters related to Canadians’ health, safety, security and environmental protection and even economic ‘harms’ such as fraud. While the goals are simple, the situations are usually complex, featuring multiple actors, dynamic environments and highly variable circumstances. For the full article by Steve Montague click here.

An upcoming PPX breakfast learning event planned for January 16, 2014 will further explore the ideas expressed in this article. For details and registration see http://www.ppx.ca/LearningEvents.shtml.

PMN Key Publications and Presentations in 2012

Theory-based Approaches presented at the CES Annual Learning Event

Presentation at Addictions Ontario Conference

Health Charities Coalition

Keynote Presentation to the Canadian Evaluation Society National Conference in Halifax

Reach And How It Can Be Used to Improve Theories of Change

Advocacy Evaluation Theory as a Tool for Strategic Conversation: A 25-Year Review of Tobacco Control Advocacy at the Canadian Cancer Society

Applications of Contribution Analysis to Outcome Planning and Impact Evaluation