Just out: 2019 Special Edition of the Canadian Journal of Program Evaluation

The new special edition of the Canadian Journal of Program Evaluation highlights implementation ‘fit’ in theories of change, as well as actor-based results logic (and theory).

Check out both articles here on the Canadian Evaluation Society website.

 

Fall 2018 Updates

1. In the works – a knowledge take-up and use performance measurement tool. For an update, ask us.

 

2. The Fall 2018 European Evaluation Society conference in Thessaloniki, Greece: great location, great content. For more on the content, ask us.

 

3. Can you meaningfully target results in public administration? (Hint: yes you can – see the response to ‘Does Deliverology Matter?’)

A Time For Key Principles: What the UN has learned

S. Montague, December 2017

This year, in the circles we have been working in, there has been a strong emphasis on implementing the Canadian Results Policy of 2016. The policy adds a few new data tracking (tagging) elements and some slightly new structural requirements – but otherwise it can be seen as similar to the managing-for-results and results-based management initiatives instigated by the Government of Canada and others over the past two decades (and in some senses much longer). The Policy essentially focusses on the idea that public administration – including the political levels – should concentrate on the results of policies and programs, rather than on input levels, activities and outputs. This is of course easier said than done anywhere – let alone in large bureaucracies.

Some of us have been involved in consulting assignments and capacity-building efforts over the past year that have provided a firsthand perspective on the issues affecting the system’s ability to ‘gear up’ to manage for results. Rather than list the issues, the gaps and the factors influencing them – which exist at many levels in the system – it might be useful to look for inspiration in the results of other related efforts to review and assess the state of play of a public administration ‘results’ movement.

The United Nations took on the idea of results-based management (RBM) around the turn of the twenty-first century. Canadians like Perrin and Mayne clearly influenced some of the early thinking, and their ideas helped shape some of the early guidance. A number of reviews of RBM have been conducted over the past decade and a half – all of which have been somewhat critical regarding the key element of the movement: the idea that RBM goes beyond the technical aspects of defining performance, measuring/monitoring/evaluating and reporting on results, to the point where management actually uses results and performance information to manage. Unfortunately, a recent survey of UN managers suggests that the vast majority did not see evidence-based decision making more than ‘occasionally’. Even worse, only a small fraction believed performance analysis was done ‘honestly’.

Having done assignments for two UN organizations over the past three years, and having taught a number of UN (and other) staff and managers at venues like IPDET over the past decade, I have some idea of what the problem has been. It seems that in some cases the implementation of RBM focused on technical aspects like having indicators and making them Specific, Measurable, Achievable, Realistic and Time-driven (SMART). The problem is that this seems, in some cases, to have driven people to measure what is easy and controllable – not what is right and useful for management. Additionally, the ethos surrounding RBM has in some cases been strongly focused on accountability rather than learning and improvement. This seems to have driven out a spirit of continuous improvement in many cases and replaced it with a fear of failure. Some would say that this is natural to bureaucracies – especially the ‘political’ or ‘socio-political’ bureaucracies of government organizations. They do not accept failure well – since to admit failure can cause voter support to drop, especially in competitive democratic systems – and therefore administrators work on denying failures. To deny failure is in turn to deny learning and improvement.

So how do we break this seemingly intractable cycle documented by a succession of analysts and reviewers? It won’t be easy – but a recent UN study by its Joint Inspection Unit suggests a slightly different approach to RBM. It emphasizes vision, thinking about how and why results occur, and systems thinking – before discussing SMART indicators, monitoring and evaluation. I include an excerpt – the five key principles – from the draft final report here.

Principle 1 – Vision and Goals

“If you do not know where you are going, any road will take you there.”

The long-term goals and outcomes of the organization must drive all aspects of its work. Clarity in the vision and long-term goals allows an organization to define its means of influencing change given its mandate and other international conventions. It also provides a framework for assessing the readiness and capabilities of the organization to achieve its long-term goals. All aspects and levels of decision making need to consider the impact of decisions on the contribution of the organization to its long-term goals, or on its capacity to influence their achievement.

Principle 2 – Causality and the results chain

“Change occurs from a cause and effect relationship and not from a sequential ordering of activities.”

Change requires an understanding of causal linkages. Achieving change and impact requires making a hypothesis of how such change would occur. This requires establishing logical (rather than sequential) linkages within a well-defined theory of how the change will happen. The typical levels of the linear change process in management are defined in terms of input, output, outcome and impact. Managing the chain of results involves establishing accountabilities, as well as reciprocal obligations, at each of these levels (vertical accountability).

Principle 3 – Systems operation and strategic management

“All hypotheses of cause and effect occur with margins of error, subject to the influence of factors external to the intervention.”

Development does not operate in a controlled environment but in an open system. Change occurs within a systems framework, which is influenced positively or negatively by external factors arising from the environment or from the actions of other key stakeholders that have the capacity to influence success. Thus identifying, monitoring and managing the conditions for success, as well as the risk factors under which the results chain is expected to occur, is critical for success. This also highlights a responsibility to seek to influence external factors to favour success.

Principle 4 – Performance measurement

“If you cannot measure it, you cannot manage it.”

Measurement involves the quantitative and qualitative operational definition of phenomena. This allows for objectivity, transparency and mutual agreement among a range of stakeholders. It provides the basis for a contract agreement (accountability) about the performance that is expected (when indicators are defined in terms of quantity, quality and time dimensions, or in a SMART manner). The relevance and validity of performance indicators for contract agreement require stakeholder engagement.

Principle 5 – Monitoring and Evaluation

“Hypotheses based on deductions of best practices and transfer of knowledge do not always have the effects anticipated.”

Given the uncertainties in achieving results, managing for results requires robust evidence and lessons learned from monitoring and evaluation to ensure (i) progress towards results, (ii) the validity of the results chain and causal assumptions, and (iii) the contribution of the organization towards long-term goals. This evidence and these lessons learned should inform adaptive management and decision making with a view to enhancing the contribution to results.

Source: United Nations System-Wide Results-Based Management: Analysis of Stage of Development and Outcomes Achieved in Managing for Achieving Results – Draft, November 2017

 

For many of us, this signals a return to an emphasis on evaluative and deeper thinking about results. The inference is that the typical public enterprise results story is complicated and complex, and requires an understanding of context, systems and a chain of results – including some theory or explanation of why the chain of results is expected to occur. Note that the principles also ask for consideration of the assumptions and factors affecting results (Principle 3 above). If we take this kind of thinking seriously, the implications for all of us are potentially profound – from who should truly be ‘leading’ the development and refinement of results stories, to how performance profiles are constructed, to what should be included in Departmental Performance Reports, to how monitoring and measurement should be done. It should also cause us to pause and reflect on what competence and capacity in RBM look like. These principles suggest that it should, in essence, involve the development of key mental models. It likely means a rethink of not just our epistemologies for RBM – but our fundamental ethos and logos.

If ethos and logos aren’t your thing – and you want to make one single change based on this work – then consider this: the ‘R’ in the SMART indicators of UN Principle 4 stands for relevant (not realistic). Even that small change in orientation may help.

All the best to all of us attempting to promote evidence use in management, results based management and evaluative thinking in the new year!

2016 in Review – and we thought last year was in turmoil!

I guess I am an analyst at heart. I always find it interesting to look back at what one observed at a certain point in time and then to see what has happened since. This year I am going to do that explicitly by revisiting the observations we made about last year – which were really sort of admonitions about the future – and then reporting on my perception of the community’s progress since then.

1. New Government – Emphasis on Results

2015 Observation: This year’s Canadian Fall election brought a new government with a clear and direct interest in ‘real change’, transparency and evidence-based policy and programs. This tone at the top cannot be overstated – and the interest generated at all levels of public and related enterprise is palpable. A door has opened.

2016 Update: There has been significant activity on this file since the new Canadian Government came into power. We may, however, be facing a situation of rushing in some areas where we really need to think things through. The results and delivery methodology introduced to the Federal Government in February 2016 arguably needs a fair bit of adjustment to suit Canadian Federal Government circumstances. Using evidence to support decision-making means much more than developing more scorecards or simple-minded delivery plans. After all, people have been trying to get this right for decades – world-wide. The good news is that Canada may have some of the answers in its results logic based approach – especially for the hard-to-measure areas. What we need to do is make sure all relevant wisdom is brought to the table – even if this causes some minor delays in implementation. (See the Community of Practice discussion below.)

2.  Results Chain as a ‘Useful’ Basis for Measurement and Evaluation

2015 Observation:  The notion of using a results chain as a ‘useful’ basis for evaluation has emerged directly.  Most recently see Mayne, J. (2015). “Useful Theory of Change Models.” Canadian Journal of Program Evaluation 30(2): 119-142 and Mayne, J. and N. Johnson (2015). “Using Theories of Change in the Agriculture for Nutrition and Health CGIAR Research Program.” Evaluation 21(4): 407-428. These articles propose an approach we have been advocating for some time and create part of the conditions for real progress and maybe even transformation in our function.

2016 Update: Experience continues to suggest that a results chain or results logic approach is a key to getting performance planning, measurement and evaluation ‘right’. At PPX we started a community of practice for science-related areas (http://ppx.ca/en/exchange-communities-practice/), and it seems clear that the thoughtful, results logic-based approach developed across several Canadian provinces and some federal agencies over the past seven years (and de facto longer) is proving useful in helping to plan, analyze, report on and manage science-based policies, programs and initiatives. There appears to be potential in a number of other areas as well.

3.  Program Delivery Arrangements Affect Results

2015 Observation: In item 2 above, the notion that results chains apply to the set-up and delivery arrangements for a program every bit as much as to the actual execution of the program has helped many of us explain how and why certain programs continue to fail or persist in ‘bending’ in implementation. There is “many a slip twixt cup and lip”, as the old proverb goes. In many cases we can see that the delivery arrangements just don’t fit the value proposition (theory of change). For example, annual funding renewals and heightened renewal uncertainty in program areas requiring long-term, certain and steady funding wreak havoc on management and often create perverse effects. There are scores of other examples. For now, suffice to say that there is some strong potential for cumulative learning here. If you have time, consider what we said about the Economic Action Plan and whether it actually created economic stimulus through its infrastructure funding in 2009: http://www.pmn.net/wp-content/uploads/Why-Cash-for-Clunkers-Works.pdf. Hopefully we can avoid similar folly ahead.

2016 Update: Last year we used an old example regarding infrastructure spending under the Economic Action Plan (though it may be relevant again) to show how faulty implementation negated the benefit of a stimulus initiative. This year I have used a case example on Shared Services, derived from a newspaper investigative analysis, to suggest the same thing. See http://ottawacitizen.com/news/national/built-to-crash-the-ugly-sputtering-beginning-of-shared-services-and-how-politics-conspired-against-it. (Email me if you would like to see my presentation on this.)

Going forward, we will need to augment results and delivery plans with a recognition that the delivery governance, machinery and design have a ‘theory’ of their own (with their own critical success factors), and that we should include consideration of this in any analysis of performance. The new climate change agreement signed by federal and provincial actors is a case in point. Given its flexibility, it is likely to represent an excellent learning opportunity to analyze governance, design and implementation, as well as the relative merits of carbon taxation vs. cap and trade vs. regulatory or voluntary approaches. The fact is that groups such as the Auditor General of Ontario are already second-guessing the results logic in so many words. (See http://www.auditor.on.ca/en/content/annualreports/arreports/en16/v1_302en16.pdf. For many of us the review goes far beyond the bounds of what constitutes an audit.) The report does this without the benefit of what might have been a fuller examination of historical comparative evidence on the various elements of the implementation and theory of change in effect for Ontario’s cap and trade system. In my view we need to do such analyses in a more systematic and structured fashion – as evaluations.

4. Archetypal Program Theories Inform Analysis

2015 Observation: Following from the above, the notion that there can be archetypes of results logic or program theories for different types of programs is starting to ‘take’. We have seen that many funding programs operate on similar results logic and principles, as do many regulatory and information/advisory programs. With that in mind, it stands to reason that there is much to be gained in learning what works (to what extent) with whom, under what conditions and why – by starting from a synthesis of what we know about the key contextual factors that have allowed programs or policies showing similar patterns to work. This also allows us to develop appropriate monitoring, measurement and evaluation schemes, as well as to set appropriate benchmarks.

2016 Update: Stay tuned re: the work of our science community of practice, as well as other work we have been doing vis-à-vis regulatory, corporate, policy and other ‘carrot, stick, sermon’ policy instrument types. We are starting to see an extensive amount of common ground across groups using similar archetypes. The results may be transformational in terms of being able to cost-effectively review performance.

5.  Monitoring and Evaluation – a Collective Learning Approach

2015 Observation: Finally, points 1-4, along with our immensely advanced recent ability to communicate quickly and easily with one another, underscore the emerging view that monitoring and evaluation are part of a learning approach and that this works best as a team sport. Specialists and generalists, line managers and corporate reviewers, delivery agents and users/clients work best when they work together to understand the need; the relevance of a given policy or program (or set of policies and programs) to a given situation and context; how things are intended to work (and why we think they will); how they actually work in specific sets of conditions, with/for whom and why; and what can be done about it.

2016 Update: The team sport aspect of review work has begun to take hold in more areas over the past year. In our practice it is now standard to hold workshops on results logic among key stakeholders at the beginning, during and sometimes after major PM+E projects and exercises. The question of “What works with/for whom (to what extent), in what conditions and why?” is emerging as a key to engaging diverse groups in a collective learning journey which can complement or sometimes even replace the adversarial ‘accountability’ modus operandi typical of conventional audit and many cost-cutting review approaches. (The realist approach, which focusses on this question, was probably one of the two most prominent approaches showcased at the European Evaluation Society conference in October.) At the end of the day, our experience has been that we can improve the acceptance of negative findings if we have engaged key groups on a collective learning journey (and have substantively reviewed evidence against the results logic of the delivery arrangements and the enabling environment, as well as outcomes).

So there has been some evolution on the themes expressed last year – however, for me they all still ring true, some with more urgency than others. We look forward to working with many of you to help advance the file in these exciting and yet extremely challenging times in Canada and in our world.

 

 

How a Configurational Contextual Model ‘Trumped’ the Conventional Forecasters: Four I’s Proved to be Powerful Predictors in the 2016 Presidential Election

by Steve Montague

Groups like Pollyvote and Real Clear have been doing a fair bit of ex post diagnosis of which models and approaches did worse than others in failing to predict Donald Trump’s victory in the 2016 US Presidential election. This article will not rehash those arguments, other than to note that all sources have had to admit that what they have called a ‘configurational’ or ‘threshold’ model developed by a history professor named Lichtman outperformed the more elegant statistical and econometric models on offer.[1]

This has to be seen as a bit of a coup for analysts and evaluators who believe that causation can be thought of as the product of contextual factors playing on any mechanism or set of mechanisms at a given period of time.

The Lichtman Keys can be represented as statements that favor victory for the incumbent party. According to the model, when five or fewer statements are false, the incumbent party is predicted to win the popular vote; when six or more are false, the challenging party is predicted to win the popular vote.

  1. Party Mandate: After the midterm elections, the incumbent party holds more seats in the U.S. House of Representatives than after the previous midterm elections.
  2. Contest: There is no serious contest for the incumbent party nomination.
  3. Incumbency: The incumbent party candidate is the sitting president.
  4. Third party: There is no significant third party or independent campaign.
  5. Short term economy: The economy is not in recession during the election campaign.
  6. Long term economy: Real per capita economic growth during the term equals or exceeds mean growth during the previous two terms.
  7. Policy change: The incumbent administration effects major changes in national policy.
  8. Social unrest: There is no sustained social unrest during the term.
  9. Scandal: The incumbent administration is untainted by major scandal.
  10. Foreign/military failure: The incumbent administration suffers no major failure in foreign or military affairs.
  11. Foreign/military success: The incumbent administration achieves a major success in foreign or military affairs.
  12. Incumbent charisma: The incumbent party candidate is charismatic or a national hero.
  13. Challenger charisma: The challenging party candidate is not charismatic or a national hero.

From these, Lichtman constructs what amounts to a ‘truth table’ (see Qualitative Comparative Analysis for an explanation of truth tables) – essentially a set of true or false (1, 0) ratings, along with a certainty level, for each of the 13 factors. The evolution of these ratings over time leading up to the November 8, 2016 US election is a matter of public record. Lichtman famously predicted a Trump victory as early as September.
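To make the decision rule concrete, here is a minimal sketch in Python. Only the six-false-keys threshold comes from the model as described above; the short key names and the example ratings are illustrative assumptions of mine, not Lichtman’s published 2016 scoring.

```python
# A minimal sketch of the Lichtman "Keys" threshold rule described above.
# Key names and the example ratings are illustrative assumptions only.

KEYS = [
    "party_mandate", "contest", "incumbency", "third_party",
    "short_term_economy", "long_term_economy", "policy_change",
    "social_unrest", "scandal", "foreign_military_failure",
    "foreign_military_success", "incumbent_charisma", "challenger_charisma",
]

def predict_popular_vote(ratings):
    """ratings maps each key to 1 (true, favours the incumbent party) or 0 (false).
    Six or more false keys predicts a popular-vote win for the challenging party;
    five or fewer predicts the incumbent party."""
    false_count = sum(1 for key in KEYS if not ratings[key])
    return "challenging party" if false_count >= 6 else "incumbent party"

# Hypothetical truth-table row purely for illustration (6 keys false).
example = dict(zip(KEYS, [0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1]))
print(predict_popular_vote(example))  # -> challenging party
```

Scored this way, each election becomes a single row in a truth table of the kind used in Qualitative Comparative Analysis – which is what makes the model configurational rather than statistical.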

What may be less obvious is the way that the Lichtman factors seem to cover what Pawson has called the four I’s of context: Infrastructural considerations, Institutional considerations, Inter-relational considerations and Individual considerations.[2] Thinking about these categorizations is useful here because there has been a tendency to simplify Lichtman’s findings into concluding that the fate of an incumbent administration is completely ‘up to them’.[3] When you look at the four I’s categories, you can see that each of the Lichtman ‘keys’ fits into at least one of them. See below:

1: Party Mandate – Infrastructural
2: Contested Nomination – Institutional
3: Incumbent Status – Infrastructural
4: Third Party Challenge – Infrastructural
5: Short-term Economy – Infrastructural
6: Long-term Economy – Infrastructural
7: National Policy Achievement/Shift – Institutional
8: Social Unrest – Infrastructural
9: Scandal – Inter-relational / Individual
10: Foreign Policy Defeats – Inter-relational
11: Foreign Policy Success – Inter-relational
12: Incumbent Charisma – Individual
13: Challenger Charisma – Individual

 

These ratings are mine, and certainly some could be disputed or rendered into more than one category. The important thing, though, is that the range of contextual factors goes from the broad socio-economic and political (products of history and broad circumstance), through decisions largely made by key institutional groups (like national policy shifts), through inter-relational factors (like foreign policy wins or defeats), to individual characteristics – like the charisma of the incumbent and challenger leaders. So in fact, many of the factors influencing the selection of a president are out of the hands of either the incumbent or the challenger.
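To make the levels-of-context point concrete, here is a small sketch that groups the keys by the four I’s using the mapping in the table above. The dictionary simply transcribes that table (including my own judgement calls on categories); nothing here is new data.

```python
# Group the 13 Lichtman keys by Pawson's four I's, using the mapping in the
# table above (the category labels are the author's judgement calls).
from collections import defaultdict

FOUR_IS = {
    "Party Mandate": "Infrastructural",
    "Contested Nomination": "Institutional",
    "Incumbent Status": "Infrastructural",
    "Third Party Challenge": "Infrastructural",
    "Short-term Economy": "Infrastructural",
    "Long-term Economy": "Infrastructural",
    "National Policy Achievement/Shift": "Institutional",
    "Social Unrest": "Infrastructural",
    "Scandal": "Inter-relational / Individual",
    "Foreign Policy Defeats": "Inter-relational",
    "Foreign Policy Success": "Inter-relational",
    "Incumbent Charisma": "Individual",
    "Challenger Charisma": "Individual",
}

by_level = defaultdict(list)
for key, level in FOUR_IS.items():
    by_level[level].append(key)

for level, keys in sorted(by_level.items()):
    print(f"{level} ({len(keys)}): {', '.join(keys)}")
```

Grouping the keys this way shows at a glance how the thirteen factors distribute across the contextual levels – which sets up the strategy point that follows.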

When it comes to strategy, then, the idea is to focus on the things you can most easily influence. If you are the challenger, the implication is to focus on the factors most easily interpreted as associated with a person and their relationships. Note that some of the main messages of the Trump campaign focused on factor 9, scandal (Clinton’s use of a non-government-sanctioned email account); factor 10, foreign policy (trade deals being ‘horrible’, in addition to being soft on terror, etc.); and factor 12, incumbent charisma/character (characterized by Trump as ‘Crooked Hillary’, along with the accusation that she was somehow ‘low energy’).

So the Lichtman model prediction success should suggest two important things for analysts, researchers and evaluators:

  1. Context is critical to understanding outcomes; and,
2. A configurational ‘cause-effect’ model (possibly ‘sorted’ by the four I’s as shown here, or by some other kind of contextual levelling ranging from the broad to the specific) can help to explain (and predict) results.

In our recent work we have been developing what might be called checklists of factors that affect success in policy and program implementation. For a related article see https://www.pmn.net/wp-content/uploads/Checklist-for-Context-and-Policy-Instruments.pdf. This latest development appears to validate the further pursuit of such configurational approaches in various fields of application.

Steve Montague (steve.montague@pmn.net) is a partner with PMN, a Fellow of the Canadian Evaluation Society and an adjunct professor at Carleton University in Ottawa, Canada.

[1] Lichtman, Allan J. (2008). The Keys to the White House: A Surefire Guide to Predicting the Next President (2008 ed.). New York, NY: Rowman and Littlefield Publishers. The approach has predicted every presidential election since 1984.

[2] Pawson, R. and Tilley, N. (1997). Realistic Evaluation. Sage. https://us.sagepub.com/en-us/nam/realistic-evaluation/book205276. Pawson outlines a “four I’s” framework as follows: Infrastructure (the wider social, economic and cultural setting of a program/intervention); Institutional setting (the characteristics of the institution involved); Interpersonal relations (the nature and history of key relationships); and Individuals (the characteristics and capacities of stakeholders).

[3] Lichtman himself says this as follows: “The principal historical lesson to be drawn from the Keys is that the fate of an incumbent administration rests largely in its own hands; there is little that the challenging party can do to affect the outcome of an election.” Lichtman, Allan J., The Keys to the White House, op. cit.

Impact Pathways for Science Initiatives Released

A well-received report describing science impact pathways was recently noted by an NRC official at the November 7th Science-based Organization public forum. The report, Study of Large Scale Research Infrastructure Impact Assessment, was co-authored by PMN partner Steve Montague and associate Gretchen Jordan. The full report has just been released for public access. Email info@pmn.net to receive a copy of the full report.

Presentation by Steve Montague and Bridgette Dillon at the 12th European Evaluation Society Biennial Conference 2016

The presentation by Steve Montague and Bridgette Dillon on Developing Useful Programme Theories for Complex Interventions wove together the perspectives of an experienced commissioner of evaluations and an experienced practitioner leading evaluation study teams. Audience members were encouraged to participate interactively with the presenters at key points throughout the session. For the full presentation click here: Developing Useful Programme Theories for Complex Interventions.

 

Infrastructure Spending Stimulus: Good in Economic Theory – A Fail in Political Economy Practice?

For every complex problem, there is a solution that is simple, neat, and wrong.
– HL Mencken

‘Canada’s national newspaper’, the Globe and Mail, on January 16th 2016 (Folio infrastructure, A8) quoted a ‘simple’ Finance Canada table showing that for every dollar spent on infrastructure, economic growth increases by 1.5 dollars. The article quotes the Department as concluding that “infrastructure spending is the most effective form of stimulus compared to the other options, including more generous Employment Insurance benefits or tax cuts.” Private sector economists apparently agree that infrastructure spending stimulus would be significantly positive – boosting economic growth by an estimated half a percentage point.
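As a rough back-of-envelope reconciliation of those two figures, consider the sketch below. The 1.5 multiplier is taken from the article; the annual spending figure and the roughly $2-trillion GDP base are my own illustrative assumptions, not numbers from the source.

```python
# Back-of-envelope illustration of the multiplier claim above.
# The multiplier (1.5) is from the article; the spending and GDP figures
# are illustrative assumptions only.

multiplier = 1.5          # dollars of GDP per dollar of infrastructure spending
annual_spending_b = 6.0   # assumed annual infrastructure stimulus, $ billions
gdp_b = 2000.0            # assumed Canadian nominal GDP, $ billions

boost_pct = multiplier * annual_spending_b / gdp_b * 100
print(f"Implied boost to annual GDP: about {boost_pct:.2f} percentage points")
# With these assumptions the result is ~0.45 points - in the neighbourhood of
# the half a percentage point cited by private sector economists.
```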

The problem is that we have been here before, and the evidence suggests that there is a fly in the ointment. In late 2010, Tim Kiladze of Canada’s national newspaper noted that the Economic Action Plan funding put in place by the Government of Canada in 2009 after the 2008 recession – mostly for so-called ‘shovel ready’ infrastructure – was spent too late to be effective. In fact, some analysts he quoted in 2010 suggested that some infrastructure funding in 2009 was actually delayed while provinces and regions waited to see whether their projects qualified for federal assistance. See http://tinyurl.com/Infrbust. So the Economic Action Plan infrastructure spending of 2009-10 appears to have potentially de-stimulated the economy in some areas during the Canadian economy’s time of greatest stimulus need! (Systems thinkers call this a classic ‘fix that fails’.) The real problem, from my perspective, is that some of us predicted this. (See http://tinyurl.com/ClunkvsInfr.)

How can we use evaluation expertise to help this time? The first step will be to recognize the problem. I submit that the implementation design did not fit the need and the theory of change in 2008-09. In simple terms, there were too many ‘partners’ and intermediaries who needed to align to spend the cash. Multiple levels of government were involved, and diverse communities, various civil society stakeholders and the media were weighing in on almost each and every investment. By contrast, the US and then Canadian equivalents of ‘cash for clunkers’ used straightforward authorities (the programs featured direct federal government funding to intermediaries) and motivated intermediaries with a track record of speedy transaction promotion (i.e. car salesmen) to push federal stimulus money out the door at record levels. See http://tinyurl.com/ClunkvsInfr for a more detailed discussion and some admitted gloating about being right in my 2009 prediction that the Economic Action Plan infrastructure stimulus would not work.

Hopefully this time we can get our Canadian Federal Government to consider the linkage of the theory of change (i.e. the need to stimulate the economy through some transfers) with the implementation design. The evaluation and research community then needs to step up and synthesize the conditions and contexts that will enable stimulus programs, such as those being contemplated right now, to succeed in the political economy of Canada. We need to help fill the ‘gap’ that frequently exists between policy theory and implementation reality. In some of the latest thinking on the use of theories of change from Mayne (https://evaluationcanada.ca/useful-theory-change-models?f=n – see especially pages 120-122) the point is made that program theory should be used for much more than the design of evaluation or measurement regimes. When it includes concepts like who one needs to reach and what early and intermediate changes need to occur, it can be used to directly inform policy and programming. Isn’t it time to get in front of the big decisions – so evaluators and evaluative thinking can help inform policies, programs and strategic decisions? For me that would be way more fun than toiling away on our studies in obscurity and/or continuously saying “I told you so” after the fact.

2015 in Review – Turmoil in the World – Transition for Planning, Monitoring and Evaluation?

2015 was a year of some turmoil in world terms. It was also the International Year of Evaluation. For many of us in evaluation and performance monitoring practice in Canada, however, there were some encouraging signs this year. Here are five inter-related phenomena which bode well for performance planning, monitoring, evaluation and management going forward:

1)     This year’s Canadian Fall election brought a new government with a clear and direct interest in ‘real change’, transparency and evidence-based policy and programs. This tone at the top cannot be overstated – and the interest generated at all levels of public and related enterprise is palpable. A door has opened.

2)     The notion of using a results chain as a ‘useful’ basis for evaluation has emerged directly.  Most recently see Mayne, J. (2015). “Useful Theory of Change Models.” Canadian Journal of Program Evaluation 30(2): 119-142 and Mayne, J. and N. Johnson (2015). “Using Theories of Change in the Agriculture for Nutrition and Health CGIAR Research Program.” Evaluation 21(4): 407-428. These articles propose an approach we have been advocating for some time and create part of the conditions for real progress and maybe even transformation in our function.

3)     In item 2 above, the notion that results chains apply to the set-up and delivery arrangements for a program every bit as much as to the actual execution of the program has helped many of us explain how and why certain programs continue to fail or persist in ‘bending’ in implementation. There is “many a slip twixt cup and lip”, as the old proverb goes. In many cases we can see that the delivery arrangements just don’t fit the value proposition (theory of change). For example, annual funding renewals and heightened renewal uncertainty in program areas requiring long-term, certain and steady funding wreak havoc on management and often create perverse effects. There are scores of other examples. For now, suffice to say that there is some strong potential for cumulative learning here. If you have time, consider what we said about the Economic Action Plan and whether it actually created economic stimulus through its infrastructure funding in 2009: https://www.pmn.net/wp-content/uploads/Why-Cash-for-Clunkers-Works.pdf. Hopefully we can avoid similar folly ahead.

4)     Following from the above, the notion that there can be archetypes of results logic or program theories for different types of programs is starting to ‘take’. We have seen that many funding programs operate on similar results logic and principles, as do many regulatory and information/advisory programs. With that in mind, it stands to reason that there is much to be gained in learning what works (to what extent) with whom, under what conditions and why – by starting from a synthesis of what we know about the key contextual factors that have allowed programs or policies showing similar patterns to work. This also allows us to develop appropriate monitoring, measurement and evaluation schemes, as well as to set appropriate benchmarks.

5)     Finally, points 1-4, along with our immensely advanced recent ability to communicate quickly and easily with one another, underscore the emerging view that monitoring and evaluation are part of a learning approach and that this works best as a team sport. Specialists and generalists, line managers and corporate reviewers, delivery agents and users/clients work best when they work together to understand the need; the relevance of a given policy or program (or set of policies and programs) to a given situation and context; how things are intended to work (and why we think they will); how they actually work in specific sets of conditions, with/for whom and why; and what can be done about it.

Results Chain Approach Validated by Scottish Paper on Research Uptake and Impact

The results chain approach and content found in most of the articles published on the PMN web site – dating back decades (see https://www.pmn.net/wp-content/uploads/PMN-Results.pdf for a retrospective results chain illustration of the acceptance of reach and results chains, as contributed to by PMN work) – continue to gain acceptance. Sarah Morton of the University of Edinburgh presents what she calls a ‘research grounded contribution framework’, using a model that S. Montague and PMN have been using for some time (see the link above), as evidenced by the references in the PMN library.

For a link to a free PDF of the Morton study see: Morton, S. (2015). “Progressing research impact assessment: A ‘contributions’ approach.” Research Evaluation, available at http://rev.oxfordjournals.org/content/early/2015/08/14/reseval.rvv016.full.pdf+html – check out Figure 1 and then Tables 1-3. They should look familiar!