A Time For Key Principles: What the UN has learned

S. Montague, December 2017

This year, in the circles we have been working in, there has been a strong emphasis on implementing the Canadian Results Policy of 2016. The policy adds a few new data-tracking (tagging) elements and some structural requirements that are slightly new – but otherwise it can be seen as similar to past managing-for-results and results-based management initiatives instigated by the Government of Canada and others over the past two decades (and in some senses much longer). The Policy essentially focuses on the idea that public administration – including the political levels – should attend to the results of policies and programs, rather than to input levels, activities and outputs. This is of course easier said than done anywhere – let alone in large bureaucracies.

Some of us have been involved in both consulting assignments and capacity-building efforts over the past year, which have provided a firsthand perspective on the system’s ability to ‘gear up’ to manage for results. Rather than list the issues, the gaps and the factors influencing them – which exist at many levels in the system – it might be more useful to look for inspiration in the results of related efforts to review and assess the state of play in the public administration ‘results’ movement.

The United Nations took on the idea of results-based management (RBM) around the turn of the twenty-first century. Canadians like Perrin and Mayne clearly influenced some of the early thinking, and their ideas helped shape some of the early guidance. A number of reviews of RBM have been conducted over the past decade and a half – all of them somewhat critical of the key element of the movement: the idea that RBM goes beyond the technical aspects of defining performance, measuring / monitoring / evaluating and reporting on results, to the point where management actually uses results and performance information to manage. Unfortunately, a recent survey of UN managers suggests that the vast majority did not see evidence-based decision making more than ‘occasionally’. Even worse – only a small fraction believed performance analysis was done ‘honestly’.

Having done assignments for two UN organizations over the past three years, and having taught a number of UN (and other) staff and managers at venues like IPDET over the past decade, I have some idea of what the problem has been. In some cases, the implementation of RBM focused on technical aspects like having indicators and making them Specific, Measurable, Achievable, Realistic and Time-driven (SMART). The problem is that this seems, in some cases, to have driven people to measure what is easy and controllable – not what is right and useful for management. Additionally, the ethos surrounding RBM has in some cases been strongly focused on accountability rather than learning and improvement. This seems to have driven out a spirit of continuous improvement in many cases and replaced it with fear of failure. Some would say this comes most naturally to bureaucracies – especially the ‘political’ or ‘socio-political’ bureaucracies of government organizations. They do not accept failure well, since to admit failure can cause voter support to drop – especially in competitive democratic systems – and administrators therefore work on denying failures. To deny failure is in turn to deny learning and improvement.

So how do we break this seemingly intractable cycle, documented by a succession of analysts and reviewers? It won’t be easy – but a recent UN study by its Joint Inspection Unit suggests a slightly different approach to RBM. It emphasizes vision, thinking about how and why results occur, and systems thinking – before discussing SMART indicators, monitoring and evaluation. I include an excerpt from the draft final report here.


Description of Principles

Vision and Goals

“If you do not know where you are going, any road will take you there.”

The long-term goals and the outcomes of the organization must drive all aspects of its work. Clarity in the vision and long-term goals allows an organization to define its means of influencing change given its mandate and other international conventions. This also provides a framework for assessing the readiness and capabilities of the organization to achieve its long-term goals. All aspects and levels of decision making need to consider the impact of decisions on the contribution of the organization to its long-term goals, or on its capacity to influence their achievement.

Causality and the results chain


“Change occurs from a cause and effect relationship and not from a sequential ordering of activities.”

Change requires an understanding of causal linkages. Achieving change and impact requires a hypothesis of how such change will occur. This means establishing logical (rather than merely sequential) linkages within a well-defined theory of how the change will happen. The typical levels of the linear change process in management are defined in terms of inputs, outputs, outcomes and impact. Managing the chain of results involves establishing accountabilities, as well as reciprocal obligations, at each of these levels (vertical accountability).

Systems operation and strategic management

“All hypotheses of cause and effect occur with margins of error, subject to the influence of factors external to the intervention”

Development does not operate in a controlled environment but in an open system. Change occurs within a systems framework, which is influenced positively or negatively by external factors arising from the environment or from the actions of other key stakeholders with the capacity to influence success. Thus, identifying, monitoring and managing the conditions for success – as well as the risk factors – within which the results chain is expected to operate is critical for success. This also highlights a responsibility to seek to influence external factors in favour of success.

Performance measurement


“If you cannot measure it, you cannot manage it”

Measurement involves quantitative and qualitative operational definitions of phenomena. This allows objectivity, transparency and mutual agreement among a range of stakeholders. It provides the basis for a contract agreement (accountability) about the performance that is expected (when indicators are defined in terms of quantity, quality and time dimensions, or in a SMART manner). The relevance and validity of performance indicators for contract agreement require stakeholder engagement.

Evaluation

“Hypotheses based on deductions of best practices and transfer of knowledge do not always have the effects anticipated.”

Given the uncertainties in achieving results, managing for results requires robust evidence and lessons learned from monitoring and evaluation in order to assess (i) progress towards results, (ii) the validity of the results chain and causal assumptions, and (iii) the contribution of the organization towards its long-term goals. This evidence and these lessons should inform adaptive management and decision making with a view to enhancing contribution to results.

Source: United Nations System-Wide Results-Based Management 
Analysis of Stage of Development and Outcomes Achieved in Managing 
For Achieving Results – Draft November 2017


For many of us, this signals a return to an emphasis on evaluative and deeper thinking about results. The inference is that the typical public-enterprise results story is complicated and complex, and requires an understanding of context, systems and a chain of results – including some theory or explanation of why the chain of results is expected to occur. Note that the principles also ask for consideration of the assumptions and factors affecting results (Principle 3 above). If we take this kind of thinking seriously, the implications for all of us are potentially profound – from who should truly be ‘leading’ the development and refinement of results stories, to how performance profiles are constructed, to what should be included in Departmental Performance Reports, to how monitoring and measurement should be done. It should also cause us to pause and reflect on what competence and capacity in RBM look like. These principles suggest that it should, in essence, look for the development of key mental models. It likely means a rethink not just of our epistemologies for RBM – but of our fundamental ethos and logos.

If ethos and logos aren’t your thing – and you want to make one single change based on this work – then consider this: the ‘R’ in UN Principle 4 for RBM stands for relevant. Even that small change in orientation may help.

All the best to all of us attempting to promote evidence use in management, results based management and evaluative thinking in the new year!