Reach, Engagement, Trust and Influence Networks (RETaIN) as core results framing for a future world: Valuing the relational over the transactional to improve system resilience and sustained impact
S. Montague April 14, 2025
Events in all domains (health, environment, economics, culture and society) over the past decades have shown that framing results in terms of transactions – whether outputs or outcomes – has unduly favoured short-term, short-sighted compliance, efficiency, productivity and growth. This has produced unstable, fragile systems that are blind to their effects on specific groups and people, and that therefore exclude consideration of rights holders. Because such framing tends to focus on aggregates and averages, it plays to the interests of powerful key actors and creates large disparities in impact. Not only has this skewed power dynamics towards colonizers and oppressors; it has also proven susceptible to emergent shocks and changes, and has therefore reduced resiliency in systems and society.
On the other hand, results framing that favours – and essentially features – true relationship and trust network building can build in direct consideration of the reach, engagement and reciprocal trust networks of policies, programs and interventions. This produces longer-term, sustained impacts. The problem is that tradition, inherent interests in institutional structures and practical feasibility issues (it has seemed easier to count transactions than relationship 'events') have militated against the inclusion of reach, engagement and trust building (and maintaining) in planning, monitoring/measurement framing, assessment and reporting.
Authors like Montague and Porteous (2013) and Mayne (2015) – building on their own work as far back as the 1990s[1] and on thought leaders like Chen (2005) and others – have suggested that reach and engagement need a more prominent spot in the laying out of results logic and evaluation planning. Complexity authors (Finegood 2021) and others have noted the need to emphasize the relational over the transactional in measurement. The corollary is that there also need to be measures related to the concepts of reach and engagement – not just rhetoric. Valid measurement in this area has proved problematic in times past. Often transactional measures – like counts of agreements – have stood in for relational ideas such as constructive relationships and trust networks. This can be deceiving. A simple way to look at this is to consider whether the number of Memoranda of Understanding, agreements or contracts signed in a given area has really represented the development of trusting relationships. Examples abound in environmental, health, indigenous reconciliation, economic and trade relations where agreements have not in the end proven to be reliable trust bonds – due to coercion, misinterpretation, disingenuity on the part of some participants, or simple lack of commitment. Furthermore, simple counts of actions taken or transfers of resources and information would also seem to require further examination before pronouncing on the transactions as representative of positive influence.
In summary, what gets measured not only gets done – it gets valued. So if you measure numbers of agreements, attendees, registrants, dollars transferred, downloads or information products ‘shared’ – these become the goals. When volume becomes the goal – volume is what you get – often at the expense of quality or value. Information products shared or events conducted can be gamed by the producing agent or agents. Number of information products sent or downloads can be parsed to increase the volume. Resource transfers can be made because budgeting protocols require spending within certain time periods – or else lapsed funds will be lost. In some cases, volumes can increase due to overlaps, duplications and fragmentations which are inadvertent. This is the case in several government program areas.
To address – and at minimum complement – transactional measurement, managers and staff need to:
- Recognize reach and engagement as a fundamental part of results and impact – and the building of a network of trust and influence as legitimate outcomes in their own right – suggesting resilient and robust achievement of impacts, as well as the ability to rely on these results to address emerging challenges as well as current ones. In one recent case, a trust network of farmers built to help in the adoption of beneficial management practices promoting GHG reductions went on to directly help each other in drought conditions. In another case, a network of women at risk of cardiovascular disease went beyond information sharing to empower themselves as advocates in the health system. The trust network itself is therefore possibly more valuable than the immediate transactions it supports.
- Adopt a measurement approach which builds on this idea and shows progress over time
In response to this second point, a mnemonic may be adopted as follows:
| Component | Description | Measurement Indicators |
| --- | --- | --- |
| Reach | The entities and groups in direct contact with agency processes & outputs | Counts of participants (vs expected or estimated total potential) and rating of level of reach (1) |
| Engagement | The level of constructive engagement as per expected role* that is attained by entities reached | Qualitative assessments of participation as determined by participant perception / self-assessment (eg via survey) (2); quantitative coded assessment of observed participation characteristics re: specific program/project communications & by key actors in terms of traits such as access, timeliness, completeness, accuracy (3) |
| Trust | The level of trust by entities reached | Qualitative assessments of indicators of trust (2); qualitative content analysis of tone, tenor & content of correspondence & communications re: information requests & interactions throughout phases of process interactions (4) |
| Influence | The level of influence demonstrated via expected actions by entities reached | Self-assessed ratings of the usefulness and use of information provided (5); content assessment of observed level of required actions taken as a result of decisions (6) |
| Network | The level of connected information sharing, support and actions in entities in the system | Tracking through observation and/or self-assessment of practices in information sharing and application of mitigation or remediation measures in entities beyond those covered in decisions and actions (7) |
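To make the first component concrete: reach (indicator 1) can be tracked as observed participation against an estimated total potential. The following is a minimal Python sketch of that comparison; the `ReachRecord` structure, group names and counts are hypothetical illustrations, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class ReachRecord:
    group: str
    participants: int  # entities actually in direct contact
    potential: int     # expected or estimated total potential

def reach_rate(records):
    """Reach per group: observed participants vs. estimated potential."""
    return {r.group: r.participants / r.potential
            for r in records if r.potential}

# Hypothetical example: a program intended to reach farmers and advisors
records = [
    ReachRecord("farmers", participants=120, potential=400),
    ReachRecord("advisors", participants=30, potential=50),
]
rates = reach_rate(records)  # {'farmers': 0.3, 'advisors': 0.6}
```

Tracked by named entity rather than in aggregate, the same records become the starting point for following the quality of engagement over time, as indicator 1) in the next section suggests.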
In fact RETaIN is more than a mnemonic. It acts as a sequentially applied framework for an approach that puts a premium on measuring relationships and the building of trust as part of impact. The next section suggests some key measures for the approach.
Measures – indicators to support RETaIN.
The following measures – indicators should be considered to support the RETaIN approach. These should be applied either individually or cumulatively – depending on the situation and circumstances of the planning, monitoring, review or analysis.
| Indicator-Measure | Description & Analysis | Use |
| --- | --- | --- |
| 1) Participant Counts / Tracking | Counts of the # of participants at meetings, in online or in-person sessions and/or in terms of correspondence based on requests or unsolicited participation. The analysis should involve comparing counts to expectations re: key groups or individuals participating. This could include the observed category or level of the participant within a represented entity. Note this approach could be conducted precisely by tracking log-ins and registrations formally or by observation or by participant self-assessment. | This measure is key to all phases since in itself it shows the extent that entities which were expected to participate -actually participated. This also – if done by noting the actual entities – can be the start of tracking the nature and quality of the engagement. This in turn is linked to demonstrated trust (see 4)) and actions (eg adoptions of policies, protocols, innovations, processes, practices technologies etc) later in the process. (see 5) and 6)) |
| 2) Participant self-assessment of their reaction, quality and satisfaction with engagement / participation | The assessment of agency program and initiative processes by participants This is often tracked by some programs through project reporting and may also be tracked via survey and/or by coding correspondence, complaints and social media responses where appropriate | This measure will be useful for significant assessments and can be used to complement more quantitative measures like response timeliness and completeness (see 3) and counts of activities and outputs. Given the effort involved in collection- and difficulties with survey response rates – this may be considered only selectively. |
| 3) Rated access, timeliness, completeness, accuracy | This indicator would use recorded access, timeliness, completeness and accuracy assessments as measures of engagement since they can be seen to be measures of desirable engagement traits for entities being asked to provide information to project process phases, applications and service requests. | This measure will be useful for most if not all programs with projects, applications and service functions requiring information inputs from users. It serves as a valid indication of engagement because it represents the actions and exchanges of information between and among entities needed to achieve expected results. |
| 4) Trust analysis | This measure would build up a trust rating rubric – based on the observed results in indicators 2) and 3) above as well as some questions in 5) below and established as an amalgam, index or cumulative score over time. (See various trust hierarchy models and indicators available) Note that the rating rubric would draw from perceptions in 2), observed actions (like timeliness and completeness of information support in 3) and from assessed information use scores related to ‘favouring over other sources’ and other related questions in 5) – as well as potentially generative AI analysis of trust (or lack of trust) drawn from the nature and content of communications and correspondence with key actors. | This metric can be considered very important – perhaps of the utmost importance as a baseline for agency functions of all kinds and so it should be developed carefully and with due reflection on the validity of the rubric. A pilot or set of development and applications trials should be considered on a program or service basis based on their context. |
| 5) Uptake and Use of information (self-assessed and/or observed) | This metric is described in the Knowledge Uptake and Use descriptions shown in The Knowledge Uptake and Use Tool – Performance and Planning Exchange (ppx.ca) and the use of this tool in related evaluations and reviews. The applications here would be for specific information sharing events and products – as part of outreach and engagement efforts and/or other significant communications | This metric can be considered for processes when the intent is to reach and influence those who have been associated with proponents of a proposed project or initiative (eg the adoption of significant products, processes, policies, technologies, practices etc) and those surrounding the application. It links directly to 6) below. |
| 6) Observed level of adoption of new products, technologies, processes, policies, practices, etc. | This metric can be considered part of the follow-up on projects duly carried out by an agency and its co-delivery partners towards the end of project interventions and following their conclusion. Note that AI tools could be trained to look for evidence that actions have been reported before any formal survey, testing or follow-up inquiry is undertaken. So this measure could be undertaken without requiring major direct follow-up actions using human resources or reviewers. | This metric can be considered a valid and directly relevant measure of the influence of an agency since it looks to see if expected actions have been taken by the proponents, recipients and others in the sphere of influence. The resource effort to do this consistently may be prohibitive in some cases – so content analysis of reports, correspondence and other media (potentially assisted by AI applications) could be used to assess level of progress in adoptions and change. |
| 7) Network change analysis & scale-up | This measure is really a periodic assessment based on observed and reported adoptions of knowledge (eg citations) practices and protocols, policies etc over time related to information sharing and capacity leading to actions (reported changes) in terms of practice changes, adoptions and other innovations – beyond those directly supported – that can be linked to agency support work. | This kind of measure should be considered only after a project (significant part of a program) has run its course. This may be considered the kind of measurement that would be done via case study on a selective basis and /or as part of an evaluation. For scale-up definitions see https://www.idrc.ca/en/book/scaling-impact-innovation-public-good |
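Indicator 4) describes building a trust rating as an amalgam or index of indicators 2), 3) and 5), tracked cumulatively over time. A minimal Python sketch of one such weighted amalgam follows; the weights, the 0-1 normalization and the quarterly figures are purely illustrative assumptions, and any real rubric would need the validation piloting the table calls for:

```python
def trust_index(perception, engagement_traits, use_score,
                weights=(0.4, 0.3, 0.3)):
    """Illustrative trust amalgam (indicator 4).

    perception:        normalized 0-1 score from self-assessments (indicator 2)
    engagement_traits: normalized 0-1 score from rated access/timeliness/
                       completeness/accuracy (indicator 3)
    use_score:         normalized 0-1 information-use score (indicator 5)
    weights:           hypothetical weighting; a real rubric would be piloted
    """
    w2, w3, w5 = weights
    return w2 * perception + w3 * engagement_traits + w5 * use_score

# Hypothetical quarterly observations showing progress over time
quarterly = [
    {"perception": 0.55, "engagement_traits": 0.60, "use_score": 0.40},
    {"perception": 0.70, "engagement_traits": 0.72, "use_score": 0.65},
]
scores = [trust_index(**q) for q in quarterly]  # rising trust trend
```

The point of the sketch is the structure, not the numbers: a transparent, repeatable composition of the component indicators lets the trust score be examined and challenged, rather than standing as an opaque judgment.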
In summary, the RETaIN framework is proposed as a response to the inadequacies of transaction-based measurement systems in policy, program and intervention planning, measurement and evaluation. Traditional approaches that focus on transactional measures of outputs and outcomes (like attendance, downloads or agreements – or even counts of high-level volumes like economic dollar indicators) have resulted in fragile and inequitable systems. These methods often ignore deeper systemic issues, particularly how power dynamics, exclusions and aggregations mask the impacts on marginalized groups. By prioritizing volume over quality, such transactional metrics have led to skewed incentives, reduced resilience, and susceptibility to shocks.
In contrast, RETaIN proposes a shift toward valuing relationships and trust as central to sustained impact and long-term resilience. This means embedding considerations of reach (who is included), engagement (how meaningfully they participate), trust (mutual confidence), and influence (the actual impact on behavior or decisions) into planning, monitoring and evaluation processes. Although this relational approach faces institutional resistance due to traditional structures and practical measurement challenges, we as proponents argue that what is measured shapes what is valued. Building and tracking trust networks – such as among farmers adopting environmentally friendly practices, or persons at disease risk working to empower themselves in the system – has been shown to produce benefits beyond the initial policy goals, like mutual aid during crises.
To operationalize RETaIN, the framework includes specific indicators to assess each dimension. These include participant counts to assess reach, self-assessments and behavioral metrics for engagement, and analytic tools (potentially using AI) to evaluate trust and influence. The goal is to establish valid, meaningful metrics that reflect relationship quality rather than volume. Ultimately, RETaIN encourages organizations to rethink success metrics—not just as a way to measure impact but as a way to foster resilient, equitable systems that are adaptable to ongoing and emerging challenges.
++++++++++++++++++++++++++++++++++
[1] Both Montague (The Three Rs of Performance) and Porteous (Ottawa Public Health guidance) included reach in their work in the mid 1990s. See for example Build reach into your logic model | Better Evaluation
++++++++++++++++++++++++++++++++++
Select References:
Chen H., Practical Program Evaluation: Assessing and Improving Planning, Implementation and Effectiveness. Sage (2005), pp. 27-28. https://methods.sagepub.com/book/mono/practical-program-evaluation/toc
Finegood D., Transactional to Relational – Complex Systems Frameworks Collection – Simon Fraser University
Mayne, J., Useful Theory of Change Models. Canadian Journal of Program Evaluation 30(2), January 2015. DOI: 10.3138/cjpe.30.2.142. https://www.researchgate.net/publication/279533296_Useful_Theory_of_Change_Models
Montague S., & Porteous N., The case for including reach as a key element of program theory. Evaluation and Program Planning, Vol 36, Issue 1, February 2013, pp. 177-183. https://www.sciencedirect.com/science/article/abs/pii/S0149718912000249?via%3Dihub
Montague S., Build Reach Into Your Logic Model (1998) Build reach into your logic model | Better Evaluation
Skinner K., The Knowledge Uptake and Use Tool – Performance and Planning Exchange (ppx.ca)