Towards better monitoring and evaluation practice in performance measurement

25th June 2018

Performance measurement is at the heart of all efforts by social investors and grantmakers to make social/community investment and development more effective. Philanthropists, donors, social and impact investors, governments and development agencies alike need more credible and timely evidence about their performance.

While acknowledging that financial resources alone cannot drive sustainable development, the industry must strive to better understand how different types of development and financial models, as well as performance management tools and activities, fit together to improve outcomes for the communities they serve. The following guidelines are based on the practical monitoring and evaluation experience Next Generation has gained in recent years:

A. Use monitoring and evaluation data

The value of evaluation is widely underestimated. In our opinion, social investors and programme managers should look to evaluation for credible, independent analysis and recommendations to inform their decision-making. Evaluation should help an organisation understand whether it is working effectively and efficiently towards its goals, and identify what results it is achieving and how.

Evaluation can provide much-needed information for social investors and their development partners to credibly say that they know how to achieve success and manage risks. However, investment and development decisions are too often driven by routine (we have always done it this way), assumptions (this worked in one sector, so it will work in rural areas) or hunches (it looks like more school buildings would improve education outcomes), rather than by proven strategies for solving a problem in a given context. With a stronger evidence base, social investors can stand on more solid ground to defend their development programmes to sceptics.

Many development areas are still riddled with knowledge gaps and in many cases it is not clear which approach will work best. Careful experimentation is required to figure out which approaches are working and why, and which contextual factors are critical to achieving desired outcomes. Evaluation is a key link in this learning chain.

A learning culture can create systems that encourage staff and management to flag, investigate and learn from success or failure. If programme staff and management are not willing to examine assumptions or change course, evaluation findings and recommendations will go unheeded and mistakes will be repeated.

Most funders, social investors and grantmakers report that a lack of resources, especially human resources, is a key barrier affecting the quality of evaluation reports.

B. Ensure that evaluation is properly resourced

To produce and use credible evaluation evidence, investors and grantmakers need adequate human and financial resources. This includes creating dedicated capacity to produce and use evaluations, building evaluation competence in operational and management units, and funding the commissioning of studies, data collection, knowledge management and the dissemination of lessons learned. Human resources are just as necessary as funding. Evaluation staff must be professional and suitably qualified to use appropriate methods to examine the activities and development outcomes of interventions, and must have the skills to manage the evaluation process efficiently.

If a grantmaker is serious about doing top-quality social investment and development, it must invest accordingly in evaluation practices. New demands on evaluation divisions – especially more rigorous impact evaluations, closer collaboration with partners, and synthesising findings to report on aggregate development results – have major implications for resource allocation.

C. Integrate evaluation into programme design

Evaluation is not a standalone function. It relies on supporting management systems to provide details on the intervention being evaluated, information on the context in which it is implemented and data on key outcome and impact indicators. To be evaluated effectively, interventions must be able to demonstrate in measurable terms the results they intend to deliver. Programme design, monitoring, performance and knowledge management systems as well as tools for performance measurement complement evaluation and are prerequisites for high quality, efficient evaluations.

Well-designed and effectively monitored programmes will increase the likelihood that social investment has the desired impacts, while making evaluation easier and cheaper. An organisation’s capacity to manage for results depends on having programmes that have clearly defined intended results and a basic understanding of the factors affecting how those results are achieved. Too many projects and programmes cannot be evaluated credibly because of the way they were designed or implemented. Many interventions are ill-conceived, and their goals are not well-articulated; risks are not properly identified and assessed; intervention theories are too general; appropriate indicators are not clearly defined; baseline data is missing; there is no representative sampling; or it is not clear whether the activities were implemented as intended.

The result of such weaknesses in programme design and implementation is that evaluation studies cannot report sufficiently on results at the level of outcomes or impact. When an evaluation is commissioned, additional time and expenses may be incurred to fill the gaps.
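
To make requirements like "appropriate indicators" and "baseline data" concrete, the sketch below shows, in Python, the minimum information a monitoring system might record for each intended result before implementation starts. The structure and field names are illustrative assumptions, not a prescribed standard:

    # A minimal, hypothetical sketch of an "evaluable by design" record:
    # each intended result carries a defined indicator, a baseline and a
    # target before implementation begins.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        result: str        # the intended outcome this indicator tracks
        definition: str    # precisely what is measured, and how
        baseline: float    # the value before the intervention
        target: float      # the intended value by a stated date
        data_source: str   # where the evidence will come from

    # Hypothetical example for an education programme
    literacy = Indicator(
        result="Improved early-grade literacy",
        definition="% of grade-3 learners reading at grade level, per annual assessment",
        baseline=34.0,
        target=50.0,
        data_source="District learner assessment records",
    )

An evaluator who inherits records like these can assess progress against the baseline; without them, that evidence must be reconstructed after the fact, at additional cost.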

D. Be realistic about the outcomes of evaluations

It is important that funders and programme managers have realistic expectations about the types of questions evaluation can help answer. While evaluation is a useful input, it cannot by itself solve all the problems of learning and accountability a funder or development agency may face. Not everything needs to be evaluated continuously. Evaluation topics should be selected based on a clearly identified need and linked to the organisation's overall strategic intent and management objectives.

Social investors are increasingly pursuing more complex, longer-term objectives in partnership with many other actors. Individual donor activities can rarely be linked to results in a simple way. And yet funders are under pressure to demonstrate the results of development assistance. Many are trying hard to report progress on high-level results and to link outcomes to the specific activities they finance or have supported. There is widespread insistence on seeing tangible results and on making every cent count. Sometimes this creates unrealistic expectations for evaluators.

Evaluators are increasingly asked to assess high-level impacts in unrealistically short timeframes, with insufficient resources. Too often this results in reporting on outcomes and impacts that are only loosely, if at all, linked to grantmakers' actual activities. In the worst case, this kind of reporting ignores the broader development context, including the role of government, the private sector and civil society groups, as if the social investor worked in a vacuum.

Evaluations should use suitable research and statistical analysis methods to answer the evaluation questions and assess the development activity in terms of relevance, effectiveness, efficiency, impact and sustainability. There are various evaluation methods and approaches; the purpose and scope of an evaluation (what it aims to find out) will determine which is most appropriate.

There is no single best method for evaluating all development interventions. Experience shows that evaluations are most useful when the methodology fits the questions (what we want to find out), the context and the programme being evaluated. What matters is the flexibility to select the approach best suited to each particular evaluation. Such "best fit" methods are most likely to produce useful, relevant and credible evidence; an overly rigid approach can stifle critical thinking.
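
As one illustration of fitting the method to the question, the sketch below works through a simple difference-in-differences estimate in Python, a common quasi-experimental technique when a comparison group and baseline/endline measurements exist. All figures are hypothetical, and this is one possible "best fit" among many, not a recommended default:

    # Illustrative only: a simple difference-in-differences estimate.
    # All outcome figures below are hypothetical.
    def mean(values):
        return sum(values) / len(values)

    # Average outcome scores (e.g. literacy test results) per community
    treated_baseline = [52, 48, 55, 50]   # communities in the programme
    treated_endline  = [63, 60, 66, 61]
    control_baseline = [51, 49, 53, 50]   # comparable communities outside it
    control_endline  = [55, 53, 57, 54]

    treated_change = mean(treated_endline) - mean(treated_baseline)   # 11.25
    control_change = mean(control_endline) - mean(control_baseline)   #  4.00

    # The estimated effect is the change over and above the trend seen
    # in the comparison group.
    did_estimate = treated_change - control_change
    print(f"Estimated programme effect: {did_estimate:.2f} points")   # 7.25

The same questions could equally call for a randomised design, case studies or qualitative inquiry; the point is that the choice follows from the question and the context.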

E. Evaluations should be collaborative and representative

Evaluation can be a useful entry point for working with partners because it provides an opportunity to discuss and build consensus on strategic objectives, as well as to analyse critically the approach to development and assess shared results. Collaboration in planning and carrying out evaluations can create a basis for mutual accountability. Collecting evidence from intended beneficiaries and local partners can also help ensure that the evaluation analysis accurately reflects realities on the ground.

Therefore, evaluations should involve all relevant stakeholders (governments, other funders, intermediaries and beneficiaries) as suited to the topic at hand. Other ways of collaborating include sharing data, reports and context analyses; commissioning syntheses of evaluation findings, meta-evaluations and peer reviews; and developing common definitions and methodologies.

Capacities to produce and use evaluative evidence are required, not just to ensure that donor and investor funds are well-spent, but also to help make public institutions and policies more effective at reducing poverty and stimulating economic growth.

F. Use evaluations to inform decisions

Evaluation practices and findings should influence the decisions and actions of development policymakers and senior managers. To achieve this, procedures should be put in place to ensure that appropriate and timely actions are taken by those responsible for strategic planning, programme design and oversight. This usually involves a formal management response system that requires senior executives or programme managers to respond to each evaluation finding and recommendation.

More informal feedback loops are also used to help managers learn about and use evaluation findings. Reporting on and monitoring the implementation of management responses is crucial to ensure follow-up and to highlight useful changes made as a result of evaluation.
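
A minimal sketch of how such a management response record might be structured follows below; the fields and status values are illustrative assumptions, since actual systems vary widely:

    # Hypothetical sketch of a management response register: each evaluation
    # recommendation gets an explicit response, an owner and a follow-up status.
    from dataclasses import dataclass

    @dataclass
    class ManagementResponse:
        recommendation: str   # the evaluation recommendation being addressed
        response: str         # e.g. accepted / partially accepted / rejected
        owner: str            # who is accountable for acting on it
        due_date: str         # when the agreed action should be completed
        status: str           # e.g. "not started", "in progress", "completed"

    register = [
        ManagementResponse(
            recommendation="Collect baseline data before rolling out new sites",
            response="Accepted",
            owner="Programme manager",
            due_date="2018-12-31",
            status="in progress",
        ),
    ]

    # Periodic reporting on open items is what makes follow-up visible.
    open_items = [r for r in register if r.status != "completed"]
    print(f"{len(open_items)} recommendation(s) awaiting completion")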

To achieve the desired goals of evaluation, the findings and conclusions must be communicated effectively. Effective communication involves delivering messages and presenting evidence in a clear, easily understood way that is immediately accessible to stakeholders.

Good communication supports learning and the use of evaluation findings. Communication also entails being transparent about results. Evaluation can bring hard evidence and a credible, independent voice to public education campaigns on development. By demonstrating that the social investor is candid about what it is achieving with shareholders' money, good communication based on evaluation evidence can reinforce the credibility of the development programme and increase public awareness about development.

Simply put, a low-quality evaluation will not be useful. Poor evidence, flawed analysis or the wrong advice can do more harm than good.

About the author: Reana Rossouw is one of Africa's leading experts on social innovation, sustainable development, shared value and inclusive business strategies. As director of Next Generation Consultants, a specialised management consultancy, she believes strongly in contributing to the development and capacity-building of the sector. She presents practical, useful, interactive master classes on (1) stakeholder engagement and management, (2) human rights management, (3) strategic social investment and (4) monitoring, evaluation, impact and return on investment assessment. See the full brochure (including dates and booking details).
