What is wrong with current monitoring and evaluation practices?
Questions about whether development programmes “work” have preoccupied the development sector for a long time. In today’s reality of continued economic hardship, coupled with diminishing resources and mounting scepticism about the sector, development practitioners face even more pressure to demonstrate the results, change and impact of their activities. Management consulting in this field therefore has to include monitoring and evaluation.
Measurement practices – monitoring, evaluation and impact assessment – play an invaluable role in providing answers to complex development issues and questions. Lessons from past evaluation studies have shaped our current approach to social, socio-economic and community development.
The field of monitoring, evaluation and impact assessment continues to evolve – reflecting as well as influencing trends in development thinking.
New methodologies and ways of working are being developed to respond to four key pressures:
- Understanding (quantifying and qualifying) development impacts
- Defining funder returns
- Enhancing mutual accountability
- Improving development practices
Development practitioners agree that performance measurement is only effective when it delivers credible evidence on what works and why, which feeds into policies, strategies and programme management decisions.
To define what’s wrong, we need to ask why, notwithstanding the time and resources spent on monitoring and evaluation, we are not moving the needle towards more effective grantmaking practices.
We’d like to share key steps for strengthening learning from monitoring and evaluation:
- Integrating evaluation findings or outcomes with policy-making and strategic planning at all levels of the organisation.
- Using evaluation as an important instrument for knowledge management.
- Setting a positive tone from the top where senior managers show support for learning and accept both positive and negative evaluation findings.
- Creating incentives and systems to ensure that learning becomes part of business as usual in the organisation, including lessons from others’ evaluations.
Good evaluation starts with clearly defining its contribution to planning and achieving organisational objectives and development goals. The role of evaluation is to provide credible, independent evidence about the relevance, effectiveness, efficiency, impact and sustainability of development activities and investment. This information is used to support learning, test theories about how development results can be achieved, strengthen programme design and management, and inform decision-makers. Evaluation should meet development partners’ and social investors’ need for good evidence, and help staff and management design better programmes and strategies. It also informs funding decisions: allocations can be contingent on a programme demonstrating that it can achieve success. Evaluation is part of holding funders and development partners accountable for results.
Failure to understand what is being achieved (or not) can lead to ineffective, mistargeted or poorly implemented development practices. These are not only a waste of money, but they are also costly in terms of lost lives and livelihoods for those who are meant to benefit from development support – people around the world who live in poverty and exclusion.
The following are lessons Next Generation has learned over the past decade from conducting numerous impact assessments of development programmes. We use monitoring and evaluation reports as a critical part of, and input into, the impact assessment process. However, notwithstanding the copious amounts of data we study and analyse, we still see programmes that deliver little or no impact and return, or whose impact is unintended, negative and short-lived.
This has led us to believe that:
- Notwithstanding the time and money invested in current evaluation practice, the data is not used for programme decisions or funding decisions – no learning is evident from the outcomes of evaluation reports.
- Evaluation is not directed purposefully, and there is a mismatch between expectations and resources. Even where evaluation does take place, organisations lack the skills and capacity for evaluation practice, and too little money is set aside for proper and meaningful evaluation.
- Evaluators don’t use suitable research and analysis methods and tools. Although various evaluation methods and approaches exist, there is little depth in the information provided in evaluation reports: little or no interpretation of results and outcomes, and little or no analysis of data. It seems that evaluators simply prefer quantitative data to qualitative data, because interpreting the latter is too complicated and time-consuming.
- Once evaluation reports are delivered, there is little internal understanding, discussion or learning from the findings. It seems that delivery of the evaluation report marks the end of the programme: the report is filed without the findings being interpreted or acted on.
It’s clear that the development industry has its work cut out for it in this regard. So where to from here? Click through to “Towards better M&E practice”.
Author Reana Rossouw is one of Africa’s leading experts on social innovation, sustainable development, shared value and inclusive business strategies. As director of Next Generation Consultants, a specialised management consultancy, she believes strongly in contributing to the development and capacity-building of the sector. She presents practical, useful, interactive master classes on (1) Stakeholder engagement and management, (2) Human rights management, (3) Strategic social investment and (4) Monitoring, evaluation, impact and return on investment assessment. See the full brochure (including dates and booking details).