While everyone wants to be impactful, what is missing is a shared understanding of what that means.
- The first thing to keep in mind is that impact usually has positive connotations.
- The second point is that impact is defined as the final, often long-term, result of a programme or investment.
- Thirdly, impact is generally described as a meaningful improvement in the lives of others.
However, two further points must be considered in the context of impact:
- Impact is proportional to the size of your grantmaking or investment.
- Impact is measured across both short- and long-term outcomes. It is not only about systemic change but also about the results and changes you can achieve along the way.
This means that to really know whether or not you have had an impact, you need to understand what would have happened anyway, were it not for a particular programme or investment (known as the counterfactual). You also need to be able to tell whether any results have been sustained, bringing long-term benefits for beneficiaries.
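The counterfactual logic above can be sketched as simple arithmetic. The sketch below uses entirely hypothetical numbers for an imagined jobs programme; it is only an illustration of the principle that attributable impact is the difference between observed outcomes and the counterfactual, not a real evaluation method.

```python
# Illustrative sketch (hypothetical numbers): impact as the difference
# between observed outcomes and the counterfactual.

def estimated_impact(outcome_with_programme: float,
                     counterfactual_outcome: float) -> float:
    """Impact attributable to the programme is the change it caused,
    not the change that would have happened anyway."""
    return outcome_with_programme - counterfactual_outcome

# Suppose 70% of participants found work after a (hypothetical) jobs
# programme, but a comparison group suggests 55% would have found work
# in any event.
impact = estimated_impact(0.70, 0.55)
print(f"Attributable impact: {impact:.0%}")  # 15%, not the headline 70%
```

The point of the sketch is that the headline outcome figure overstates impact unless the counterfactual is subtracted from it.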
The only real way to know is to use evidence. Evidence is independently assessed proof that a body of work achieves its intended effect.
There are several different kinds of evidence; indeed, there is a whole hierarchy, and different types will be appropriate in different circumstances. For example:
- Level I: Evidence from a systematic review or meta-analysis of all relevant RCTs (randomized controlled trials), from evidence-based clinical practice guidelines based on systematic reviews of RCTs, or from three or more good-quality RCTs with similar results.
- Level II: Evidence obtained from at least one well-designed RCT (e.g., large multi-site RCT).
- Level III: Evidence obtained from well-designed controlled trials without randomization (i.e., quasi-experimental).
- Level IV: Evidence from well-designed case-control or cohort studies.
- Level V: Evidence from systematic reviews of descriptive and qualitative studies (meta-synthesis).
- Level VI: Evidence from a single descriptive or qualitative study.
- Level VII: Evidence from the opinion of authorities and/or reports of expert committees.
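The hierarchy above can be encoded as a simple ordered lookup, which makes the ranking explicit when comparing the relative strength of two pieces of evidence. This is only a sketch; the level descriptions are abbreviated from the list above.

```python
# Sketch: the evidence hierarchy as an ordered mapping (Level 1 strongest,
# Level 7 weakest), abbreviated from the levels listed above.
EVIDENCE_HIERARCHY = {
    1: "Systematic review / meta-analysis of RCTs",
    2: "At least one well-designed RCT",
    3: "Controlled trial without randomization (quasi-experimental)",
    4: "Case-control or cohort study",
    5: "Systematic review of descriptive/qualitative studies",
    6: "Single descriptive or qualitative study",
    7: "Expert opinion / committee report",
}

def stronger_evidence(level_a: int, level_b: int) -> int:
    """Return the stronger of two evidence levels (lower number = stronger)."""
    return min(level_a, level_b)

# A cohort study (Level 4) versus a well-designed RCT (Level 2):
best = stronger_evidence(4, 2)
print(best, "-", EVIDENCE_HIERARCHY[best])
```

The design choice worth noting is that "stronger" simply means a lower level number, mirroring how the hierarchy is conventionally numbered.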
Why is good impact evidence hard to come by?
The only real way to know whether or not you are having an impact is through evidence, but whether sufficient evidence is available is debatable. While there is certainly a lot of information and reporting out there, good impact evidence is difficult to find for several reasons:
- Lack of good data: The format of information shared by charities/investees is often inconsistent, with little indication of how the programme/investment helped beneficiaries over time. This means that organizations may be able to report on outcomes but find it difficult to prove longer-term impact.
- Lack of skills and knowledge: To develop a strong evidence base, you need data scientists and economists – skills that social purpose/investee organizations do not normally have in-house. Equally, funders/investors often do not have these backgrounds and so do not know what to look or ask for.
- Lack of funds: Most non-profit/investee organisations do not have the funds to recruit for these skills. It is also more difficult for charities to secure general support funds that would enable them to step back and assess what, and how, they are doing. Donors usually prefer to give ‘restricted funds’ for the delivery of specific programmes and plans. The net result is that charities have little room to take on additional work beyond delivering their programmes.
- Lack of infrastructure: Because there is no centralised platform for sharing evidence of high-impact programmes, the use and dissemination of such evidence is severely hampered.
So what is impact reporting?
If good evidence is hard to come by, then why is so much impact reporting being done, and how can it be better?
Currently, charities are compiling impact reports because donors ask for and expect them, rather than because charities see the benefits of doing so. This has several consequences.
- The first is that charities tend to view impact reporting as a way to secure more funding. This has led to a rise in unethical behaviour and a proliferation of organisations claiming to measure impact when in fact they are sharing stories and short-term outcomes without robust long-term evidence.
- The second consequence is that it is the donor, not the charity, who decides the impact that a charity should have. Donors come to an organisation with a specific programme or idea in mind and expect diligent reporting and results on it, when in fact it may not be in line with the charity’s mission, nor what the charity would choose to direct the money toward if given the choice.
Finally, there is the question: if capturing and understanding evidence is rarely done, what is recorded in impact reports, and what are donors asking for?
- First, it is currently very common for different funders supporting the same charity and the same project to ask for impact reports with different requirements. This naturally places a big burden on charities.
- Second, just as many charities lack the evidence base to prove their impact, many funders lack the expertise to know what good evidence looks like and what they should be asking for; most of what gets reported are outputs.
Ultimately, there needs to be a more joined-up and efficient way of demonstrating impact, and evidence building should not be about securing more donations. First and foremost, it should help the social purpose organisation understand the benefit of its work, if any, and how it can improve.
If donors have researched the evidence base for a charity’s work and feel confident in the long-term benefits of that work before donating, then perhaps they will feel more confident in making unrestricted donations and attach fewer reporting requirements to the grant itself.
What information could donors be asking for?
We have put together some recommendations that we hope are helpful.
- Find out the methodology or logic behind how the charity has collected and interpreted its programme data.
- Ask how the evidence demonstrates long-term benefits for beneficiaries, if any.
- Ask yourself how fantastical the claims are. If an organisation is making claims that are absolute, rather than relative, that is worth exploring with them.
- Most importantly, ask whether the charity has a counterfactual or control group against which its programme can be compared, so that you can understand what might have happened otherwise.
There are other important points to bear in mind. For example, some measures of impact place all the emphasis on the end result. The risk here is that the way the programme is implemented, or the way beneficiaries feel about it, is totally ignored.
To capture this information, look at two different forms of evidence:
- Process evaluation: This looks at a step-by-step process and how that process delivers certain results.
- Consistent qualitative evidence: If you want to know what beneficiaries think, this is the most useful way to do that. Generally speaking, qualitative evidence is quite low on the hierarchy of evidence we referred to, but when it comes to reporting beneficiary experience it is good evidence to use.
The world of impact and evidence is a complicated one, because achieving social change is hard. After all, if it were easy, there would be no need for philanthropy. Ultimately, before deciding to fund any charity, it is useful to stop and think more deeply about the problem you wish to solve. Evidence can go a long way in helping donors and organisations answer these questions.