Zero Reasons Not to Measure Impact

I’ve just had another person send me a link to the Stanford Social Innovation Review (SSIR) article by Mary Kay Gugerty and Dean Karlan titled Ten Reasons Not to Measure Impact – and What to Do Instead. I thoroughly enjoyed the article, but as the comments below it point out, the title is misleading. Had it instead been called Ten Reasons Many Organizations Should Not Do Strict Randomized Controlled Studies, I would have no issue with anything in the article at all. But as it is, I hate the title.

Full-blown impact studies using control groups can take months and cost many, many thousands of dollars. I believe it would be wasteful for most charities in Canada to undertake such studies. As the article points out, money spent on research that doesn’t help is money wasted.

The key difference between their title and the title of this article is the definition of “measuring impact”. If we were to use their wording, “collect good monitoring data that informs progress”, as the definition of measuring impact, then my title would have been more appropriate for their article. Each of their ten reasons has to do with what I believe are mostly wasteful, overblown studies that are not required in the vast majority of cases.

It is often said that Good is the enemy of Great. In the world of charity program evaluation and reporting, I would argue that Great is the enemy of Good. Charity leaders believe that getting great data would take more time and money than they have, and that they would rather spend those resources helping their clients.

However, the crux of it, to me, is that good data is exactly what will help clients the most. The charities that we have found to have the most impact on their clients collect good data. It may not be randomized controlled data (it almost never is, nor should it be), but it is good enough that they can understand what is happening because of their programs and what happens when they change things in an attempt to improve their programs. That’s the main reason to collect data – to continually improve program outcomes and to make sure that you are using donor dollars to create as much impact as possible.

It could be argued that we are simplifying things too much, that you cannot adequately understand the impact of a charity without some sort of strict control-group evaluation. But I do not believe that, and I worry that this belief is one of the key reasons that charities are NOT collecting the right data. It is too intimidating to get Great data.

Good data would let you see how much you spend to help someone. And Good data would then allow you to compare that to what happens to that person. Do they improve their health? By how much? Do they become employed? How much more money are they making? Are they housed in a better situation than they would have been in? Did you provide them food? How much? Answers to relatively simple questions like these allow a charity to understand how much difference it is making with the dollars it spends and whether changes to its programs are improving outcomes. And, as Kate Ruff and Sara Olsen argue in another SSIR article, The Next Frontier in Social Impact Measurement Isn’t Measurement at All, this data will also allow analysts to determine how much change the programs are creating.

We have analyzed over 200 charities, looking to determine whether they are creating a lot of value with the money given to them or only a little. We are not splitting hairs, wondering whether, for a $100 donation, a charity has produced $210 or $230 of value. That difference is immaterial, and it is not cost-effective to try to understand it. But if you can compare two charity programs, A and B, where program A is creating between $150 and $250 worth of value per $100 and program B is creating between $300 and $400 worth of value per $100, it’s an easy decision.

Most charities do not currently have the data to do this. Some are focused on hair-splitting, which can paralyze a charity, and most are just counting bodies. They may be able to calculate that it cost them $1,345 to help each of their clients, but they have no idea of the value created by helping each client. Is it close to $1,300? Could it be more – $2,000, perhaps? Or are they really making change, and is it more like $5,000? With a bit of relatively inexpensive data on clients, most charities could produce the figures needed for these calculations.

I applaud Gugerty and Karlan’s article, but I frown on their title. At least for those who have, likely blindly, sent it on to me, it perpetuates the notion that there are so many reasons not to do impact evaluation that we may as well just forget it. We don’t need any more reasons not to measure impact.

Just my thoughts,

Greg

 

Greg Thomson

Ci Director of Research
