Every year, just before Christmas, I get a certain kind of phone call from my mother. She has a hundred dollars to spend and, in the spirit of the giving season, she’d like to donate it to a humanitarian non-profit. She might have read about a recent flood in Pakistan, or the violence in Syria, or that Ethiopia is suffering from a string of bad harvests, and she wants to donate to help. The problem is, she doesn’t have a clue how to judge which organization is doing what, and how well.
She’s checked out Charity Navigator and Guidestar and read about the various NGOs’ overhead ratios and their mission statements. But this information doesn’t tell her much about what the organizations do, and how far her donated dollar will go.
It’s nice to know that the organization spent only 7 cents in the dollar on overheads, she says, but what exactly did the other 93 cents achieve? Is the organization building shelters, or providing micro-loans, or disseminating health information? How does she really know that those interventions make the biggest difference in the lives of beneficiaries?
As an economist, when I hear those kinds of questions, two words immediately come to mind: cost analysis. This is the specialized term that we use to describe a fairly straightforward idea—figuring out how much it costs for each service we deliver, or change we create.
What is surprising, though, is that most non-profits do not routinely do this type of analysis. The humanitarian sector succeeds in tracking its finances up, down, and sideways; the International Rescue Committee (IRC), where I work, has nine separate accounting codes, leaving aside the dozens of other ways that donors ask us to track our expenses. But nowhere in that morass of data is a system for linking costs to outputs (how many latrines we built, how many children attended our reading classes) or outcomes (how many cases of cholera were averted, how much literacy rates increased).
To figure this out for any particular program is a challenge, to say the least. As the head of the IRC’s cost analysis team, I have to sift through anywhere from 4 to 20 documents to find the data I need. If I open all of the spreadsheets needed to see what a single program spent in a given time period, my computer usually crashes.
Despite these hurdles, the IRC is committed to finding out how much it costs to deliver key interventions. This will enable us to better apply our resources and reach as many people as possible with life-saving assistance – and also answer my mother’s piercingly simple question.
We have recently completed eight separate studies showing the cost per output for key humanitarian interventions. The initial results of these studies are instructive, and sometimes counter-intuitive. For example, we saw that there is a lot of variation in how much it costs us to deliver services. In Ethiopia, building latrines in refugee camps cost anywhere from $5 per person, per year in our largest projects to almost $100 per person, per year in our smallest ones. This gives us an actionable lesson: if we can grow the size of the smallest projects in our portfolio, we can take advantage of economies of scale and improve our efficiency.
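The economies-of-scale pattern above can be sketched with a toy calculation. The fixed and variable cost figures below are purely illustrative (only the roughly $5-to-$100 range comes from our Ethiopia studies), but they show why spreading fixed costs over more people drives cost per output down:

```python
# Toy cost-per-output calculation. All input figures are hypothetical;
# they are chosen only to reproduce the rough $5 vs. $100 range above.

def cost_per_person_year(fixed_costs, variable_cost_per_person, people_served):
    """Total annual cost divided by the number of people served."""
    total = fixed_costs + variable_cost_per_person * people_served
    return total / people_served

# A small and a large latrine project with the same fixed overhead
# (staff, vehicles, logistics) but very different reach.
small = cost_per_person_year(fixed_costs=45_000,
                             variable_cost_per_person=4,
                             people_served=500)
large = cost_per_person_year(fixed_costs=45_000,
                             variable_cost_per_person=4,
                             people_served=45_000)

print(f"Small project: ${small:.2f} per person, per year")  # $94.00
print(f"Large project: ${large:.2f} per person, per year")  # $5.00
```

The fixed costs dominate in the small project, which is why growing the smallest projects in a portfolio improves efficiency even when the per-person cost of materials stays the same.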
What makes a program cost efficient isn’t always so straightforward, however. When we looked at malnutrition treatment in Africa, for instance, we expected our programs in Mali to be more efficient than our programs in Niger, because the Malian programs reached a greater percentage of local malnourished children than in Niger. When we ran the numbers, though, it cost around $230 per child served in Mali but only $100 per child served in Niger. This is because in Niger, the prevalence of malnutrition is much higher than in Mali. Even though our Niger programs reached a smaller percentage of malnourished children than in Mali, they reached a higher number of children because there are simply so many malnourished kids in Niger. This made the Niger programs more efficient.
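A back-of-the-envelope calculation makes the prevalence effect concrete. The budgets, populations, prevalence, and coverage rates below are hypothetical (only the roughly $230 and $100 per-child figures come from our studies); the point is that a program covering a smaller share of a much larger caseload can still treat more children, and so cost less per child:

```python
# Illustrative only: hypothetical budgets, populations, prevalence, and
# coverage chosen to reproduce the rough $230 vs. $100 per-child figures.

def cost_per_child(budget, child_population, prevalence, coverage):
    """Return (cost per child treated, children treated)."""
    malnourished = child_population * prevalence
    treated = malnourished * coverage
    return budget / treated, treated

# A Mali-like program: lower prevalence, higher coverage of the caseload.
mali_cost, mali_treated = cost_per_child(
    budget=1_000_000, child_population=100_000, prevalence=0.10, coverage=0.45)

# A Niger-like program: much higher prevalence, lower coverage.
niger_cost, niger_treated = cost_per_child(
    budget=1_000_000, child_population=100_000, prevalence=0.40, coverage=0.25)

print(f"Mali-like:  {mali_treated:,.0f} children at ${mali_cost:.0f} each")
print(f"Niger-like: {niger_treated:,.0f} children at ${niger_cost:.0f} each")
```

Even though the Niger-like program covers 25% of its caseload versus 45% for the Mali-like one, the caseload itself is four times larger, so the same budget reaches more than twice as many children at less than half the cost per child.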
For the IRC, figuring out how to spend resources in order to maximize reach and impact will not be a one-off exercise confined to a report; rather, it will become part of the daily practice of the organization. We are building tools so that when a project coordinator completes their quarterly expense report, it will take them minutes, not days, to match that spending against the outputs or outcomes the project achieved. They will then be able to move resources in real time to where they are most valuable. Those quick analyses will feed upward to dashboards read by country directors and donors, who can then learn faster and in greater detail than ever before which kinds of programs make the most progress towards improving people’s lives.
All this is still a work in progress. It’s hard to convince people to abandon a metric, such as overheads versus program spending, that has been a staple of fundraising for years, especially when making that switch may mean exposing some less-than-flattering results. But we won’t make much progress in reforming humanitarian aid if we keep thinking about costs in overly simplistic ways. We have to do the dirty work of understanding how what we spend relates to what we achieve—we owe this to our beneficiaries and our donors.
Eventually, the information we come up with will replace the vague overhead ratios and mission statements that currently constitute “aid transparency.” This, I hope, will improve the effectiveness and efficiency of humanitarian interventions, not just at the IRC but across the sector. And my mother will no longer have to keep phoning me to try to figure out if her donation will make a difference this holiday season.
Caitlin Tulloch is a Technical Advisor at the International Rescue Committee, where she analyzes the efficiency of social programs in areas affected by conflict or extreme poverty.