In 2022, our colleagues at the International Rescue Committee (IRC) were struggling to address one of the world’s most profound child health crises: hundreds of thousands of children in Chad suffering from acute malnutrition—what UNICEF called a “silent emergency.”
We had dollars to spend—nearly $3 million to provide life-saving treatment to 30,000 children in one of the most vulnerable provinces. But we had a problem: we were running resource-intensive malnutrition screening campaigns but not actually finding many malnourished kids. The cost was staggering: at nearly $100 per child admitted for treatment, finding the 30,000 children we aimed to treat would have consumed roughly our entire project budget.
Facing failure, we needed alternative approaches—and we needed them to be much more affordable. The team had an idea drawn from their on-the-ground experience: Why not piggyback off existing immunization outreach, leveraging infrastructure and turning malnutrition and immunization services into a one-stop approach? We quickly reallocated dollars and assessed how costs stacked up against other options. The one-stop idea worked. Since we made this shift, we’ve halved admissions costs and treated thousands more kids.
What we saw in Chad is a powerful example of a deceptively simple idea: use cost evidence to help more people. It’s common sense—but getting there has been a journey.
A Moment of Peril—and Dwindling Dollars
The humanitarian sector faces a massive funding shortfall. Over 55 percent of need, or $32 billion, went unfunded in 2023, and 88 percent of all humanitarian funding comes from just 10 government donor budgets—a precarious formula.
At the same time, the people we serve face enormous overlapping challenges. Crises like conflict, climate change, extreme poverty, and the COVID-19 pandemic have left more people forcibly displaced around the world right now than at any time since the end of World War II, often compounding one another in contexts like South Sudan, Afghanistan, and Yemen. These are further exacerbated by new crises, like conflict in Ukraine and Gaza.
Many donors still operate on funding cycles that are inadequate and inappropriate to meet these polycrises. For example, the average grant cycle in the humanitarian sector is 12-18 months, while the average conflict (and time a refugee is in exile) is roughly 10 years.
More money is one answer, but realistically there will always be a gap between need and funding. How can we do more for the nearly 300 million people worldwide who need aid? We need to maximize impact for every dollar spent.
A 10-Year Cost Evidence Journey
Over a decade ago, the IRC began to stitch cost evidence into the fabric of our work. We were already conducting impact evaluations to rigorously assess the effectiveness of our work. But if we were tracking impact, why not cost? We needed data to make evidence-informed decisions on where to spend more and where to save more to maximize our impact.
It made sense, but it was controversial. We were called unethical. We were told dollar signs detracted from the moral obligation to save lives. Our view, though, was that funders and organizations were already making implicit tradeoffs; why not make them explicit, with thoughtfulness and rigor?
In 2015, the IRC created a “Best Use of Resources” (BUR) team to measure our cost-effectiveness, and established an organizational evidence-based cost methodology to compare the costs of different programs with what they achieve. We built BUR with three goals in mind: establishing efficiency guidance by generating, synthesizing, and using evidence to inform program decisions; pioneering and analyzing new approaches; and further optimizing effective programs for impact and scale.
We then embedded this work in three ways:
First, we put our money where our mouth was. We dedicated resources and staff to cost analysis, believing that this investment would pay dividends in the years to come. IRC’s cost evidence work gained real traction when we created a standalone division with unrestricted funds.
Second, we adapted key internal decision-making processes to require consideration of cost evidence. To give this work teeth and drive accountability, we initially included quarterly costing targets as key performance indicators. We have since retired this rigid and short-term system, but it set a marker and accelerated our journey to centering costing in program design and implementation.
Third, we created a public good for the sector. We recognized that the IRC was a small part of this story. The biggest gains would come from others making major budget allocation and design decisions based on comparable cost data. In 2018, we distilled our methodology into a product called Dioptra, a software tool for rapid, rigorous, and standardized cost-efficiency analyses. Dioptra uses project data and program staff knowledge to calculate the cost per output of any intervention. What started as a tool is now a consortium and a community: nine organizations with a combined $7 billion annual budget have adopted Dioptra.
Together, these initiatives have established a base of cost evidence and a culture of costing at IRC and beyond.
What We’ve Learned
More than 10 years on, this work has shown we can stretch dollars to reach more people. To date, we have conducted over 400 cost analyses across 37 countries to directly examine more than $300 million of humanitarian spending.
Earlier this year, we examined $86 million worth of IRC projects influenced by our cost evidence work. Of the $86 million, we found approximately $28 million in cost gains—a 32 percent efficiency improvement. We achieved this on a modest budget of $1.4 million, equivalent to a 20x return on investment. Some of what we found:
- Scaling cash assistance generates substantially more bang for the buck. In 2022 (the most recent data), the world spent $8 billion on cash assistance programs, equivalent to 17 percent of international humanitarian aid. Using cost-efficiency data from more than 30 cash distribution programs in 17 countries, we found that, for small projects, delivery costs were often so high that recipients received less money than it cost the IRC to deliver that cash. For the same amount of money, a large-scale program can reach roughly twice as many people. Based on this evidence, BUR developed guidance for programs to reach at least 1,000 households to maximize the impact and reach per dollar.
- Not all refugee employment interventions are equal. Around 60 million displaced people live in cities, but we don’t know how best to support them economically. In partnership with the IKEA Foundation, the IRC launched Re:BUiLD, a €30 million ($33.4 million) livelihoods program targeting 20,000 refugees and their host community members in Kampala and Nairobi. Since we didn’t know what might work, we took a multi-pronged approach and coupled it with ongoing analysis. Initial findings suggested that some of our approaches, such as vocational training, were unlikely to be as cost-effective for refugees in these contexts as our benchmark of simply providing business grants. This prompted us to deprioritize some interventions in favor of those we believed could reach more refugees with greater impact per dollar spent. This initial shift already represents an approximately 30 percent savings opportunity, and we will continue to refine our approach as more rigorous evidence comes in. If similar findings applied to programs targeting just 1 percent of urban refugees, they would yield $230 million in cost-efficiency gains, equivalent to reaching around 200,000 more people.
- Dollars for child nutrition can go further. At any given time, 45 million children under 5 worldwide are acutely malnourished. In 2020, the IRC and GiveWell, a nonprofit evaluator focused on cost-effectiveness, began collaborating to determine whether there might be opportunities to efficiently deliver malnutrition treatment in areas with high need and low funding. We co-designed $20 million worth of projects across five Sahel countries at an average cost of $150 per child—around half the historical cost of the IRC’s malnutrition programming. These savings are due in large part to cost-conscious planning, such as focusing on reaching more children in one area rather than spreading ourselves thin over many. We also dedicated ongoing cost analysis to this multi-year project, which enabled us to address cost challenges and identify opportunities (such as those in Chad) that we estimate increased our overall reach by an additional 11 percent.
These were real savings that meaningfully transformed lives and stretched limited funds further.
A Movement, Not a Moment, for Cost Evidence
We’ve proven the concept. We’ve demonstrated cost evidence is applicable across diverse contexts and cost-effective programming is possible in a big organization. And we’ve shown that when done in a focused way, cost evidence can help hundreds of thousands more people.
With crises mounting and budgets dwindling, the challenge now is to dramatically scale the use of cost evidence across the humanitarian and nonprofit landscape. Here’s how we plan to get there.
At IRC, we’re doubling down. We’ve committed to increase our cost-efficiency by $225 million over the next four years, and we are focusing analysis where it can have the most impact—by making calculated bets on where additional evidence will allow us to make the biggest gains on the biggest buckets of resources. We’re currently assessing our anticipatory action climate work—cash transfers sent to affected communities in advance of predictable climate hazards—as one example where we believe more analysis will unlock more savings.
To date, our 400 cost analyses have allowed us to build a transparent evidence base. Any nonprofit can do this by generating and sharing its own evidence, as well as incorporating thoughtful use of existing evidence into its own processes and culture.
We’re building a movement for cost-effectiveness. We believe we can unlock billions of dollars in efficiencies if there is collective action. Our Dioptra partners are working hard to drive this work within their organizations, but we cannot do it alone. We want to work with policymakers to incorporate cost evidence into global policies, such as simplified approaches to treat malnutrition. We also want to support local governments to make globally informed, locally grounded decisions. This is why we’ve developed a tool for ministries of health to plug in their own malnutrition treatment costs and parameters. Critically, we want to align incentives at the source of the funding stream by supporting donors to embed cost evidence in their decision-making.
Ultimately, though, we need a new compact with donors.
Today, we find ourselves at the same inflection point with cost-effectiveness that impact evaluations faced 10 years ago—everyone wants one, even if they’re not sure what they’ll do with it. Some of the most well-intentioned offenders are donors: many organizations generate piecemeal evidence for the sole purpose of donor compliance, rather than any learning objective. For the humanitarian sector to truly embed cost evidence into its decision-making, funders need to step up. This means:
- Financial and political backing to generate and share comparable cost evidence, such as support to scale Dioptra. Funders can help foster a culture of collaboration and learning, not competition.
- Incorporating cost evidence into the Requests for Proposals they design, as well as the funding decisions they make.
- Flexible long-term project funding. No matter how big the evidence base, neither donors nor implementers will ever design a perfect program on paper: we need time and leeway to learn, refine, and scale our work. Multi-year funding is better for people and better value for donors.
Shifting donor relationships may seem quixotic, but we’ve already seen this in action: with support from private donor Open Philanthropy, USAID partnered with us to embed an IRC cost advisor in its office and to fund Dioptra members to conduct 100 cost analyses in program areas where USAID believes it is leaving cost gains on the table. We believe this sets a strong precedent for a sector-wide shift.
A Moral Imperative
At the start of our work on cost evidence, the arguments against us were values-based, warning of a “race to the bottom” that would defund “expensive” programs and organizations. But our colleagues in Chad showed the opposite. Smarter spending is a race to reach the most people with the most impact. It is a moral imperative.
In “The Moral Case for Evidence in Policymaking,” former Hewlett Foundation development economist Ruth Levine (now at the Packard Foundation) writes:
“Values alone are not enough to achieve distributive justice—and that’s where the evidence comes in… Fairness can be achieved only if full and unbiased information is available about current conditions, and about the costs and benefits of one way of acting—one policy option—versus another.”
We agree. Embracing evidence is a choice to keep impact and people at the center of our work. The humanitarian sector needs more funding to meet devastating needs. But it could reach millions more people—with the resources currently available—if we focused on more cost-effective interventions and more cost-efficient delivery. We have proven it is possible, we know what we need, and we have a moral imperative to act.
Read more stories by Justin Labeille & Jeannie Annan.