How do charity watchdogs (CharityWatch, Charity Navigator, BBB Wise Giving Alliance) calculate and adjust administrative and overhead ratios?
Executive summary
CharityWatch, Charity Navigator, and the BBB Wise Giving Alliance all judge financial efficiency with ratios built from program, fundraising, and administrative spending, but they differ sharply in which ratios they publish, how they calculate the inputs, and how aggressively they reclassify or adjust reported figures before scoring: CharityWatch adjusts reported figures and publishes Program % and Cost to Raise $100, Charity Navigator revised its rating model in 2023, and the BBB recommends program-commitment thresholds [1] [2] [3].
1. CharityWatch: deep forensic adjustments, two endpoint metrics
CharityWatch does not simply recite a charity’s tax-form numbers. Its analysts perform an in-depth financial review and make explicit adjustments (reallocating employee compensation among program, management & general, and fundraising, and treating non-cash gifts, high asset holdings, and joint costs according to internal rules), then report two endpoint metrics: Program % and Cost to Raise $100, which drive its A+ to F letter grades [1] [4] [5].
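CharityWatch’s full internal rules are not published in detail, so the following is only a sketch of the arithmetic of one common adjustment: reclassifying joint solicitation costs from program to fundraising. The function, field names, and dollar amounts are hypothetical illustrations, not CharityWatch’s actual procedure.

```python
# Hypothetical sketch: not CharityWatch's actual procedure or figures.

def adjusted_metrics(program, admin, fundraising, contributions,
                     joint_costs_reclassified=0.0):
    """Move joint solicitation costs out of program and into fundraising,
    then recompute Program % and Cost to Raise $100."""
    program_adj = program - joint_costs_reclassified
    fundraising_adj = fundraising + joint_costs_reclassified
    total = program_adj + admin + fundraising_adj
    program_pct = 100 * program_adj / total
    cost_to_raise_100 = 100 * fundraising_adj / contributions
    return round(program_pct, 1), round(cost_to_raise_100, 1)

# As reported: $8M program, $1M admin, $1M fundraising, $9M contributions.
print(adjusted_metrics(8e6, 1e6, 1e6, 9e6))              # (80.0, 11.1)
print(adjusted_metrics(8e6, 1e6, 1e6, 9e6,
                       joint_costs_reclassified=1.5e6))  # (65.0, 27.8)
```

In this invented example, moving $1.5M of joint costs drops Program % from 80% to 65% and more than doubles the Cost to Raise $100, which is how a single reclassification can move a letter grade.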
2. Charity Navigator: program percentage central, overhead ratio de‑emphasized
In a 2023 methodology update, Charity Navigator removed the administrative-expense ratio as a central rating input; it now generally gives full credit to charities whose program expense ratio meets or exceeds roughly 70% of total expenses. The program expense ratio itself is calculated as program services expenses divided by total expenses, and Navigator has reframed its emphasis toward outcomes and other measures rather than raw overhead alone [2] [6] [7].
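A minimal sketch of that calculation, assuming a flat 70% cutoff; Charity Navigator’s actual scoring tables are more nuanced than a single pass/fail check, and the function names here are invented:

```python
# Simplified sketch: a flat 70% cutoff standing in for Charity Navigator's
# actual (more nuanced) scoring tables.

def program_expense_ratio(program_expenses: float, total_expenses: float) -> float:
    # program services expenses / total expenses
    return program_expenses / total_expenses

def full_program_credit(program_expenses: float, total_expenses: float,
                        threshold: float = 0.70) -> bool:
    return program_expense_ratio(program_expenses, total_expenses) >= threshold

print(program_expense_ratio(7_200_000, 9_600_000))  # 0.75
print(full_program_credit(7_200_000, 9_600_000))    # True: at or above 70%
```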
3. BBB Wise Giving Alliance: thresholds and governance focus
The BBB Wise Giving Alliance applies standards that include a recommended program-to-total-expense threshold, often cited at around 65% program expense, and its reviews also weigh governance and transparency; reviewers flag charities that decline to provide requested information, and the BBB’s approach is standards-based rather than purely algorithmic [2] [3].
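As a hedged illustration of a standards-style (rather than score-based) review, the check below applies the roughly-65% program-expense figure cited above alongside a transparency flag; the flag wording and function shape are invented for illustration, not the Alliance’s actual process:

```python
# Illustrative only: the 65% threshold is cited in the text above, but the
# flag names and structure here are invented, not the Alliance's process.

def bbb_style_flags(program_expenses: float, total_expenses: float,
                    provided_requested_info: bool) -> list[str]:
    flags = []
    if program_expenses / total_expenses < 0.65:
        flags.append("program spending below 65% of total expenses")
    if not provided_requested_info:
        flags.append("declined to provide requested information")
    return flags

print(bbb_style_flags(5_850_000, 10_000_000, provided_requested_info=False))
# ['program spending below 65% of total expenses',
#  'declined to provide requested information']
```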
4. The math: program percentage, fundraising efficiency, and Cost to Raise $100
The basic arithmetic the watchdogs use is straightforward: program expense ratio = program services expenses ÷ total expenses, and fundraising efficiency is expressed either as fundraising costs ÷ contributed revenue or as the cost to raise $100 (e.g., CharityWatch’s Cost to Raise $100). Differences arise in what each group counts as “program,” “administration,” or “fundraising,” and in whether in-kind gifts or investment income are removed from the denominators [2] [8] [1] [9].
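Collected in one place, the three formulas look like this. The dataclass and field names are illustrative stand-ins for line items an analyst would pull from IRS Form 990 or audited financial statements:

```python
from dataclasses import dataclass

@dataclass
class Financials:
    program_expenses: float
    admin_expenses: float
    fundraising_expenses: float
    contributed_revenue: float

    @property
    def total_expenses(self) -> float:
        return (self.program_expenses + self.admin_expenses
                + self.fundraising_expenses)

    def program_expense_ratio(self) -> float:
        # program services expenses / total expenses
        return self.program_expenses / self.total_expenses

    def fundraising_efficiency(self) -> float:
        # fundraising costs / contributed revenue
        return self.fundraising_expenses / self.contributed_revenue

    def cost_to_raise_100(self) -> float:
        # dollars spent on fundraising per $100 raised (CharityWatch-style)
        return 100 * self.fundraising_efficiency()

f = Financials(7_500_000, 1_200_000, 1_300_000, 9_500_000)
print(f.program_expense_ratio())  # 0.75
print(f.cost_to_raise_100())      # ~13.7
```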
5. Adjustments matter: joint costs, in‑kind gifts, reserves and allocation games
Watchdogs flag accounting choices that change ratios: charities can classify some solicitation activities as program or education, inflate program percentages via restricted grants that cover overhead, or report large in‑kind gifts that boost program dollars without comparable fundraising expense; CharityWatch explicitly adjusts for joint costs and non‑cash items, and other watchdogs warn these practices can distort comparability unless accounted for [1] [10] [5].
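To see why in-kind treatment matters, here is a hypothetical before-and-after: a charity reports $6M of donated goods inside both its program expenses and its contributions, and an analyst strips them from both sides. All numbers are invented, and this is not any watchdog’s actual adjustment procedure:

```python
# Hypothetical illustration of how large in-kind (non-cash) gifts can
# inflate Program % and flatter the Cost to Raise $100.

def metrics(program, admin, fundraising, contributions):
    total = program + admin + fundraising
    return (round(100 * program / total, 1),
            round(100 * fundraising / contributions, 1))

# As reported: $6M of donated goods counted in both program expenses and
# contributions, alongside $4M of cash program spending.
print(metrics(program=10e6, admin=1e6, fundraising=1e6,
              contributions=9e6))   # (83.3, 11.1)

# After excluding the $6M of in-kind items from both sides, the cash-only
# picture looks materially less efficient.
print(metrics(program=4e6, admin=1e6, fundraising=1e6,
              contributions=3e6))   # (66.7, 33.3)
```

Excluding the donated goods cuts the program percentage from about 83% to 67% and triples the cost to raise $100, which is why watchdogs treat large non-cash gifts with caution.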
6. Critiques, motives and the evolving narrative about “overhead”
A long-running pushback, represented in sector commentary and a 2013 open-letter movement, argues that over-reliance on overhead ratios is misleading, that administrative spending can be necessary for impact, and that rating models shaped by donor demand can incentivize cosmetic accounting. Both critics and some watchdogs have revised methodologies to emphasize outcomes, yet different agendas persist: donors want simple signals, watchdogs compete for relevance, and charities respond to the incentives these metrics create [7] [6] [10].
7. Practical takeaway for interpreting ratios
When you compare charities, the numbers are only as honest as the treatment behind them: check whether a watchdog adjusted the reported figures (CharityWatch does), whether program ratios exclude investment or in-kind income (models differ), and whether a service like Charity Navigator prioritizes program share or broader impact metrics. Understanding those methodological choices is essential to avoid being misled by superficially similar percentages [1] [2] [9].