Do small charities typically report higher program-spending ratios than large charities?

Checked on December 17, 2025


Executive summary

Generally no: the sources reviewed here point the opposite way. Smaller charities often face higher per-dollar overhead and fundraising costs because fixed startup and operating expenses are spread over fewer donations, while larger charities usually gain economies of scale. Multiple practitioner guides and watchdog-adjacent sources note that smaller organizations “may have higher ratios initially” and that larger groups benefit from scale [1] [2]. At the same time, critics and researchers warn that overhead/program ratios are imperfect signals and that donors and funders increasingly reject simple cutoffs in favor of context [3] [4].

1. Why size matters: fixed costs and economies of scale

Smaller nonprofits typically carry the same basic fixed costs as larger ones—staff, accounting, rent, fundraising infrastructure—but must spread them across a much smaller revenue base, which raises fundraising and overhead ratios; fundraising guides explicitly say “Smaller nonprofits may have higher ratios initially due to fixed costs” and that “larger nonprofits often benefit from economies of scale” [1]. Charity benchmarking pieces and accounting advisers reinforce that program expense ratios measure cents of every dollar going to mission work and that scale changes how those cents are allocated [2] [5].
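The fixed-cost effect can be made concrete with a toy calculation. The figures below are hypothetical, chosen only to illustrate how the same overhead spread over a larger expense base lifts the program ratio:

```python
# Toy illustration of economies of scale in program-spending ratios.
# All dollar figures are hypothetical, not drawn from any cited source.

def program_ratio(total_expenses, fixed_overhead):
    """Share of total expenses going to programs, assuming everything
    beyond fixed overhead (rent, accounting, core staff) is program spending."""
    return (total_expenses - fixed_overhead) / total_expenses

FIXED_OVERHEAD = 120_000  # roughly size-independent baseline costs

small = program_ratio(total_expenses=400_000, fixed_overhead=FIXED_OVERHEAD)
large = program_ratio(total_expenses=4_000_000, fixed_overhead=FIXED_OVERHEAD)

print(f"Small charity program ratio: {small:.0%}")  # 70%
print(f"Large charity program ratio: {large:.0%}")  # 97%
```

The identical $120,000 of overhead consumes 30 cents of every small-charity dollar but only 3 cents of every large-charity dollar, which is the whole mechanism the guides describe.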

2. What watchdogs actually measure and how they treat ratios

Major charity raters calculate program-percentage and efficiency ratios from audited statements and Form 990s, then sometimes adjust reported figures (CharityWatch’s Program % example) to compute “program” versus fundraising and management expenses [6]. Charity Navigator and other rating systems have evolved—Charity Navigator now gives full credit to organizations spending 70%+ on program expenses, reflecting an industry benchmark rather than an absolute truth [5]. Forbes’ methodology for its Top 100 likewise relies on standardized efficiency ratios that can be applied to smaller charities for comparison [7].
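The arithmetic behind these ratings is simple division over a three-way expense split. A minimal sketch, with hypothetical expense figures and the 70% full-credit line cited above as the benchmark:

```python
# Sketch of the efficiency ratios raters derive from Form 990-style
# expense categories. The three-way split mirrors the program /
# management / fundraising breakdown described above; the dollar
# amounts are hypothetical.

def efficiency_ratios(program, management, fundraising):
    total = program + management + fundraising
    return {
        "program_pct": program / total,
        "management_pct": management / total,
        "fundraising_pct": fundraising / total,
    }

ratios = efficiency_ratios(program=780_000, management=120_000, fundraising=100_000)
meets_benchmark = ratios["program_pct"] >= 0.70  # Charity Navigator's full-credit line

print(f"Program share: {ratios['program_pct']:.1%}")  # 78.0%
print(f"Meets 70% benchmark: {meets_benchmark}")      # True
```

Note that watchdog adjustments (such as CharityWatch reclassifying certain joint costs) change the inputs to this division, which is why two raters can report different program percentages for the same charity.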

3. Donor perception vs. financial reality

Surveys and opinion pieces show that donors often equate lower overhead with better stewardship: one study found people perceive smaller charities as spending a lower proportion on administration and overhead, even while viewing larger charities as more capable and more effective communicators [8]. Academic and policy researchers caution that this intuition is misleading: overhead ratios are average measures and can misrepresent the marginal effect of a donation, because fixed costs distort the ratio; donors thereby penalize necessary investment that may enable greater impact later [4].

4. Why ratios can be misleading and how that affects small groups

Researchers argue overhead ratios are “nearly useless” as solitary guides because they are averages, not indicators of the marginal impact of an additional donation; a small charity investing in staff or technology can show worse short‑term ratios while improving long‑term effectiveness [4]. Practical nonprofit advisors also stress that context matters: program ratios vary by mission, funding model and life stage, and benchmarks that work for large hospitals or international NGOs aren’t appropriate for new local groups [9] [2].
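The average-versus-marginal distinction can be illustrated with a hypothetical example: once fixed overhead is covered, an additional donation can flow entirely to programs even though the charity's average ratio still looks mediocre.

```python
# Hypothetical illustration of why the average overhead ratio can
# misstate the marginal effect of one more donation.

budget_program = 300_000   # current program spending
budget_overhead = 200_000  # fixed costs, already fully covered
donation = 10_000          # marginal gift, assumed spent entirely on programs

average_before = budget_program / (budget_program + budget_overhead)
average_after = (budget_program + donation) / (budget_program + budget_overhead + donation)
marginal = donation / donation  # share of the extra dollar reaching programs

print(f"Average program ratio before: {average_before:.1%}")  # 60.0%
print(f"Average program ratio after:  {average_after:.1%}")   # 60.8%
print(f"Marginal program share:       {marginal:.0%}")        # 100%
```

A donor screening on the 60% average would pass over a gift whose marginal effectiveness is 100%, which is the researchers' core objection to ratio cutoffs.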

5. Practical implications for donors and funders

Donors using simple program‑spending cutoffs risk excluding small organizations that need startup investments to scale impact; several voices in the sector and funders like major foundations have argued against strict low‑overhead expectations and sought to fund overhead explicitly [3]. At the same time, watchdogs still provide efficiency metrics because they’re useful starting points; the guidance from Charity Navigator and others about thresholds (e.g., 70% program spending) should be treated as one input, not a definitive judgment [5].

6. Competing perspectives and what’s missing from reporting

Industry guides and fundraisers emphasize scale and benchmarking [1] [5], while researchers demonstrate the statistical shortcomings of overhead metrics and the risk they pose to small charities [4] [3]. Available sources do not mention a single, sector‑wide empirical study that quantifies average program‑spending ratios by size class across the whole nonprofit sector; they provide theory, practitioner guidance and selected survey evidence rather than a definitive cross‑sector dataset (not found in current reporting).

7. How to evaluate a small charity in practice

Look beyond a headline program percentage: ask about trend lines over multiple years, the nature of fundraising investments (one‑time donor acquisition vs. recurring program delivery), auditor adjustments to Form 990 classifications, and whether a higher‑overhead year funded capacity that improved outcomes later. These are the same contextual checks recommended by raters and advisors [6] [2] [1].

Limitations: this analysis draws only on the supplied donor‑advice, watchdog and research snippets; it reflects consensus themes in those sources rather than a new statistical analysis of program ratios by charity size (not found in current reporting).

Want to dive deeper?
How do program-spending ratios vary by charity size and mission?
What accounting practices cause differences in reported program-spending ratios?
Do donor-restricted funds or shared services affect charities' program percentages?
Are program-spending ratios a reliable measure of charity effectiveness?
How do watchdogs and rating agencies adjust for charity size when scoring nonprofits?