How do federal auditors estimate improper payments and how do those methods affect national comparisons to Minnesota?
Executive summary
Federal auditors estimate improper payments primarily by selecting statistically valid samples, testing cases against program rules and documentation, and projecting error rates to populations—practices shaped by OMB guidance but implemented unevenly across agencies and states, producing large measurement variability in government-wide totals [1][2][3]. Because some high-risk, state-administered programs lack consistent state data or standardized methodologies, national estimates can be materially different from what an individual state like Minnesota might actually experience or report, and the public record does not provide a definitive, comparable Minnesota figure in the cited reporting [4][3].
1. How auditors build estimates: sampling, testing, and projection
Auditors most commonly rely on statistical sampling when it is impractical to verify every payment: they select a representative sample of payments or cases, review documentation and controls for each sampled item, calculate an error rate, and project that rate to the full payment population to produce a dollar estimate of improper payments [1][5]. Agencies report using statistically valid approaches for most major programs, and when multi-step payment processes are involved auditors may sample only the “high‑risk” steps rather than every transaction, which alters what the projected estimate captures [6][2].
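The sample-test-project sequence above can be sketched in a few lines. This is a simplified illustration, not any agency's actual methodology: the function name, the tuple layout, and the dollar-weighted error rate are all assumptions made for the example.

```python
def project_improper_payments(sample, population_outlays):
    """Project a dollar estimate of improper payments from a tested sample.

    Each sample item is (payment_amount, is_improper) after case review.
    The dollar-weighted error rate from the sample is projected onto total
    program outlays. Hypothetical sketch, not an actual audit procedure.
    """
    sampled_dollars = sum(amount for amount, _ in sample)
    improper_dollars = sum(amount for amount, improper in sample if improper)
    error_rate = improper_dollars / sampled_dollars
    return error_rate, error_rate * population_outlays

# A tiny reviewed sample: one of four payments failed documentation review.
sample = [(1200.0, False), (800.0, True), (1500.0, False), (500.0, False)]
rate, estimate = project_improper_payments(sample, population_outlays=2_000_000_000)
print(f"error rate {rate:.1%}, projected improper payments ${estimate:,.0f}")
```

In practice agencies use stratified designs and confidence intervals around the projection; the point here is only that the headline dollar figure is an extrapolation from a reviewed sample, not a count of observed errors.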
2. Method choices that change the headline numbers
Methodological decisions—not just actual wrongdoing or system failures—drive much of the variation in reported improper payments: whether nonresponse cases are treated as improper, whether recovered amounts are subtracted from estimates, the definition of what counts as “insufficient documentation,” and which program steps are sampled all change the final estimate [2][7]. GAO has repeatedly recommended clearer OMB guidance on issues like nonresponse treatment and testing approaches because inconsistent agency choices can understate or overstate program error rates [2].
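The effect of these choices is easy to demonstrate numerically. The sketch below applies two of the toggles named above (whether nonresponse counts as improper, and whether recoveries are netted out) to the same underlying dollar totals; the function name, argument names, and figures are hypothetical.

```python
def headline_rate(improper, nonresponse, recovered, total,
                  count_nonresponse_as_improper=True,
                  net_out_recoveries=False):
    """Compute an improper-payment rate under two methodological toggles.

    All arguments are dollar totals. The toggles mirror the choices
    discussed above; this is an illustrative simplification, not an
    OMB-prescribed formula.
    """
    numerator = improper + (nonresponse if count_nonresponse_as_improper else 0)
    if net_out_recoveries:
        numerator -= recovered
    return numerator / total

# Identical underlying data, two different "headline" error rates.
args = dict(improper=40e6, nonresponse=15e6, recovered=10e6, total=1e9)
print(f"{headline_rate(**args):.1%}")  # 5.5%
print(f"{headline_rate(**args, count_nonresponse_as_improper=False, net_out_recoveries=True):.1%}")  # 3.0%
```

With the same $40 million of confirmed errors, the reported rate nearly halves depending on two defensible-sounding definitional choices, which is why GAO presses OMB for uniform guidance.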
3. State-administered programs and the data gap problem
A persistent complication is that many large programs are administered by states, and federal estimates depend on states’ willingness and ability to provide the underlying data and participate in standardized measurement efforts; GAO found that some state-administered programs were not included in government-wide totals because of data or methodological gaps [4][8]. Pilot efforts—such as single-audit pilots for TANF where auditors reviewed 208 cases to test controls—show the feasibility of state-level sampling but also expose how varied state practices and statutory limits impede consistent national aggregation [5][8].
4. Analytical tools, recovery audits, and incentives that skew comparability
States and auditors increasingly use data analytics, recovery auditing, and federal tools like the Treasury’s Do Not Pay database to detect potential improper payments, and some states receive program-specific incentives or penalties (for example in the Food Stamp program) that affect reporting and remediation behavior; these operational differences change the observed error rates across jurisdictions and complicate apples‑to‑apples comparisons with a state such as Minnesota [4][9][10]. Recovery auditors focus on anomalies like duplicate payments or invoice errors, which can boost identified recoveries in some programs while other programs continue to report large projected improper payment totals [7].
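A minimal version of the anomaly check recovery auditors run for duplicate payments can be sketched as follows. The field names (`payee`, `invoice`, `amount`) and matching key are assumptions for illustration, not a real recovery-audit or Do Not Pay schema.

```python
from collections import defaultdict

def find_duplicate_payments(payments):
    """Group payments sharing payee, invoice number, and amount.

    A toy duplicate-payment screen of the kind recovery auditors run;
    real systems also fuzzy-match near-duplicates and check vendor
    eligibility lists. Field names are hypothetical.
    """
    groups = defaultdict(list)
    for p in payments:
        groups[(p["payee"], p["invoice"], p["amount"])].append(p)
    return [group for group in groups.values() if len(group) > 1]

payments = [
    {"payee": "Acme Supply", "invoice": "INV-7", "amount": 1200.0},
    {"payee": "Acme Supply", "invoice": "INV-7", "amount": 1200.0},  # duplicate
    {"payee": "Birch LLC",   "invoice": "INV-3", "amount": 900.0},
]
print(len(find_duplicate_payments(payments)))  # 1 duplicate group
```

Because detection like this recovers money after the fact, jurisdictions that run it aggressively can show higher identified recoveries without having lower true error rates, one more reason raw figures resist apples-to-apples comparison.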
5. What this means for national-to‑Minnesota comparisons and reporting limits
Because national improper‑payment figures are built from program‑by‑program methodologies, agency choices, and uneven state participation, a national estimate can differ substantially from what Minnesota’s auditors might find if they used different samples, treated nonresponses differently, or applied data‑analytic tools at different intensity. The reporting consulted does not include Minnesota-specific estimates or standardized state-to-national reconciliations, so definitive comparisons are not available in these sources [4][3]. GAO and others have recommended more prescriptive OMB guidance, expanded state‑federal coordination, and authority for agencies to require needed state data—measures intended to make national and state figures more comparable, but not yet fully implemented [11][3].