How do researchers estimate the number of undocumented immigrants using the ACS and CPS?

Checked on February 3, 2026

Executive summary

Researchers typically use a "residual" approach built on Census Bureau surveys, primarily the American Community Survey (ACS) and the Current Population Survey (CPS): they subtract an independently estimated count of legally present immigrants from the total foreign‑born population measured in those surveys, then adjust for survey undercounts and reporting errors [1] [2]. The ACS is favored for statistical precision, while the CPS is used when timeliness matters; both require further corrections (for naturalization misreporting, emigration, mortality, and special-status groups) that shape the final estimates published by groups such as Pew, DHS, and CMS [3] [4] [1].

1. The basic residual framework: total foreign‑born minus lawful residents

At its core, the residual method compares the survey count of all foreign‑born people to a demographic or administrative tally of those lawfully present; the remaining "residual" is treated as the unauthorized population after further adjustments [1] [2]. Pew, CMS and DHS all describe this same two‑step logic: take ACS or CPS totals, subtract estimated legal entries (green cards, nonimmigrant visas, refugees, naturalizations), and treat what remains, with caveats, as the unauthorized cohort [1] [4].
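The two-step logic can be sketched as simple arithmetic. This is a hedged illustration, not any agency's actual model: the function name, the component categories, and every figure below are illustrative placeholders, not real survey or administrative totals.

```python
# Sketch of the residual framework: survey foreign-born total minus
# an independently estimated tally of lawfully present immigrants.
# All numbers are made-up placeholders (in millions).

def residual_estimate(survey_foreign_born, lawful_components):
    """Subtract the sum of lawfully present components from the
    survey's total foreign-born count; the remainder is treated as
    the unauthorized population before further adjustments."""
    lawfully_present = sum(lawful_components.values())
    return survey_foreign_born - lawfully_present

# Illustrative components, loosely shaped like the legal entries
# named in the text (green cards, nonimmigrant visas, refugees,
# naturalizations).
lawful = {
    "lawful_permanent_residents": 13.0,
    "nonimmigrant_visa_holders": 2.3,
    "refugees_and_asylees": 3.2,
    "naturalized_citizens": 24.5,
}

raw_residual = residual_estimate(survey_foreign_born=54.0,
                                 lawful_components=lawful)
print(f"Raw residual (millions): {raw_residual:.1f}")
```

In practice each component on the lawful side is itself an estimate (administrative counts aged forward with emigration and mortality assumptions), which is why the raw residual is only a starting point.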

2. Why the ACS and CPS? Tradeoffs of size and timeliness

The ACS’s roughly 3‑million‑person annual sample yields much smaller margins of error and better state‑level detail, which is why major series rely on it; the CPS (especially the March ASEC supplement) is smaller but more current and sometimes used for preliminary or provisional estimates [3] [4]. Analysts caution that the CPS’s small foreign‑born subsample inflates sampling variability, so CPS‑based figures are useful for trends but less precise than ACS‑based totals [5] [6].
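The precision gap between the two surveys follows directly from sampling theory: the margin of error for an estimated proportion shrinks with the square root of the sample size. The sketch below uses round, illustrative sample sizes and an assumed foreign-born share, not official design figures.

```python
# Why ACS-based totals are more precise than CPS-based ones: for a
# proportion p estimated from a simple random sample of size n, the
# 95% margin of error is roughly z * sqrt(p * (1 - p) / n).
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

p_foreign_born = 0.14    # illustrative foreign-born share
acs_n = 3_000_000        # ACS annual sample, rounded (see text)
cps_n = 60_000           # CPS household sample, rounded illustration

moe_acs = margin_of_error(p_foreign_born, acs_n)
moe_cps = margin_of_error(p_foreign_born, cps_n)
print(f"ACS MOE: ±{moe_acs:.4%}  CPS MOE: ±{moe_cps:.4%}")
```

With a sample roughly 50 times larger, the ACS margin of error is about 7 times smaller (the square root of 50), which is the arithmetic behind preferring the ACS for state-level detail and the CPS only for timely trend reads.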

3. Key adjustments: undercount, misreporting, and status assignment

Because neither survey asks about legal status, researchers implement several corrections: adjusting for likely undercount of immigrants (often greater for unauthorized people), correcting overreporting of naturalized citizenship among recent arrivals, assigning legal‑status probabilities by country and cohort, and reweighting survey records so demographics align with administrative totals [6] [2] [7]. These steps include emigration and mortality adjustments and sometimes country‑specific ratios that assume relatively stable arrival patterns [2] [1].
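Two of the corrections named above, differential undercount and naturalization misreporting, can be shown as a short sketch. The adjustment rates here are hypothetical placeholders, not the values Pew, DHS, or CMS actually use, and real models apply these corrections at the level of country-of-birth and arrival cohorts rather than one aggregate.

```python
# Hedged sketch of two adjustment steps: move misreported
# naturalized citizens back into the noncitizen pool, then scale up
# for assumed survey undercount. Rates are made-up placeholders.

def adjusted_residual(raw_residual_millions,
                      undercount_rate=0.10,
                      misreported_naturalized_millions=0.5):
    """Apply illustrative versions of two corrections from the text.

    undercount_rate: hypothetical share of unauthorized immigrants
        missed by the survey.
    misreported_naturalized_millions: hypothetical count of recent
        arrivals who report naturalized citizenship but are
        reassigned as noncitizens.
    """
    corrected = raw_residual_millions + misreported_naturalized_millions
    # If the survey observes only (1 - undercount_rate) of the true
    # population, divide to inflate back to the full count.
    return corrected / (1 - undercount_rate)

estimate = adjusted_residual(10.0)
print(f"Adjusted unauthorized estimate (millions): {estimate:.2f}")
```

Note the direction of each step: misreporting corrections add people back into the residual, while the undercount adjustment inflates the whole total, so the final estimate is always higher than the raw residual.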

4. How different groups operationalize the method

Agencies and think tanks follow the residual template but diverge in details: Pew and CMS publish method papers describing cohort averaging, reweighting, and survey choice (ACS vs. CPS) while groups such as the American Immigration Council or researchers following Borjas apply variant assignment rules and inclusion/exclusion decisions for TPS or DACA recipients [4] [8] [9]. Those methodological choices explain small differences — typically a few hundred thousand — between recent published totals from DHS, Pew and CMS [1].

5. Timeliness strategies and provisional estimates

Because ACS releases lag, organizations sometimes use CPS ratios or CPS‑based projections to update ACS‑anchored estimates for the most recent year (CMS’s provisional 2023 estimate is an example), multiplying CPS trend factors by an ACS‑based benchmark to approximate current counts [10]. Critics warn this increases uncertainty: CPS sampling limits and response‑rate shifts (notably during the pandemic) can produce volatile short‑term swings that may not reflect true population change [11] [5].
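The provisional-update approach described above amounts to carrying an ACS-anchored benchmark forward by the relative change observed in the CPS. This is a hedged sketch of that ratio logic with illustrative counts, not CMS's published procedure or its actual inputs.

```python
# Sketch of a CPS-trend provisional update: multiply an ACS-anchored
# estimate by the CPS year-over-year trend factor. All counts below
# (in millions) are illustrative placeholders.

def provisional_update(acs_benchmark, cps_prior_year, cps_current_year):
    """Approximate the most recent year by applying the CPS
    year-over-year change to the ACS-anchored benchmark."""
    trend_factor = cps_current_year / cps_prior_year
    return acs_benchmark * trend_factor

# Illustrative inputs: an ACS-based unauthorized estimate for the
# latest ACS year, and CPS foreign-born totals bracketing it.
provisional = provisional_update(acs_benchmark=11.0,
                                 cps_prior_year=46.0,
                                 cps_current_year=48.3)
print(f"Provisional estimate (millions): {provisional:.2f}")
```

The fragility the critics point to is visible in the formula: any CPS sampling noise or response-rate shift passes straight through the trend factor into the provisional total.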

6. Limitations, debates and transparency needs

Counting people who by definition avoid official disclosure entails unavoidable uncertainty: sampling error, differential undercounting of unauthorized immigrants, assumptions about emigration and naturalization, and pandemic‑era response problems are all sources of potential bias [6] [5]. Methodological transparency and sensitivity testing are common practices — and the close agreement among several independent series is typically presented as corroboration, even while researchers explicitly note margins and alternatives [1] [6].

Want to dive deeper?
How do Pew, DHS and CMS differ in their specific corrections for undercount and naturalization misreporting?
What are the strengths and weaknesses of using CPS-only projections versus ACS-anchored estimates for year-to-year changes?
How do country-specific arrival and emigration assumptions affect unauthorized population estimates at the state level?