How do Gallup and Pew differ in calculating presidential approval averages and why does it matter?

Checked on January 25, 2026

Executive summary

Gallup and Pew both report presidential approval but do so with different sampling cadences, historical baselines and survey instruments, producing averages that can diverge for reasons unrelated to public sentiment itself (for example, question wording, mode and aggregation rules) [1] [2] [3]. Those methodological differences matter because they change the meaning of an “average” approval number, affect comparability across presidents and polls, and can influence media narratives and political strategy [4] [5] [6].

1. Different clocks and baselines: continuous tracking versus periodic panels

Gallup’s approval series rests on a long-running approach: periodic multiday polls for mid-20th-century presidents, weekly Gallup Daily tracking during more recent presidencies (notably Obama and portions of Trump), and a return to periodic multiday polling starting in 2019; when producing per-president figures, Gallup averages across the entire period of a presidency [1] [4]. By contrast, Pew’s approval reporting for modern comparisons uses its nationally representative American Trends Panel and annual totals of Pew surveys (Pew’s published comparison used yearly averages of its surveys for 1993–2022), a cadence that differs from Gallup’s daily/weekly tracking model and is tied to Pew’s panel schedule [2].
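To make the cadence contrast concrete, here is a minimal sketch with entirely invented numbers (not real Gallup or Pew data) showing how the same underlying approval trend yields different summaries when read as a whole-term average of frequent tracking readings versus a few periodic readings averaged year by year.

```python
# Hypothetical illustration: one declining approval trend, summarized two ways.
# All numbers are invented for the sketch; they are not Gallup or Pew figures.
import statistics

# Suppose approval drifts from 55 toward 40 over a four-year term (weekly readings).
weeks = 4 * 52
true_approval = [55 - 15 * (w / weeks) for w in range(weeks)]

# "Continuous tracking" style: average every weekly reading across the whole term.
term_average = statistics.mean(true_approval)

# "Periodic survey" style: a few field dates per year, then a yearly average.
# Here we pretend the survey fields in weeks 10, 25 and 40 of each year.
yearly_averages = []
for year in range(4):
    readings = [true_approval[year * 52 + w] for w in (10, 25, 40)]
    yearly_averages.append(statistics.mean(readings))

print(f"Whole-term average (tracking-style): {term_average:.1f}")
print("Yearly averages (periodic-survey-style):",
      ", ".join(f"{y:.1f}" for y in yearly_averages))
```

Neither summary is wrong; they simply answer different questions about the same trend, which is exactly why per-president averages and yearly snapshots should not be compared as if they were interchangeable.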

2. The question and response options: consistency matters and small differences move numbers

Gallup prides itself on asking essentially the same approval question for decades, which supports long-term comparisons back to Truman and earlier. Multiple pollsters, and occasionally Pew on other topical questions, have shown that subtle wording changes and the number of answer choices can shift results noticeably: Ballotpedia cites research showing that even small additions to a question can swing responses by many points, and notes that some pollsters offer more nuanced options (e.g., strongly/somewhat) while others use a simple approve/disapprove binary [7] [3] [6]. That means two polls asking “Do you approve or disapprove…?” can still diverge if one allows “don’t know” or “no opinion,” or if the question is embedded differently in the survey.
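A small arithmetic sketch, using invented response counts, shows how the handling of “no opinion” answers alone can move the headline number even when the underlying responses are identical.

```python
# Hypothetical illustration of how the treatment of "no opinion" responses
# changes the reported approval share. Counts are invented for the sketch.

approve, disapprove, no_opinion = 470, 430, 100  # 1,000 hypothetical respondents

# Report 1: approval as a share of all respondents, "no opinion" kept in the base.
approval_all = 100 * approve / (approve + disapprove + no_opinion)

# Report 2: approval as a share of respondents who expressed an opinion.
approval_decided = 100 * approve / (approve + disapprove)

print(f"Approval, all respondents in base: {approval_all:.1f}%")      # 47.0%
print(f"Approval, opinion-holders only:    {approval_decided:.1f}%")  # 52.2%
```

A five-point gap appears here purely from the choice of denominator, before any difference in wording, mode or timing is considered.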

3. Mode and sampling: telephone, online panels and the ATP distinction

Pew’s historical surveys and the American Trends Panel have used different modes over time: Pew notes that many of its surveys were conducted by telephone before widespread ATP use, and that the ATP is a nationally representative online panel with its own methodology. Gallup’s shift to Gallup Daily likewise involved different fielding logistics and sampling frames, producing a different mix of respondents and timing effects [2] [1]. Mode differences (telephone vs. online panel vs. mixed-mode tracking) affect who responds and when, which in turn affects averages aggregated over weeks, months or years.
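As a rough illustration of why sample composition matters, the sketch below uses invented demographic groups, approval rates and sample mixes. It shows how two modes reaching different mixes of respondents produce different raw toplines, and how post-stratification weighting to common population targets closes the gap only under idealized assumptions (identical opinions within each group, no differential nonresponse).

```python
# Hypothetical illustration: two survey modes reach different mixes of respondents,
# so their raw averages differ even if opinions within each group are identical.
# Groups, approval rates, shares and mode labels are all invented for the sketch.

population_share = {"18-49": 0.55, "50+": 0.45}   # assumed population targets
group_approval   = {"18-49": 0.40, "50+": 0.55}   # assumed approval rate by group

# Mode A (say, an online panel) over-represents younger adults;
# Mode B (say, telephone) over-represents older adults.
sample_mix = {
    "mode_A": {"18-49": 0.70, "50+": 0.30},
    "mode_B": {"18-49": 0.40, "50+": 0.60},
}

for mode, mix in sample_mix.items():
    raw = sum(mix[g] * group_approval[g] for g in mix)
    # Post-stratification weight: population share divided by sample share.
    weights = {g: population_share[g] / mix[g] for g in mix}
    weighted = (sum(mix[g] * weights[g] * group_approval[g] for g in mix)
                / sum(mix[g] * weights[g] for g in mix))
    print(f"{mode}: raw approval {raw:.1%}, weighted approval {weighted:.1%}")
```

In this idealized setup weighting reconciles the two modes exactly; in practice the people a given mode reaches within a demographic group also differ, so mode effects rarely disappear entirely after weighting.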

4. Aggregation choices: averaging across shocks and spikes

Gallup’s practice of calculating a single average approval for an entire presidency smooths short-term volatility — for instance, 9/11-era spikes are diluted into a presidential mean — and provides a long-run comparative metric [4]. Pew’s year-by-year or survey-by-survey approach can emphasize timing-specific shifts tied to events or the panel schedule; analysts using Pew’s yearly averages may therefore see different trajectories than analysts relying on Gallup’s presidential averages [2] [4].
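The effect of the aggregation window can be seen in a toy example; the monthly series below is invented purely to mimic a short-lived rally, not to reproduce any actual president’s numbers.

```python
# Hypothetical illustration of how an aggregation window treats a short-lived spike.
# The monthly series is invented; it is not Gallup or Pew data.
import statistics

# 48 months of baseline approval near 50, with a sharp six-month rally after month 8.
monthly = [50.0] * 48
for m in range(8, 14):
    monthly[m] = 85.0

# Whole-term average (Gallup's per-president style): the spike is diluted.
term_avg = statistics.mean(monthly)

# Year-by-year averages (closer to Pew's yearly comparisons): the spike stands out.
yearly = [statistics.mean(monthly[y * 12:(y + 1) * 12]) for y in range(4)]

print(f"Whole-term average: {term_avg:.1f}")
print("Yearly averages:", ", ".join(f"{y:.1f}" for y in yearly))
```

The same spike that lifts the whole-term mean by only a few points dominates one yearly average, which is why the two series can tell visibly different stories about the same presidency.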

5. Why these differences matter for interpretation and politics

Because the numbers are sensitive to when, how and to whom questions are asked, journalists, scholars and campaigns can pick the series that best supports their narrative: a long-term Gallup presidential average emphasizes historical comparability, while Pew’s panel- and year-based snapshots can highlight recent momentum or decline [4] [2]. Independent aggregators and publishers (Ballotpedia, FiveThirtyEight and others) explicitly note these methodological effects when combining polls, and the choice of included polls and aggregation windows can reflect implicit editorial judgments about reliability and relevance [6].

6. What remains murky and where caution is required

The sources document the broad methodological contrasts (cadence, question consistency, modes and aggregation), but they do not provide a single formula to translate one series into the other; direct point-for-point reconciliation would require the raw polling datasets and weighting schemes that Gallup and Pew do not fully disclose in summary pieces [1] [2]. Users and analysts should therefore treat an approval “average” as a methodological product, not a neutral fact.

Want to dive deeper?
How do poll aggregators like FiveThirtyEight and RealClearPolitics account for differences between Gallup and Pew when reporting presidential approval?
What specific effects do survey mode changes (telephone vs. online panel) have on presidential approval estimates?
How have major events (e.g., 9/11, economic crises) historically altered Gallup’s presidential averages compared with Pew’s year-by-year measures?