Are there notable differences in worst-president rankings across political ideology, race, or era?

Checked on December 20, 2025

Executive summary

Yes. Rankings of “worst” U.S. presidents vary noticeably by era and by the audience doing the ranking, with smaller but measurable splits by political ideology and persistent effects tied to race and the demographic makeup of evaluators. Public polls and partisan outlets often produce very different bottom lists than historian surveys do, and Civil War–era presidents repeatedly cluster at the bottom of expert rankings [1] [2] [3] [4].

1. Historian surveys converge on certain eras, especially the Civil War crisis and its aftermath

Across multiple historian-driven compilations, presidents associated with the nation’s mid‑19th‑century crisis (James Buchanan, Franklin Pierce and Andrew Johnson) recur among the worst, a pattern scholars explain by treating leadership during national fracture as decisive; that Civil War–era dominance shows up in historian lists and topical retrospectives [2] [1] [5] [3].

2. Ideology colors but does not wholly determine expert rankings

Surveys that partition respondents by political leaning show differences but not wholesale reversals. Some exercises found only “small differences” between self‑identified liberal and conservative historians, with both groups agreeing on nine of the top ten presidents in one survey [6], while other projects report sharper partisan splits: Democratic‑leaning scholars, for example, rated George W. Bush far worse than Republican scholars, who placed him among the better presidents, leaving him with a middling “average” position overall [6]. Surveys from the 2024–2025 period introduced Donald Trump into many expert rankings, where he appears at or near the bottom of aggregated historian lists even though self‑identified Republican historians sometimes place him higher than non‑Republicans do, illustrating how ideology nudges but rarely flips the overall expert consensus [6] [4].

3. Public opinion and partisan media produce different “worst” lists

Crowd rankings and public‑opinion polls routinely produce lists that diverge from academic surveys: U.S. News synthesized multiple polls to put James Buchanan at the bottom [3], while some public polls of registered voters ranked Joe Biden poorly among recent officeholders (a Daily Mail/J.L. Partners poll cited by one outlet claimed Biden ranked worst among nine recent presidents in that sample), demonstrating that voter sentiment, short‑term controversies and partisan media framing can push contemporary presidents onto “worst” lists even when historians place them differently [7] [3].

4. Race, evaluator diversity and implicit biases shape outcomes and interpretations

Efforts to diversify survey panels change some placements and provoke reexamination: C‑SPAN’s 2021 effort to reflect greater diversity in race, gender and philosophy altered certain rankings, and commentators warned that longstanding prejudices persist because many high‑ranked presidents were slaveholders or otherwise implicated in racial violence, a fact that complicates any consensus about “greatness” or “failure” [8]. Siena’s methodology deliberately does not ask respondents to state their ideology, in order to limit partisan signaling, but critics still argue that ranking instruments and question framing can favor policy legacies tied to particular racial and cultural assumptions [9].

5. Methodology and institutional agendas matter — and produce divergent bottom lines

Different projects use different frames (expert surveys, averaged media polls, crowdsourced lists and ideological blogs), and each brings an implicit agenda: academic projects emphasize long‑run institutional performance and historical context (C‑SPAN, Siena), journalistic aggregates emphasize headline comparisons (U.S. News), and partisan outlets or founder op‑eds often weave policy judgments into moral narratives (examples include NW Progressive and Hungarian Conservative pieces that recast reputations through ideological lenses) [1] [9] [10] [11]. Where methodologies are transparent, readers can see why era, sample composition and question wording push one president to the bottom of one list while leaving him merely middling in another [6] [3].

6. Bottom line: differences exist and are interpretable

Taken together, the reporting shows that era is the strongest predictor of shared “worst” judgments (the Civil War–era failures), that ideology produces measurable but limited divergence among experts, and that race, both as subject matter and as a feature of evaluator composition, shapes rankings and the interpretive arguments that follow. Partisan outlets and public polls amplify short‑term grievances or narrative aims, so the label “worst” depends as much on who is asked, and how, as on the historical record itself [2] [6] [8] [3].

Want to dive deeper?
How do C‑SPAN and Siena College methodologies differ when ranking U.S. presidents?
Which Civil War–era presidential decisions most influence historians’ negative rankings?
How have partisan media outlets’ worst‑president lists changed across the last three presidential administrations?