What are the most common criteria in C-SPAN presidential surveys?
Executive summary
C‑SPAN’s historians’ presidential surveys use ten fixed leadership categories, each scored 1–10 and equally weighted to produce overall rankings. The categories are Public Persuasion, Crisis Leadership, Economic Management, Moral Authority, International Relations, Administrative Skills, Relations with Congress, Vision/Setting an Agenda, Pursued Equal Justice for All, and Performance Within the Context of His Times; C‑SPAN’s advisory team set this framework in 2000 and has used it in each subsequent survey [1] [2].
1. What the survey actually asks — ten consistent leadership qualities
C‑SPAN’s methodology instructs participating historians and professional observers to rate every president on ten leadership qualities using a scale of one (“not effective”) to ten (“very effective”); those ten categories are listed explicitly on C‑SPAN’s survey pages and methodology documents and have been employed across the 2000, 2009, 2017 and 2021 survey cycles [1] [2] [3].
2. Scoring mechanics and weighting — equal weight to each criterion
C‑SPAN averages participant scores within each category for each president and then gives each of the ten categories equal weight when computing an overall score; participants’ individual responses are kept confidential, and no formal definitions of the categories are provided, leaving historians to interpret them for themselves [1] [2].
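The description above implies a two‑step aggregation: average participants’ scores within each category, then combine the ten category averages with equal weight. A minimal Python sketch under that reading follows; the function name, sample scores, and the final combination step are illustrative assumptions, not C‑SPAN’s published tabulation.

```python
# Minimal sketch of the equal-weighted aggregation described above.
# Hypothetical scores only; C-SPAN has not published its tabulation code.

from statistics import mean

CATEGORIES = [
    "Public Persuasion",
    "Crisis Leadership",
    "Economic Management",
    "Moral Authority",
    "International Relations",
    "Administrative Skills",
    "Relations with Congress",
    "Vision/Setting an Agenda",
    "Pursued Equal Justice for All",
    "Performance Within the Context of His Times",
]

def overall_score(ratings):
    """ratings maps each category name to a list of participants' 1-10 scores.

    Each category is reduced to its participant average, and the ten
    category averages are then combined with equal weight (a simple mean).
    """
    category_averages = [mean(ratings[category]) for category in CATEGORIES]
    return mean(category_averages)

# Three hypothetical participants scoring one president:
example_ratings = {category: [7, 8, 6] for category in CATEGORIES}
print(round(overall_score(example_ratings), 2))  # 7.0
```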
3. Who decides the criteria — an academic advisory team
The ten leadership qualities were recommended by a rotating team of academic advisers — including historians Douglas Brinkley, Edna Greene Medford and Richard Norton Smith — who “helped craft” the categories, suggested participants, and reviewed organization and analysis for each cycle [2] [1]. C‑SPAN’s 2021 materials note that the advisory team continues to guide the survey’s framework [4].
4. How often and to whom the criteria are applied
C‑SPAN has run the historians’ survey in 2000, 2009, 2017 and 2021 and invites historians and other professional observers drawn from C‑SPAN’s programming database and adviser recommendations to participate; the same ten criteria have been applied across these multi‑year cycles to produce time‑series comparisons [3] [2] [5].
5. Interpretation limits and deliberate ambiguity
C‑SPAN does not provide formal definitions of the ten categories, and the network’s advisers have historically said they “wish we had defined ‘greatness’”; that lack of strict definition is by design and means score changes can reflect shifts in historians’ interpretations rather than changes to the instrument itself [1] [6]. Available sources do not mention precise operational definitions for each term beyond the category labels [1].
6. What the categories privilege — leadership skills over ideology
By focusing on skills and performance (public persuasion, crisis leadership, administration, relations with Congress, vision, etc.), the survey emphasizes evaluative leadership traits rather than partisan ideology. C‑SPAN’s advisers and methodology center historians’ professional judgments rather than public opinion polling, producing a scholarly ranking rather than a popularity contest [2] [3].
7. Areas of debate and potential biases
Critics and commentators note the subjectivity of judging “greatness” and worry that undefined categories let respondents emphasize different things over time; student and media write‑ups observe that different historians may weight moral authority or “performance within context” very differently, producing shifts in rankings that stem from interpretation rather than raw events [6] [7]. C‑SPAN’s reliance on a selected panel of historians also means the results reflect that cohort’s judgments; adviser recommendations broaden the participant pool, but its composition still shapes outcomes [2] [5].
8. Evidence of consistency and change across criteria over time
C‑SPAN reports that results have been “fairly consistent” across cycles, but it also highlights notable category shifts: some presidents have posted large gains in particular criteria (C‑SPAN points to Ulysses S. Grant’s marked rise), and the network has tracked which categories show the greatest movement over two decades, such as Pursued Equal Justice for All [4] [3].
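For researchers who want to replicate that kind of comparison, the arithmetic is simple; the sketch below uses made‑up placeholder numbers (not C‑SPAN data) only to show how per‑category movement between two survey cycles can be ranked.

```python
# Hypothetical per-category scores for one president in two survey cycles
# (placeholder numbers, not C-SPAN data).
scores_2000 = {
    "Pursued Equal Justice for All": 34,
    "Crisis Leadership": 61,
    "Moral Authority": 40,
}
scores_2021 = {
    "Pursued Equal Justice for All": 49,
    "Crisis Leadership": 62,
    "Moral Authority": 44,
}

# Change in each category between the two cycles.
movement = {category: scores_2021[category] - scores_2000[category]
            for category in scores_2000}

# List categories by the size of their movement, largest first.
for category, delta in sorted(movement.items(), key=lambda item: -abs(item[1])):
    print(f"{category}: {delta:+d}")
```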
9. How journalists and educators use the criteria
C‑SPAN’s Classroom materials and videos with historians (e.g., Douglas Brinkley) explain the categories and encourage students to analyze changes across surveys, showing the ten‑category framework is used for teaching about presidential leadership as well as producing a public ranking [8] [7].
10. Bottom line for users and researchers
If you’re using C‑SPAN survey results, treat the ten categories as a consistent rubric for historian judgment (each scored 1–10 and equally weighted) but also account for interpretive variance: the categories lack tight definitions, panel composition affects outcomes, and shifts can reflect changing scholarly priorities as much as reassessments of presidents’ deeds [1] [2] [6].