
Fact check: What are the key differences between the Election Truth Alliance's audit methodology and state-led audits?

Checked on October 17, 2025

Executive Summary

The analyses provided show a consistent contrast: the Election Truth Alliance (ETA) describes a manual, percentage-based precinct and ballot audit approach, while state-led audits — exemplified by Washington and Maryland practices — emphasize randomized, statistically grounded methods such as risk-limiting audits and automated tabulation reviews. The material supplied contains limited direct detail on ETA beyond its sampling thresholds, so comparisons rely on state audit descriptions and third-party reporting about election-denial influence for context [1] [2] [3] [4].

1. A Clash Over Sampling: Manual Percentage Checks Versus Statistical Randomness

The analyses indicate ETA’s method centers on a manual audit of at least 2% of election-day precincts and 1% of statewide totals for early, mail-in, and provisional ballots, which frames the audit as a fixed-coverage exercise focused on manually counting pre-specified proportions of ballots rather than testing risk thresholds. State audit descriptions contrast sharply: Washington’s approach uses random batch and random ballot audits—labelled risk-limiting audits—with a 5% risk limit for statewide and county audits, meaning sample sizes and selection are driven by statistical guarantees about the likelihood of correcting a wrong outcome, not fixed percentage coverage [1] [2].
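The distinction can be made concrete with a toy simulation. The sketch below implements a simplified two-candidate ballot-polling test in the style of the BRAVO risk-limiting audit; the sources do not specify which RLA variant Washington uses, and the function name and parameters here are illustrative. The point it demonstrates is that an RLA's sample size is driven by the reported margin and the risk limit, not by a fixed percentage of ballots.

```python
import random

def bravo_sample_count(reported_share, true_share, risk_limit=0.05,
                       max_ballots=100_000, seed=1):
    """Simplified two-candidate BRAVO-style sequential audit (illustrative).

    Draws ballots one at a time and updates a likelihood ratio comparing
    the reported winner's share against a tie (50%). The audit stops and
    confirms the outcome once the ratio reaches 1 / risk_limit.
    """
    rng = random.Random(seed)
    t = 1.0
    for n in range(1, max_ballots + 1):
        if rng.random() < true_share:          # drew a ballot for the reported winner
            t *= reported_share / 0.5
        else:                                  # drew a ballot for the reported loser
            t *= (1 - reported_share) / 0.5
        if t >= 1 / risk_limit:
            return n                           # outcome confirmed after n ballots
    return None                                # inconclusive: escalate to full hand count

# A tight race demands far more ballots than a landslide at the same 5% risk limit:
close = bravo_sample_count(0.52, 0.52)
wide = bravo_sample_count(0.70, 0.70)
```

In this framework a 52-48 contest requires examining thousands of ballots while a 70-30 contest may need only dozens, whereas a fixed 2% sample inspects the same share of ballots regardless of margin and provides no stated probability of catching a wrong outcome.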

2. Technology and Automation: Independent Automated Tabulation Versus Hands-On Counting

Maryland’s 2016 post-election practice, described as an independent, automated tabulation audit, emphasizes maximizing technology to reduce human error and to present audit results transparently, with a reported discrepancy variance level of 0.5%, suggesting tight tolerances and reliance on machine reconciliation. In contrast, ETA’s stated manual-count emphasis implies greater human involvement and different error profiles; manual audits can detect certain physical-chain issues but are more susceptible to human error and procedural variance unless tightly standardized and observed [3] [1].
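As a rough illustration of the tolerance-based reconciliation described for Maryland, the sketch below flags precincts where a retabulated count diverges from the official count by more than a 0.5% variance threshold. The data structures and function name are hypothetical; the sources do not describe Maryland's actual software.

```python
def flag_discrepancies(official, retabulated, tolerance=0.005):
    """Return precincts whose retabulated count differs from the official
    count by more than `tolerance`, expressed as a fraction of the official count."""
    flagged = []
    for precinct, official_count in official.items():
        if official_count == 0:
            continue  # skip empty precincts to avoid dividing by zero
        variance = abs(retabulated[precinct] - official_count) / official_count
        if variance > tolerance:
            flagged.append((precinct, round(variance, 4)))
    return flagged

official = {"P-01": 1200, "P-02": 950, "P-03": 400}
retabulated = {"P-01": 1200, "P-02": 941, "P-03": 401}
print(flag_discrepancies(official, retabulated))  # [('P-02', 0.0095)]
```

The appeal of this style of audit is reproducibility: the same inputs always produce the same flags, which is harder to guarantee for a hand count unless procedures are tightly standardized.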

3. Timelines and Procedural Bounds: Deadlines and Minimums Versus Open Manual Reviews

One state example specifies procedural constraints: Maryland required audits of minimum sample sizes—at least 2% of election-day precincts and 1% of mail-in/provisional ballots with a minimum of 15 ballots per category—and completion within 120 days of the election. ETA’s description in the material notes percentage thresholds but does not detail timing, minimum sample mechanics, observer rules, chain-of-custody procedures, or completion windows, leaving a gap about how ETA would handle logistical safeguards states treat as routine [1] [3].
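The interplay of percentage thresholds and per-category minimums reported for Maryland can be sketched as a simple floor rule; the function and example figures below are illustrative, not drawn from statute text.

```python
import math

def audit_sample_size(total_units, pct, floor=15):
    """Sample at least `pct` of the units, but never fewer than `floor`."""
    return max(math.ceil(total_units * pct), floor)

audit_sample_size(2000, 0.02)  # 2% of 2,000 election-day precincts -> 40
audit_sample_size(600, 0.01)   # 1% of 600 provisional ballots -> floor of 15
```

The floor matters for small categories: 1% of 600 ballots is only 6, so the 15-ballot minimum, not the percentage, sets the sample size there.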

4. Outcome Certainty: Risk Limits and Statistical Confidence Versus Fixed-Rate Inspection

Risk-limiting audits used by states like Washington set a quantified probability (the risk limit) that a wrong electoral outcome would not be corrected by the audit—Washington’s stated 5% risk limit ties the audit directly to outcome confidence. ETA’s fixed-percentage manual sampling does not inherently include a stated risk limit or statistical framework in the provided materials; therefore its capacity to provide equivalent confidence in the correctness of an election outcome is not demonstrated within the sources, creating a key methodological divergence [2] [1].

5. Transparency and Perception: Automation Provides Reproducibility; Manual Audits Offer Visible Counting

Maryland’s automated tabulation audit is presented as maximizing transparency through reproducible machine recounts and standardized reporting with a 0.5% discrepancy variance threshold, which appeals to reproducibility and comparability. ETA’s manual approach may be perceived as more immediate and visible to lay observers, but without documented standards in the provided analyses—such as chain-of-custody protocols, observer access rules, and audit documentation—manual visibility does not automatically translate to systemic transparency or defensible reproducibility [3] [1].

6. Political Context: Election Denialism’s Growing Role and the Risk of Partisan Audits

Investigative reporting documents the movement of election-denialist actors into local election roles, identifying 67 conspiracy theorists in key posts across six states, 22 of whom actively influence election administration. That context is relevant because audit methodology choices become contentious when implemented by actors with explicit agendas; fixed-percentage manual audits championed outside established state frameworks risk being used for political validation rather than objective verification, a dynamic the sources warn about even though ETA itself is not tied to these individuals in the provided material [4].

7. Gaps and Areas That Require Documentation Before Full Comparison

The materials clearly list ETA’s sampling thresholds but lack critical procedural details: statistical rationale, risk limits, chain-of-custody procedures, observer rules, timeframes, and dispute-resolution mechanisms. State methods, by contrast, often document these elements, e.g., risk limits and automated tabulation tolerances. This absence means a full, apples-to-apples evaluation cannot be completed from the supplied analyses; the most that can be said is that ETA’s approach differs in sampling philosophy and is under-documented in safeguards compared with the state practices cited [1] [2] [3].

8. What To Watch Next: Documentation, Standards, and Independent Oversight

Given these differences and the political context, the pragmatic test for assessing ETA against state audits will be whether ETA can produce detailed protocols matching the procedural and statistical rigor states already document: risk calculations, chain-of-custody procedures, minimum sample rules, timelines, and independent observers. Without those, fixed-percentage manual audits will be harder to defend in legal or public-confidence arenas, especially where reporting notes election-denialist actors entering official roles; that dynamic underscores the need for independent oversight and published standards before such methods are adopted or relied upon [1] [2] [4].

Want to dive deeper?
What are the core principles of the Election Truth Alliance's audit methodology?
How do state-led audits ensure the integrity of election results?
What are the key similarities between the Election Truth Alliance's audit methodology and state-led audits?
Can the Election Truth Alliance's audit methodology be used in conjunction with state-led audits?
What are the potential drawbacks of relying solely on the Election Truth Alliance's audit methodology?