What public forensic audits exist for U.S. voting machines used in the 2020 election and what did they find?
Executive summary
Public, post-2020 “forensic” examinations of voting machines fall into two broad categories: routine, state-mandated post-election audits (risk-limiting audits, hand-to-eye counts, logic-and-accuracy testing), which found machine counts accurate; and high-profile, sometimes partisan forensic probes, whose scope and methods varied but which produced no evidence of vote-switching that would have overturned results [1] [2] [3] [4].
1. Routine state audits and recounts confirmed machine accuracy
In 2020, states and counties performed legally required post-election audits, including risk-limiting audits and bipartisan hand counts, that compared paper ballots to machine tallies; where recounts applied, ballots were re-run through tabulators and samples were hand-checked. Those processes confirmed the original counts and led election officials to certify results (North Carolina is one example) [5] [1] [2]. North Carolina’s State Board and 100 county boards conducted post-election audits and a statewide recount in a close judicial contest; the tabulator re-run and a partial hand recount confirmed the certified winner [5] [1]. Pennsylvania’s statewide risk-limiting audit examined tens of thousands of randomly selected ballots and likewise found that the presidential outcome was correctly reported [2] [6].
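The risk-limiting audits mentioned above are statistical procedures, not full hand recounts: randomly sampled paper ballots accumulate evidence for or against the reported outcome until a predetermined “risk limit” is met. As an illustration only (not necessarily the exact method any state used in 2020), the well-known BRAVO ballot-polling variant can be sketched as:

```python
def bravo_audit(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """Minimal BRAVO-style ballot-polling risk-limiting audit sketch.

    sampled_ballots: sequence of 'w' (ballot for the reported winner) or
                     'l' (for the reported loser), drawn uniformly at random.
    reported_winner_share: winner's reported share of the two-candidate vote.
    Returns True once the sample gives strong enough statistical evidence
    (test statistic >= 1/risk_limit) that the reported winner really won.
    """
    t = 1.0  # sequential likelihood ratio: H0 = exact tie, H1 = reported share
    for ballot in sampled_ballots:
        if ballot == 'w':
            t *= reported_winner_share / 0.5
        else:
            t *= (1 - reported_winner_share) / 0.5
    return t >= 1 / risk_limit

# With a reported 60/40 split and a 5% risk limit, a random sample of
# 17 straight winner ballots already confirms the outcome.
print(bravo_audit(['w'] * 17, 0.60))  # True
print(bravo_audit(['w'] * 10, 0.60))  # False: the sample is still too small
```

The key property this sketch shows is why such audits are efficient: the wider the reported margin, the smaller the random sample needed, which is how a statewide audit can examine tens of thousands of ballots rather than millions and still bound the risk of confirming a wrong outcome.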
2. Maricopa County’s multi‑layer forensic reviews: independent labs found no vote switching
Maricopa County, Arizona, subjected its Dominion tabulation equipment to multiple forensic examinations by two U.S. Election Assistance Commission (EAC)-certified Voting System Test Laboratories and a certified public accounting firm. The labs produced bit-by-bit clones of devices, scanned event and audit logs with multiple malware tools, and concluded that the tabulation systems were secure and that the equipment tabulated and adjudicated ballots accurately, reporting no evidence of vote-switching or tampering [3] [4] [7]. Those audits were paired with hand recounts and other checks that did not change the outcome; even later partisan reviews (e.g., the Cyber Ninjas effort) ultimately produced tallies consistent with the official count, though their methods and governance were widely criticized [3] [8] [9].
3. State‑led “forensic” initiatives that were more investigative than technical
Several states announced forensic probes that mixed routine machine and cybersecurity testing with wider document or voter-eligibility reviews. Texas’s announced “forensic audit” described phases including accuracy testing of machines and cybersecurity checks, activities that state law already required of counties, and officials framed the process as addressing voter confidence; reporting around these efforts also stressed that prior audits and legal cases had found no evidence of widespread fraud [10] [11]. Pennsylvania saw legislative pressure for a forensic investigation, and individual county probes led to scrutiny and, in one case, decertification of a single county’s equipment after unauthorized third-party access; the statewide risk-limiting and county audits, however, had not found widespread fraud [6] [12].
4. Findings, patterns, and the limits of public forensic audits
Across the publicly documented forensic and audit efforts in 2020 and 2021, independent and state‑administered reviews repeatedly found machine counts accurate, detected no systemic vote‑switching, and produced no evidence sufficient to change certified outcomes [1] [3] [4] [2]. That conclusion is consistent across EAC‑certified lab reports, risk‑limiting audits, recounts, and many county audits [3] [2] [1]. Nevertheless, some partisan or ad hoc audits—especially those run without standard accreditation or transparent methods—drew criticism for process flaws, nonstandard procedures, or political motivations even when their eventual tallies aligned with official results [9] [8].
5. What public records show and what they do not
Publicly available audit reports and official statements document the procedures used (bit-for-bit clones, malware scans, tabulator re-tabulations, hand counts) and consistent findings of accuracy in the reviewed jurisdictions. The record is nevertheless uneven: some high-profile requests involved third parties or legislative subpoenas that raised legal and procedural concerns, and election officials, citing security guidance from federal partners and critical-infrastructure protections, declined some demands for “random opening” of machines or third-party inspection [1] [5] [12]. The reporting assembled here reflects those public documents; where assertions extend beyond the sourced material, this account notes the limitation rather than speculating.