Fact check: How have protest sizes been measured and estimated in the US since 2020?
Executive Summary
Crowd-size estimates in the US since 2020 come from a mix of manual observation, venue-based math, law enforcement and organizer reports, academic aggregators, and emerging AI and drone techniques, and those methods routinely produce divergent totals because they rely on different inputs and incentives. Recent work by the Crowd Counting Consortium and method-focused research shows that the disagreements are as much technical as political: they reflect distinct data sources and trade-offs rather than a single correct number [1] [2] [3].
1. Why crowd numbers keep sparking fights — the anatomy of disputed totals
Estimating attendance is fundamentally an exercise in imperfect data and choice of method, so wide ranges are common rather than exceptional. Researchers point out that the same event can produce estimates ranging from tens of thousands to several times that number because observers use different baselines — aerial imagery versus ground-based samples, venue capacity versus instantaneous density — and apply different assumptions about how long people stay in place [2] [4]. Those methodological choices matter: for instance, a density assumption of 1.5 people per square meter versus a lower figure changes totals dramatically, a fact police in other countries explicitly acknowledge when they supply counts to journalists [5]. The result is multiple defensible but incompatible numbers, which become tools in political narratives when organizers or opponents emphasize the figure that best serves their story [6] [3].
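To make that sensitivity concrete, the sketch below (a minimal Python example using a hypothetical 40,000 m^2 occupied area that is not drawn from any cited event) applies the same venue-based arithmetic under three density assumptions, including the 1.5 people-per-square-meter figure mentioned above.

```python
# Minimal sketch: how the assumed crowd density changes a venue-based estimate.
# The 40,000 m^2 occupied area is hypothetical, not taken from any cited event.

occupied_area_m2 = 40_000  # area actually filled by the crowd (assumed)

density_assumptions = {
    "loose crowd (0.5 people/m^2)": 0.5,
    "moderate crowd (1.0 people/m^2)": 1.0,
    "dense crowd (1.5 people/m^2)": 1.5,  # the assumption cited in the text above
}

for label, density in density_assumptions.items():
    estimate = occupied_area_m2 * density
    print(f"{label}: ~{estimate:,.0f} people")
```

For the same footprint, the totals span roughly 20,000 to 60,000 people, a threefold spread produced entirely by the density choice, before any disagreement about how large the occupied area actually was.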
2. Three families of counting methods — strengths, limits, and recent innovations
Practitioners categorize approaches into manual visual estimation, computational analysis of imagery, and sensor-based or venue-based arithmetic, and each family has clear trade-offs. Manual or journalist-led counts rely on human judgment and context but are prone to bias and limited field of view; computer vision and AI applied to drone or satellite imagery offer higher spatial coverage and reproducibility but depend on image quality and trained models [2] [7]. Venue- and capacity-based methods — multiplying surface area by an assumed density — are mathematically simple and transparent, appealing to researchers who stress that crowd counting is “basic math,” but they require defensible density choices and fail on nonuniform gatherings [4] [5]. New AI tools demonstrated in international cases reduced some overestimates by automating counts from high-resolution imagery, showing how technology can shift debates when deployed rigorously [7].
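As an illustration of how the manual and imagery-based families often proceed in practice, the sketch below implements a simple grid-sampling estimate in the spirit of the classic Jacobs method: count heads in a few sampled cells of known size, average the density, and extrapolate to the full occupied area. All counts and areas here are invented for illustration.

```python
# Illustrative grid-sampling estimate (in the spirit of the Jacobs method):
# sample density in a few cells of known size, then extrapolate to the whole
# occupied area. All numbers below are invented for illustration.

from statistics import mean

CELL_AREA_M2 = 25.0                         # each sampled cell is 5 m x 5 m
sampled_head_counts = [31, 44, 27, 38, 35]  # heads counted per sampled cell
occupied_area_m2 = 12_000                   # hypothetical area the crowd covers

density = mean(sampled_head_counts) / CELL_AREA_M2  # people per square meter
estimate = density * occupied_area_m2
low, high = 0.8 * estimate, 1.2 * estimate  # crude +/-20% band for sampling error

print(f"sampled density: {density:.2f} people/m^2")
print(f"point estimate:  ~{estimate:,.0f} people (rough range {low:,.0f}-{high:,.0f})")
```

The value of writing the calculation out this way is that every input a reader might dispute (cell size, which cells were sampled, the measured occupied area) is explicit and auditable, which is exactly the transparency the "basic math" framing calls for.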
3. Who counts — institutions, incentives, and why that matters to the reported number
Different actors produce and circulate totals for different reasons, and those motives shape which method is used and which numbers get amplified. Protest organizers commonly provide high-end self-reports to highlight mobilization; law enforcement and fire marshals offer operational estimates suited to logistics and safety, sometimes explicitly noting subjectivity; academic or independent consortia triangulate multiple inputs to produce standardized records and often discard partisan self-reports for credibility [3] [5] [1]. The Crowd Counting Consortium’s approach — combining venue capacity, official records, and media reporting while excluding suspect campaign figures — illustrates a governance choice that privileges comparability and skepticism of self-reports [3] [1]. Recognizing those incentives is essential: a number is not neutral metadata but an output shaped by who counted and why [6].
4. What recent US-focused evidence tells us about protest activity and counting since 2020
Empirical compilations show a surge in both the frequency and scrutiny of protests, and the data landscape has grown richer and more contested since 2020. Aggregators like the Crowd Counting Consortium recorded thousands of demonstrations tied to single issues in 2023–24, reflecting an intense bout of mobilization and the need for systematic measurement [1]. At the same time, method-focused studies and reporting have spotlighted recurring mismatches between conventional tallies and algorithmic counts, and international demonstrations of AI-driven tools underscore that improvements in imagery and models can materially revise accepted totals [2] [7]. These patterns mean that counts issued in the immediate aftermath of events are often revised as better imagery or reconciled sources become available, so initial figures should be treated as provisional rather than definitive [2] [3].
5. How to interpret competing estimates — a practical reading sheet for journalists and citizens
When faced with divergent totals, prioritize transparency about method: numbers grounded in named inputs (surface area, assumed density, timestamped imagery) and clear exclusions are more informative than bare, unexplained figures. Prefer triangulated counts from independent aggregators or academic projects that document data provenance and discard self-interested reports, while noting that even those choices reflect normative decisions about reliability [3] [1]. Treat rapid, very large claims, including record-breaking single-day totals, with caution until they are corroborated by image-based counts or third-party aggregation, because both technological and political dynamics can produce large initial errors that later narrow [8] [7]. Understanding crowd-size disputes requires reading numbers as hypotheses backed by particular evidence streams, not as absolute truths.
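One quick consistency check a reader can run, sketched below with hypothetical figures, is to back out the crowd density a claimed total would imply for a site of known size; sustained densities much above roughly 4-5 people per square meter are physically implausible for a standing crowd, so claims that imply them warrant extra scrutiny.

```python
# Sanity check: what density would a claimed headcount imply for a site of
# known size? All figures here are hypothetical.

def implied_density(claimed_headcount: int, site_area_m2: float) -> float:
    """People per square meter the claim implies if everyone fit on the site."""
    return claimed_headcount / site_area_m2

site_area_m2 = 50_000  # hypothetical usable area of the protest site
claims = {
    "organizer figure": 250_000,
    "police figure": 80_000,
}

for source, headcount in claims.items():
    d = implied_density(headcount, site_area_m2)
    # Sustained densities much above ~4-5 people/m^2 are implausible for a
    # standing crowd, so claims that imply them deserve extra scrutiny.
    verdict = "implausible" if d > 4.5 else "plausible"
    print(f"{source}: {headcount:,} people -> {d:.1f} people/m^2 ({verdict})")
```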