Fact check: How are protest attendance numbers typically calculated and verified?
Executive Summary
Protest attendance is commonly estimated using aerial photography combined with software analysis, a method documented in September 2025 counts that produced estimates around 41,800–42,400 people with an explicitly reported ±12% margin of error [1] [2] [3]. Independent monitors and news outlets reported nearly identical figures for different demonstrations in São Paulo and Rio de Janeiro, underscoring the method’s growing use and transparency about uncertainty; however, contemporaneous reporting also flags emerging challenges from AI image manipulation and alternative sensor-based models that complicate verification [1] [4] [5].
1. How the dominant method actually works — aerial images plus software that assigns people-per-square-meter
Researchers and monitors routinely use high-resolution aerial photographs and stitched mosaics, then apply software to segment the crowd and estimate density across mapped areas; this yields a total by multiplying each area by its local density and summing across the site, as in the University of São Paulo monitor’s September 2025 counts for São Paulo and Rio [1] [2]. The reports state a formal margin of error—12% in these cases—derived from validation exercises and model assumptions, which produces reported ranges (for example, roughly 37,300–47,500 around a point estimate of 42,379) and signals that monitors are quantifying uncertainty rather than presenting single-point claims [2].
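The area-times-density arithmetic described above can be sketched in a few lines. This is an illustrative simplification, not the USP monitor's actual software; the zone areas and densities below are hypothetical values chosen so the total lands near the reported figures.

```python
def estimate_crowd(zones):
    """Sum area (m^2) times local density (people/m^2) across mapped zones.

    Each zone is a (area_m2, people_per_m2) pair, as would be measured
    from a segmented aerial mosaic.
    """
    return sum(area * density for area, density in zones)

# Hypothetical zones: a tightly packed core, the main body, sparse edges.
zones = [
    (3000.0, 4.0),    # core: ~4 people/m^2
    (9000.0, 2.5),    # main body
    (15000.0, 0.5),   # sparse fringe
]

print(estimate_crowd(zones))  # 12000 + 22500 + 7500 = 42000.0
```

In practice the segmentation step, not the summation, is where the hard work and most of the error lives: density must be inferred per grid cell from imagery with varying angles and occlusion.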
2. Why margins of error matter — transparency versus overconfidence in crowd figures
The monitors’ practice of publishing a ±12% margin of error illustrates an effort to be transparent about limits: it acknowledges variability from image angle, occlusions, and density heterogeneity across a demonstration site [1]. A margin of that size means a headline figure of about 42,000 could plausibly represent anywhere from roughly 37,000 to 47,500 people; this constrains how confidently organizers, police, or media can claim that one turnout definitively exceeded another. Reporting ranges helps prevent misinterpretation of close counts as definitive wins, but it also requires that consumers of the figures understand statistical uncertainty, and that understanding is uneven across outlets and audiences [2].
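The consequence of a ±12% margin is easy to verify numerically: two headline figures whose ranges overlap cannot be confidently ranked. A minimal sketch, using the article's reported point estimates (the comparison itself is illustrative):

```python
def error_range(point, margin=0.12):
    """Return the (low, high) interval implied by a relative margin of error."""
    return point * (1 - margin), point * (1 + margin)

def ranges_overlap(a, b):
    """True when two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

sp = error_range(42379)    # -> (37293.52, 47464.48), i.e. ~37.3k-47.5k
rio = error_range(41800)
print(ranges_overlap(sp, rio))  # True: the two counts are not distinguishable
```

Overlapping intervals mean a 600-person gap between point estimates is well within noise; only differences larger than the combined uncertainty support claims of a bigger crowd.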
3. Multiple independent outlets converged on the same aerial-software approach in September 2025
Brazilian outlets and monitors published near-identical estimates for the same events in late September 2025 using the USP monitor approach, with O Globo and Poder360 both citing USP-derived counts around 42k for São Paulo and Copacabana respectively [2] [3]. This convergence shows both the method’s adoption by national monitors and media reliance on institutional counts rather than raw police or organizer numbers. Convergence increases confidence that the method was applied consistently, but it does not eliminate systemic biases that could affect all users of the same methodology, such as consistent undercounting in heavily occluded areas or overestimation where foliage or objects are misclassified as people [1] [3].
4. New technological threats: AI-generated crowds and image manipulation raise verification stakes
Independent reporting in October 2025 warned that AI can synthesize convincing crowd scenes, creating a new layer of risk for visual verification of attendance and for automated counting systems that rely on image inputs [4] [6]. When AI-generated imagery or altered footage circulates, counts based on single images risk being deceived; monitors therefore must triangulate with multiple timestamps, original sensor metadata, or alternate data streams. The reporting calls for industry-wide standards for labeling AI-generated content and for best practices in archival and provenance checks to preserve the evidentiary value of aerial sources [6].
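One simple form of the triangulation the reporting calls for is cross-checking counts taken at multiple timestamps and flagging any that deviate sharply from the rest. The heuristic and threshold below are hypothetical illustrations, not a published verification protocol; real provenance checks also involve sensor metadata and archival comparison.

```python
from statistics import median

def flag_outliers(counts, tolerance=0.25):
    """Flag timestamps whose count deviates more than `tolerance` (relative)
    from the median of estimates taken across the event.

    A large deviation does not prove manipulation, but it marks an image
    that warrants a provenance check before its count is trusted.
    """
    m = median(counts.values())
    return [t for t, c in counts.items() if abs(c - m) / m > tolerance]

# Hypothetical counts from three aerial passes over the same demonstration.
counts = {"14:00": 41800, "14:30": 42400, "15:00": 98500}
print(flag_outliers(counts))  # ['15:00'] -- inconsistent with the other passes
```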
5. Alternative and complementary methods — sensors and mathematical models provide cross-checks
Beyond aerial imagery, researchers have explored people-counting sensors and mathematical models to estimate crowding in constrained environments, demonstrating potential cross-application to protests and mass gatherings [5]. Sensor networks, turnstile-style counters at entry points, cell-tower aggregation, and transport ridership figures can triangulate aerial estimates and reduce dependence on a single data type. The airport-focused mathematical models discussed in November 2025 show how sensor fusion and model-based inference can provide tighter error bounds or at least independent validation, though each alternative also carries its own assumptions and privacy considerations [5].
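One standard way to combine independent estimates into tighter bounds, as the sensor-fusion models above suggest, is inverse-variance weighting. This is a generic statistical technique, not the method of any cited study, and the input numbers are hypothetical; treating the ±12% margin as an approximate 95% interval is also an assumption made here for illustration.

```python
def fuse(estimates):
    """Inverse-variance weighted combination of independent (mean, std) estimates.

    The fused standard deviation is always smaller than any input's,
    which is why independent data streams tighten error bounds.
    """
    weights = [1 / s**2 for _, s in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    std = (1 / sum(weights)) ** 0.5
    return mean, std

aerial = (42000, 42000 * 0.12 / 1.96)  # +/-12% read as a ~95% interval
transit = (40000, 4000)                # e.g. excess ridership, wider bounds
fused_mean, fused_std = fuse([aerial, transit])
# fused_std < both inputs' std; the fused mean lies between the two estimates
```

The caveat is that the formula assumes the estimates are independent and unbiased; if both data streams share a systematic error (say, both miss a shielded plaza), fusion narrows the interval without correcting the bias.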
6. What this means for journalists, organizers, and the public — interpret counts as model outputs, not absolute truths
The documented practice in late 2025 demonstrates that crowd figures should be treated as model outputs with stated uncertainties, not as absolute truths; reporting the methodology, date, and margin of error—as the USP monitor and multiple Brazilian outlets did—improves public understanding and accountability [1] [2]. Given the rise of AI-altered imagery, best practice now requires provenance checks, multi-method triangulation, and clear communication of uncertainty to avoid misleading narratives. Stakeholders should demand methodological transparency and cross-source validation when citing participation numbers to prevent manipulation and to correctly contextualize the political significance of turnout [4] [6].