How do independent open-source investigators (Bellingcat, Oryx) validate or dispute both Russian and Western casualty figures?
Executive summary
Independent open-source investigators validate or dispute casualty claims by assembling verifiable, time-stamped digital evidence (photos, videos, social posts), geolocating incidents and establishing their chronology, and cross-checking against on-the-ground sources and institutional tallies; Bellingcat reports hundreds to thousands of verified civilian-harm incidents and contributes to crowd-mapped databases, while Oryx documents visually confirmed equipment losses as a conservative, lower-bound record [1] [2] [3]. Coverage is uneven: open-source projects explicitly limit themselves to verifiable items and describe their totals as conservative, while states and international bodies often publish higher, modelled or partial tallies; available sources do not mention a single universal method that converts open-source verification into definitive overall casualty counts [4] [5].
1. How open‑source teams build “court‑grade” evidence
Bellingcat and allied groups collect publicly available photos, videos, satellite imagery and metadata, then geolocate images, check timestamps, analyse weapon signatures and attack vectors, and log each incident into a structured database designed to be defensible in legal or accountability contexts — a process Bellingcat says was tested in a mock hearing and is intended for future prosecutions [1] [4] [6].
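To make the logging step concrete, the sketch below shows one way such a structured incident record could look before it enters a database; the field names and the admissibility check are illustrative assumptions for this article, not Bellingcat's actual schema or criteria.

```python
# Hypothetical structure for a verified-incident record; fields and the
# logging rule are assumptions for illustration, not Bellingcat's schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class IncidentRecord:
    incident_id: str
    source_urls: List[str]                   # original uploads (social posts, video links)
    captured_at: Optional[datetime] = None   # timestamp recovered from metadata or corroboration
    latitude: Optional[float] = None         # geolocated coordinates, if established
    longitude: Optional[float] = None
    geolocation_notes: str = ""              # landmarks, satellite comparison, shadow analysis
    munition_assessment: str = ""            # weapon signature, crater or fragmentation pattern
    corroborating_sources: List[str] = field(default_factory=list)

def is_loggable(rec: IncidentRecord) -> bool:
    """Only log a record whose core evidence is verifiable: a geolocation,
    a defensible timestamp, and at least two independent sources."""
    has_location = rec.latitude is not None and rec.longitude is not None
    has_timestamp = rec.captured_at is not None
    independent_sources = len(set(rec.source_urls) | set(rec.corroborating_sources)) >= 2
    return has_location and has_timestamp and independent_sources
```

The point of a gate like this is that unverifiable tips are held back rather than silently counted, which is why the resulting databases stay conservative.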
2. Conservative, verifiable vs. modelled totals: two different aims
Oryx’s tally of equipment losses is explicitly “visually confirmed”: each item on its lists is documented with imagery, so the totals are lower bounds rather than full attrition estimates [3] [7]. By contrast, some Western government estimates and academic aggregates use models, sampling and classified intelligence to produce higher casualty figures; The Economist cited Oryx’s equipment counts while combining them with many other sources to estimate force impact [5].
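The gap between the two kinds of totals can be shown with a toy calculation: a visually confirmed count is a hard floor, while a modelled total scales that floor by an assumed share of losses that ever get photographed. Every number below is invented for illustration.

```python
# Toy illustration only: all figures are invented.
visually_confirmed = 3000  # items documented with imagery (a lower bound)

for assumed_documentation_rate in (0.9, 0.7, 0.5):
    modelled_total = visually_confirmed / assumed_documentation_rate
    print(f"If {assumed_documentation_rate:.0%} of real losses are ever photographed, "
          f"the true total would be roughly {modelled_total:,.0f}")
```

The modelled figure moves a great deal with the assumed documentation rate, which is exactly the assumption open-source projects decline to make and modellers must defend.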
3. Triangulation: geolocation, trajectory and forensics to attribute responsibility
Investigators use trajectory analysis, impact patterns and ordnance identification to attribute strikes to a particular weapon system or launch direction, then combine that with unit signatures and other open data to assign likely responsibility — an approach Bellingcat used to show cluster munition use and to identify missile origins and operators [8] [2].
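One small, checkable piece of that triangulation can be expressed as arithmetic: compare the great-circle bearing from a geolocated impact site toward a candidate launch area with the direction of fire inferred from the impact pattern. The coordinates, observed azimuth and tolerance below are hypothetical.

```python
# Minimal sketch of one triangulation check; all values are hypothetical.
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 toward point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

impact = (50.45, 30.52)            # geolocated impact site (hypothetical)
candidate_launch = (51.50, 31.30)  # suspected launch area (hypothetical)
observed_azimuth = 27.0            # direction of fire estimated from the impact pattern

bearing = initial_bearing(*impact, *candidate_launch)
consistent = abs((bearing - observed_azimuth + 180) % 360 - 180) <= 10  # 10-degree tolerance
print(f"bearing to candidate area: {bearing:.1f} deg, consistent with impact pattern: {consistent}")
```

A single consistent bearing does not prove attribution; investigators look for many such checks (ordnance remnants, unit insignia, launch footage) pointing the same way.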
4. Limits imposed by the “open” in open‑source
Open‑source methods require visible evidence; they cannot count casualties hidden by battlefield chaos, unreported deaths, or classified records. Bellingcat and similar projects therefore record incidents they can verify and explicitly warn that their recorded incidents do not equal the total death toll — these datasets are intentionally conservative and subject to undercounting [9] [4].
5. Crowd contributions, vetting, and error control
Crowdsourced maps like the Russia‑Ukraine Monitor Map harness many contributors but then require researcher vetting before inclusion; Bellingcat and the Centre for Information Resilience stress vetting, metadata checks and community moderation so that crowd tips become evidence only after verification [10] [11].
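As a rough sketch of that gatekeeping logic, the example below publishes a crowd-submitted tip only after researcher checks pass; the specific checks and their names are assumptions for illustration, not the Centre for Information Resilience's actual workflow.

```python
# Illustrative vetting gate for crowd tips; check names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrowdTip:
    url: str
    has_original_metadata: bool         # original upload traced, not a re-shared screenshot
    geolocation_verified: bool          # researcher matched landmarks or satellite imagery
    duplicate_of: Optional[str] = None  # ID of an already-mapped incident, if any

def vet(tip: CrowdTip) -> str:
    if tip.duplicate_of:
        return f"merge into existing entry {tip.duplicate_of}"
    if not tip.has_original_metadata:
        return "hold: trace the original upload first"
    if not tip.geolocation_verified:
        return "hold: needs independent geolocation"
    return "publish to map"

print(vet(CrowdTip(url="https://example.org/post/123",
                   has_original_metadata=True,
                   geolocation_verified=True)))
```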
6. Cross‑checking with institutional tallies and human rights bodies
Open‑source investigators compare their incident logs to UN/OHCHR tallies, NGO reports and national lists; for example, Bellingcat contextualised its TimeMap entries alongside UN verified civilian casualty figures to highlight patterns while noting each source’s scope and limitations [12] [1]. International bodies often publish verified civilian casualties but note their own incompleteness, which investigators factor into interpretation [13] [14].
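A minimal example of that cross-checking, with invented figures, simply lines up an incident log against an institutional monthly tally and compares the two series for pattern rather than equating them, since they measure different things (verified incidents vs. verified casualties).

```python
# Toy comparison; all figures are invented.
from collections import Counter

# month of each incident in a hypothetical open-source log
osint_incidents = ["2022-03", "2022-03", "2022-03", "2022-04", "2022-04"]
# institutionally verified civilian casualties per month (invented)
institutional_tally = {"2022-03": 1200, "2022-04": 840}

incidents_per_month = Counter(osint_incidents)
for month in sorted(institutional_tally):
    print(f"{month}: {incidents_per_month.get(month, 0)} verified incidents logged "
          f"vs {institutional_tally[month]} institutionally verified casualties")
```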
7. Disputes and counterclaims: how investigators rebut state narratives
When states deny incidents or offer alternate timelines, open‑source teams publish chronological, geolocated evidence — as Bellingcat did in Bucha — to directly contradict governmental claims by showing timestamps, repeated imagery and consistency across independent uploads [15]. These public packages are meant to invite independent scrutiny rather than be presented as incontrovertible final totals.
8. Complementary projects, AI and named‑lists for personnel losses
Independent media and research groups (e.g., Mediazona, IStories, BBC Russian projects) build named databases of personnel deaths using document scraping, local reporting and increasingly AI tools to accelerate matching and de‑duplication — useful for identifying individuals but still constrained by access, secrecy and verification limits [16] [17] [18].
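The de-duplication step these named-list projects describe can be sketched as a simple normalize-and-merge pass; real projects use far fuzzier matching plus human review, and the records below are invented.

```python
# Sketch of de-duplicating named casualty records from different outlets;
# names and records are invented for illustration.
import unicodedata

def normalize(name: str) -> str:
    """Strip accents and collapse case and word order so 'IVANOV Ivan'
    and 'Ivan Ivanov' produce the same comparison key."""
    ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return " ".join(sorted(ascii_name.lower().split()))

records = [
    {"name": "Ivan Ivanov", "birth_year": 1995, "source": "outlet A"},
    {"name": "IVANOV Ivan", "birth_year": 1995, "source": "outlet B"},
    {"name": "Petr Petrov", "birth_year": 1987, "source": "outlet A"},
]

merged = {}
for rec in records:
    key = (normalize(rec["name"]), rec["birth_year"])
    merged.setdefault(key, []).append(rec["source"])

print(f"{len(records)} raw records -> {len(merged)} unique individuals")
```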
9. What reporters and policymakers should take from the differences
Open-source counts are trustworthy for documented, verifiable incidents and for conservative lower bounds (visually confirmed equipment losses or named deaths), while modelled or classified estimates can produce much larger totals but rest on assumptions not visible to the public. A responsible judgment uses both: open-source documentation to test specific claims, and modelled estimates to understand scale, always noting the methodological gaps each approach leaves open [5] [4].
Limitations: available sources do not mention a single, unified methodology that turns open verification into complete casualty tallies; all cited projects warn their figures are incomplete and conservative [4] [3].