
How do independent databases (e.g., Mapping Police Violence, The Guardian) compare to FBI and WaPo on racial patterns?

Checked on November 19, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive summary

Independent databases such as Mapping Police Violence (MPV), along with projects compiled by newsrooms (The Washington Post) and nonprofits, generally record more incidents and broader types of force than FBI records. Several studies using MPV-like datasets report large Black–white disparities (e.g., Black people roughly 3.2 times more likely to be killed by police in some metro analyses), while federal counts are known to undercount by a large margin because reporting to the FBI is voluntary, with under-reporting exceeding half of cases in some studies [1] [2] [3] [4]. Coverage is uneven across projects: MPV includes non-shooting lethal encounters and off-duty incidents, WaPo focuses on on-duty fatal shootings, and the FBI's Uniform Crime Reporting (UCR) system relies on voluntary submissions and misses many cases [1] [5] [4].

1. Why independent projects exist: gaps in federal data

Researchers and journalists created independent databases because federal systems miss many killings and lack consistent force data. The FBI's UCR relies on voluntary local reporting and has documented undercounts, prompting newsrooms and academics to assemble their own records from media reports, public records, and local sources [4] [3] [1]. The result: non-government datasets attempt to fill a transparency vacuum and capture incidents the federal system omits [1] [4].

2. Differences in definitions drive different tallies

Mapping Police Violence aims to count all police killings, including deaths from chokeholds, tasers, and baton strikes, as well as off-duty incidents. The Washington Post's Fatal Force project, by contrast, records only people fatally shot by on-duty officers, a narrower class of incidents, so MPV will usually report higher totals when non-shooting deaths are included [1] [5]. The FBI's categories and voluntary submission process further complicate comparisons because not all departments report and racial/ethnic coding conventions differ [4] [6].
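The effect of inclusion rules on totals can be sketched with a small example. The records and field names below are hypothetical and illustrative; they are not the actual schema or contents of MPV, WaPo's Fatal Force, or the FBI's UCR.

```python
# Hypothetical incident records; fields are invented for illustration only.
incidents = [
    {"cause": "shooting", "on_duty": True},
    {"cause": "shooting", "on_duty": False},  # off-duty shooting
    {"cause": "taser", "on_duty": True},      # non-shooting lethal force
    {"cause": "restraint", "on_duty": True},  # non-shooting lethal force
    {"cause": "shooting", "on_duty": True},
]

# MPV-style rule: count all police killings, regardless of cause or duty status.
mpv_style_total = len(incidents)

# WaPo-style rule: count only fatal shootings by on-duty officers.
wapo_style_total = sum(
    1 for i in incidents if i["cause"] == "shooting" and i["on_duty"]
)

print(mpv_style_total)   # 5
print(wapo_style_total)  # 2
```

The same underlying events yield different totals purely because of the inclusion rule applied, which is why MPV's counts usually exceed WaPo's.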

3. How racial pattern estimates vary by dataset

Studies using MPV and similar compilations often find substantial Black–white disparities; for example, one metropolitan analysis found Black people roughly 3.23 times more likely to be killed than white people, while some academic work using other benchmarks or incomplete federal data reaches more mixed conclusions [2] [7]. The Washington Post and MPV both report that Black Americans are disproportionately affected in their tracked categories, but the magnitude and statistical significance can shift depending on whether nonfatal incidents are included, which benchmark is used (population vs. crime exposure), and how Hispanic identity is coded [8] [6] [2].

4. Measurement choices and benchmarks matter

Analysts disagree on the right denominator: the general population, arrests, stops, and violent-offender benchmarks each produce different disparity estimates. Some peer-reviewed work using MPV or Fatal Encounters finds clear racial inequities; other studies that control for exposure to police contact or use alternative benchmarks find smaller or non-significant disparities, showing that methodological choices shape conclusions [2] [7] [6].

5. Practical consequences: more incidents, different stories

Because MPV captures non‑shooting lethal force, it surfaces patterns and case types the WaPo shooting database omits (e.g., deaths from restraints or tasers), which can change policy emphasis (use‑of‑force restrictions, restraint bans) and public perception of racial patterns when those cases disproportionately involve particular groups [1] [9]. Conversely, WaPo’s tightly curated fatal‑shooting dataset is reproducible and widely used for time‑series tracking of shootings specifically [5] [10].

6. Critiques, errors and the need for triangulation

Independent and media databases are not immune to errors: critics have flagged factual mistakes and omitted context in high-profile datasets, and researchers note that any single dataset may omit incidents. Scholars therefore recommend triangulating multiple sources and acknowledging limitations rather than treating any one catalog as definitive [11] [2] [1].

7. Bottom line for readers and policymakers

Available reporting shows that independent databases (MPV and other nonprofit mappings) generally count more incidents and broader types of lethal force than the FBI, and sometimes more than WaPo. Analyses using those datasets often find pronounced Black–white disparities (roughly 3.2× in some metro studies), but estimates vary with inclusion rules and benchmark choices, so assessing racial patterns requires comparing multiple datasets and being explicit about definitions and denominators [1] [2] [3] [4].

Limitations: available sources do not mention a single unified “truth” dataset; instead they document disagreement among datasets, under‑reporting by federal systems, and methodological debates that drive different conclusions [3] [4] [2].

Want to dive deeper?
How do methodologies differ between independent databases and the FBI/WaPo when tracking police killings?
What racial disparities in police violence emerge when comparing Mapping Police Violence, The Guardian, the FBI, and WaPo datasets?
How have reporting gaps and undercounts by the FBI affected racial pattern analysis over time?
Which dataset best accounts for race and multiracial identities, and how does that affect conclusions about racial patterns?
How do trends since 2013 (or the last decade) differ across databases in rates of police killings by race and location?