What data exist on use‑of‑force incidents by ICE and Border Patrol across administrations?
Executive summary
Available data on use of force by Immigration and Customs Enforcement (ICE) and Border Patrol (a component of Customs and Border Protection, CBP) are fragmented: agencies publish some statistics and policies, but watchdogs, journalists and GAO audits show gaps, undercounts and inconsistent definitions that make cross-administration comparisons difficult [1] [2] [3]. Independent news organizations and project databases have compiled incident lists and documented patterns, especially for shootings and vehicle-related force, but they rely on public reporting, FOIA records and agency disclosures that are incomplete [4] [5] [6] [7].
1. The official data that exist: dashboards, policy documents and agency reporting
CBP maintains a public “Assault and Use of Force” dashboard that counts uses of force as discrete incidents rather than per-person actions, and it notes that figures can change pending review [1]. ICE posts enforcement FAQs asserting that its officers use “reasonable and necessary force” when someone resists arrest, but it does not publish a comprehensive public use-of-force dataset comparable to CBP’s dashboard [8] [2]. DHS and its component agencies have issued use-of-force policies over time, which legal analysts have compiled to show how the rules evolved, but some agency-level policies (including ICE’s) are not fully public, according to the GAO [9] [2].
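To see why the counting convention matters, consider a minimal sketch in Python. The records are invented and the field names are assumptions for illustration, not CBP’s actual schema; it only shows how the same encounters yield different totals depending on whether one counts discrete incidents or per-person uses of force:

```python
# Hypothetical illustration only: records are invented and the field names
# ("incident_id", "subjects") are assumptions, not CBP's actual schema.
encounters = [
    {"incident_id": "A", "subjects": 3},  # force used against three people in one encounter
    {"incident_id": "B", "subjects": 1},
    {"incident_id": "C", "subjects": 2},
]

# CBP's dashboard convention: each encounter counts once.
discrete_incidents = len(encounters)

# Alternative convention: count each person subjected to force.
per_person_actions = sum(record["subjects"] for record in encounters)

print(discrete_incidents)   # 3
print(per_person_actions)   # 6
```

Two trackers applying these different conventions to identical events would report totals that diverge substantially, which is one reason the inconsistent definitions noted above frustrate cross-dataset and cross-administration comparisons.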
2. Independent compilations and media tracking fill gaps but are uneven
Investigative outlets and projects have assembled incident-level lists: NBC and The Marshall Project documented multiple fatal and nonfatal shootings by ICE and Border Patrol agents in recent periods, counting several deadly incidents within a matter of months [4] [5]. The Trace and ProPublica have documented broader patterns, such as increases in shootings and vehicle-related force and hundreds of reported instances of agents breaking vehicle windows, based on news reports, court filings and public records [3] [6]. Local outlets and aggregators (e.g., MS NOW, ICE List Wiki) have tracked clusters of shootings and personnel data that agencies did not centrally disclose, but these compilations depend on media coverage, public records and crowd-sourced entries [7] [10].
3. What audits and watchdogs say about data quality and systemic undercounting
A Government Accountability Office review found that DHS undercounted use-of-force incidents across its components and had no plan to analyze use-of-force data comprehensively, structural weaknesses that undermine cross-agency and cross-administration comparisons [3] [2]. Journalistic reconstructions have found discrepancies between DHS accounts and available footage or documents in specific incidents, suggesting official tallies can omit relevant context even when a use-of-force event is reported [11].
4. Patterns, contested narratives and alternative interpretations
Reporting shows clusters of vehicle-related shootings, with agents firing at drivers in multiple incidents: one outlet tracked at least 15 such shooting incidents over a recent span, and ProPublica noted a spike in agents breaking car windows compared with the prior decade, evidence of increased force in certain operations though not a complete census [7] [6]. DHS and agency spokespeople typically frame individual shootings as defensive actions in which officers feared for their lives, a position reflected in agency statements and some subsequent federal reviews [4]. Independent reporting and local officials sometimes dispute those accounts after reviewing video or documents, creating competing narratives about necessity and restraint [5] [11].
5. Cross‑administration comparisons: possible signals, not airtight trends
Media and watchdog analyses suggest escalations in certain forms of force under aggressive enforcement directives, for example an uptick in window-breaking incidents and interior operations tied to specific administration policies, but definitive attribution across administrations is hampered because datasets use different definitions, some agency policies are not public, and the GAO found systemic undercounting [6] [9] [2]. Where journalists have compared eras, they rely on incident compilations and policy changes rather than a single authoritative longitudinal dataset; observed increases are plausible and documented in reporting, but they cannot be quantified with complete confidence from publicly available official sources [3] [5].
6. Implications and reporting limitations
The existing evidence base mixes partial official dashboards, agency policy statements, GAO findings and independent incident trackers. Together these indicate real concerns about underreporting, inconsistent classification and localized spikes in force, especially shootings and vehicle-related tactics, but there is no single, transparent, agency-wide dataset that lets researchers definitively measure use-of-force trends across administrations [1] [2] [3]. Reporting limitations include redacted or unavailable agency policies, reliance on media coverage and FOIA records for incident lists, and disputes between DHS accounts and independent evidence in high-profile cases, all of which counsel caution in drawing categorical conclusions [11] [10].