How do Flock’s software sharing settings (PD‑to‑PD, radius, statewide) work in practice and what audit controls exist for cross‑jurisdictional searches?
Executive summary
Flock’s sharing model lets local agencies choose among one‑to‑one (PD‑to‑PD), geographic‑radius, statewide, and nationwide lookup connections, with customers retaining technical control over who can query their cameras and for how long; the company also says it provides audit reports, search filters, and newer options such as case‑number requirements and federal‑user labeling to support oversight [1] [2] [3]. In practice, investigative reporting and advocacy audits show those controls exist on paper but can be porous in operation: audit logs reveal extensive cross‑jurisdictional searches, ambiguous stated reasons, and disputes over the completeness and integrity of the logs themselves, all of which limit meaningful accountability [4] [5] [6].
1. How the sharing settings are supposed to work: PD‑to‑PD, radius, statewide, nationwide
Flock describes a customer‑controlled architecture in which agencies explicitly configure sharing relationships: 1:1 "network" shares with another department, opening data to agencies within a set geographic radius (for example, 10 miles), or enabling statewide or nationwide lookups. Flock says those configurations are set and enforced by the owning agency, not centrally by the company [1] [2]. Flock also states that federal agencies will be marked as a distinct "Federal" user category and, per recent commitments, that federal users will not be automatically added to statewide or nationwide lookup, giving locals clearer control [7] [3]. Municipal policies vary: some jurisdictions log and approve sharing requests through supervisors and claim strict internal controls [8].
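To make the owner-side configuration model concrete, here is a minimal sketch, assuming a simplified data model of my own invention (the names `ShareScope`, `SharingPolicy`, `may_query`, and all fields are hypothetical, not Flock's actual API or schema). The point it illustrates is that the owning agency's settings, not the requester's, decide whether a lookup is permitted, and that federal access is a separate opt-in rather than something inherited from statewide or nationwide scope.

```python
from dataclasses import dataclass, field
from enum import Enum
import math


class ShareScope(Enum):
    NONE = "none"
    PD_TO_PD = "pd_to_pd"    # explicit 1:1 network shares
    RADIUS = "radius"        # agencies within N miles
    STATEWIDE = "statewide"
    NATIONWIDE = "nationwide"


@dataclass
class Agency:
    name: str
    state: str
    lat: float
    lon: float
    is_federal: bool = False


@dataclass
class SharingPolicy:
    """Owner-side configuration; the owning agency, not the vendor, sets these."""
    scope: ShareScope
    radius_miles: float = 0.0
    partners: set[str] = field(default_factory=set)  # names for PD_TO_PD shares
    allow_federal: bool = False  # federal access is opt-in, never inherited


def distance_miles(a: Agency, b: Agency) -> float:
    # Rough equirectangular approximation; adequate for a sketch.
    dlat = math.radians(b.lat - a.lat)
    dlon = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
    return 3959.0 * math.hypot(dlat, dlon)


def may_query(owner: Agency, policy: SharingPolicy, requester: Agency) -> bool:
    """Evaluate whether `requester` may search `owner`'s cameras."""
    if requester.is_federal and not policy.allow_federal:
        return False  # federal users are not auto-added to statewide/nationwide
    if policy.scope is ShareScope.NATIONWIDE:
        return True
    if policy.scope is ShareScope.STATEWIDE:
        return requester.state == owner.state
    if policy.scope is ShareScope.RADIUS:
        return distance_miles(owner, requester) <= policy.radius_miles
    if policy.scope is ShareScope.PD_TO_PD:
        return requester.name in policy.partners
    return False


# Example: a PD-to-PD share does not expose data statewide.
owner = Agency("Springfield PD", "WA", 47.6, -122.3)
policy = SharingPolicy(ShareScope.PD_TO_PD, partners={"Shelbyville PD"})
print(may_query(owner, policy, Agency("Shelbyville PD", "WA", 47.7, -122.2)))   # True
print(may_query(owner, policy, Agency("Capital City PD", "WA", 47.0, -122.9)))  # False
```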
2. What auditing and logging Flock provides — intent versus limits
Flock asserts that every search requires a stated reason and is recorded in audit logs visible in an agency’s "network audit," designed for command staff, elected officials, and community oversight; the company offers audit reports and is rolling out features such as required case numbers and proactive AI alerts that flag unusual activity [2] [1] [3]. Critics and public records obtained by advocates, however, show that audit entries can be vague (e.g., "investigation"), massive in scale (hundreds of thousands of nationwide queries in 30 days), and insufficient to detect misuse, because the logs capture what a searcher typed rather than whether the search was legitimate [4] [6].
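As an illustration of why input-only logging limits oversight, here is a hypothetical audit-entry shape and a naive screening pass (the `AuditEntry` fields, the `VAGUE_REASONS` list, and `flag_entry` are assumptions for the sketch, not Flock's export format). It can catch a bare reason like "investigation" or a missing case number, but a plausible-sounding reason or a reused case number passes untouched.

```python
from dataclasses import dataclass
from datetime import datetime

# Vague reasons of the kind reporting describes; any screening list is illustrative.
VAGUE_REASONS = {"investigation", "investigative", "case", "other", ""}


@dataclass
class AuditEntry:
    timestamp: datetime
    user: str
    agency: str
    reason: str             # free-text reason the searcher typed
    case_number: str | None
    network: str            # e.g. "local", "statewide", "nationwide"


def flag_entry(entry: AuditEntry, require_case_number: bool = True) -> list[str]:
    """Return human-readable flags; an empty list means nothing jumped out.

    Note: this inspects only what the officer typed. A reused case number or a
    plausible reason attached to an illegitimate search produces no flags.
    """
    flags = []
    if entry.reason.strip().lower() in VAGUE_REASONS:
        flags.append("vague or empty reason")
    if require_case_number and not entry.case_number:
        flags.append("missing case number")
    return flags
```

Even a pass like this only narrows the pile for human review; it cannot establish the contextual legitimacy critics say the logs omit.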
3. Real‑world behavior: cross‑jurisdictional searches and “side‑door” access
Multiple investigations document Border Patrol and other federal searches appearing in local Flock audits, and they reveal one‑to‑one sharing relationships that effectively opened local cameras to federal use in some states even when local policy purported to restrict such access, a pattern researchers in Washington state and elsewhere have labeled "side‑door" searches [5] [9]. Flock’s public posture is that federal access must be explicitly granted by local customers and that filters can block prohibited queries, but audit and public‑records evidence indicates that local configurations, human workarounds, and interpretation gaps have allowed cross‑jurisdictional queries tied to sensitive enforcement areas such as immigration and reproductive care [1] [4] [10].
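One way watchdogs have surfaced this pattern is simply by grouping a public-records audit export by the querying agency's home state and federal status. A sketch of that analysis, assuming a CSV export with hypothetical column names (`agency`, `agency_state`, `is_federal`); real exports differ, so the field names would need adapting:

```python
import csv
from collections import Counter


def summarize_outside_queries(audit_csv_path: str, home_state: str = "WA"):
    """Count queries against local cameras by out-of-state and federal users."""
    outside: Counter[str] = Counter()
    federal: Counter[str] = Counter()
    with open(audit_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("is_federal", "").lower() == "true":
                federal[row["agency"]] += 1
            elif row.get("agency_state") != home_state:
                outside[row["agency"]] += 1
    return outside.most_common(10), federal.most_common(10)
```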
4. Technical safeguards and their weaknesses
Technical protections in Flock’s product include query filtering (blocking searches tied to prohibited topics in jurisdictions that require it), encryption and CJIS‑segregated storage for certain data, and forthcoming proactive audit alerts and case‑number enforcement options [1] [11] [3]. Yet experts and watchdogs warn that these filters are easy to circumvent, since officers can enter vague reasons or reuse case numbers, and that AI alerts detect anomalies only after the fact, relying on local agencies to investigate and act, which erodes the preventive promise of these mechanisms [6] [10].
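To see why topic filters are easy to defeat, consider a naive keyword filter of the kind a jurisdiction might require (a sketch under my own assumptions; `PROHIBITED_TERMS` and `passes_reason_filter` are hypothetical, not Flock's implementation). A reason of "immigration status check" is blocked, but "investigation, case 22-1041" sails through even if it serves the same purpose, which is why after-the-fact anomaly alerts still depend on local follow-up.

```python
PROHIBITED_TERMS = ("immigration", "ice detainer", "abortion")  # jurisdiction-specific


def passes_reason_filter(reason: str) -> bool:
    """Block reasons containing prohibited terms; trivially bypassed by omission."""
    lowered = reason.lower()
    return not any(term in lowered for term in PROHIBITED_TERMS)


assert passes_reason_filter("immigration status check") is False
assert passes_reason_filter("investigation, case 22-1041") is True  # opaque but allowed
```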
5. Accountability gaps, competing narratives, and the politics of surveillance
Flock and partnering agencies emphasize local ownership, transparency, and training as the governance model, framing product updates as community‑driven safeguards [2] [8]. Privacy advocates, journalistic investigations, and state audits counter that those same local controls have failed to prevent sensitive cross‑jurisdictional uses and that audit logs can be incomplete, redacted, or difficult to interpret, prompting calls for region‑locked settings, independent audits, and legislative limits on interstate queries [12] [4] [13]. The tension reflects an implicit clash of agendas: vendors and some police departments promote operational utility and interagency cooperation, while advocates highlight civil‑liberties risks when that cooperation crosses legal or normative boundaries [14] [6].