What protocols does the Israeli military follow to verify targets in Gaza?

Checked on January 3, 2026

Executive summary

The Israeli military combines traditional human vetting with newer digital tools: "target banks," phone tracking, and AI decision-support systems such as "Lavender," "The Gospel," and "Where's Daddy?". Reporting shows those systems were relied on more heavily after October 7, 2023, while customary safeguards were relaxed, raising legal and accuracy concerns [1] [2] [3]. Human Rights Watch and investigative outlets describe a mix of automated scoring, mass-surveillance inputs, and abbreviated human review; the IDF publicly rejects claims that AI autonomously selects targets and asserts that it operates within legal protocols [4] [5].

1. The stated layers of verification: target banks, human approvers and legal review

Historically, the IDF used a "target bank" model: compiling researched dossiers on suspected militants and locations, then routing strikes through chains of approval that included intelligence validation and legal review designed to assess distinction and proportionality before an attack [3] [6]. Multiple sources say these files once documented expected civilian presence and collateral risk as part of a deliberate vetting process that informed weapon choice and timing [1] [7]. The IDF maintains that it follows international law and that targeting decisions are reviewed within an institutional framework [5].

2. New digital tools and their role in verification: Lavender, The Gospel, Where’s Daddy?

Reporting and NGO analyses identify four digital tools used in Gaza: an evacuation-monitoring tool based on phone tracking; The Gospel, which generates lists of structures; Lavender, which rates individuals on suspected affiliation; and Where's Daddy?, which provides location-timing indications for striking a person at home. All are intended to feed the targeting process by generating candidates or timing windows for operations [2] [8] [1]. Investigations found that Lavender scores residents using social, telemetry, and visual data to rank the likelihood of membership in armed groups, and that Where's Daddy? purportedly signals when a specific person is at a given location, thereby informing strike timing [8] [2].

3. Scale, automation and truncated human oversight after October 7

Multiple investigative reports, drawing on internal sources, indicate that in the immediate aftermath of October 7 orders broadened who could be targeted, delegated more authority to mid-level officers, and led to greater reliance on automated outputs with reduced time for traditional verification, including the curtailing of "roof knocks" and other warning practices, so that tens of thousands of AI-generated names were in practice treated as vetted targets [3] [1] [7]. Former and current officers quoted by journalists describe a permissive operational posture and a willingness to accept higher error margins under wartime pressure [8] [3].

4. Accuracy, legal risks, and independent critiques of verification protocols

Human Rights Watch and other analysts warn that the tools are fed by mass surveillance and imperfect data (damaged networks, phone-sharing, spoofed signals), producing risky approximations that cannot reliably distinguish combatants from civilians and therefore may violate the principles of distinction and precaution; HRW specifically concluded the tools "may be increasing the risk of civilian harm" [4] [2]. Investigations allege minimal human checks in some cases (for example, a human review limited to confirming a flagged target's gender), raising concerns about over-reliance on algorithmic outputs [9] [1]. Academic and NGO observers say high confidence scores can create dangerous over-trust by commanders [10].

5. Official denials, competing narratives and institutional incentives

The IDF publicly rejects assertions that AI autonomously selects targets and insists that targeting is subject to legal rules and human oversight, framing digital tools as decision support rather than decision-makers [5]. Yet internal reporting and leaks suggest institutional incentives (the desire for speed and scale, and political pressure for results) pushed practices toward automation and toward broadened definitions of "military target," an implicit agenda that critics say erodes protections for civilians [3] [11]. Independent verification is limited by access constraints in Gaza, so many claims about exact protocols and error rates rely on media investigations, NGO fieldwork, and testimony from former or internal sources rather than on open IDF documentation [3] [2].

Conclusion

The IDF's formal protocol remains a layered process of intelligence compilation, human approval, and legal assessment. But reporting from investigative outlets and rights groups documents a significant shift toward digital triage (AI-driven ranking and phone-based location tools) that shortened traditional verification, delegated authority downward, and increased the risk of civilian harm; the IDF disputes claims of autonomous targeting, and independent auditing remains constrained by access limitations [3] [2] [5].

Want to dive deeper?
How does the Lavender AI system determine affiliation scores and what data inputs does it use?
What international law standards govern target verification and how have legal bodies evaluated IDF practices in Gaza?
What independent evidence exists about the accuracy and civilian casualty rates linked to AI-assisted strikes in Gaza?