What does the verified video evidence about Alex Pretti’s shooting show, and how does it compare to the AI images in circulation?

Checked on January 31, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Verified multi-angle eyewitness videos and media analyses show Alex Pretti falling to the ground beside a federal agent who was holding a handgun at the moment he was shot. Those authentic frames have been repeatedly manipulated with AI to produce misleading, higher‑resolution stills; forensic checks find that the AI versions contain visual errors (for example, a headless agent) and exaggerate contested details such as an apparent weapon in Pretti’s hand [1] [2] [3] [4]. Reporting also documents earlier confrontations captured on video, and officials’ accounts have conflicted with what the civilian footage appears to show, creating a gap that AI‑altered imagery has exploited politically and virally [5] [6] [7].

1. What the verified video evidence actually shows

Bystander footage verified and analyzed by multiple outlets — including frame‑by‑frame review by The New York Times and video verification teams cited by BBC and Reuters — shows Pretti falling to the ground next to an agent who is holding a handgun when the fatal shots occur, and footage from different angles undercuts the federal account in key respects [5] [1] [3]. Investigative reporting and media verification, based on close analysis of the clips circulating from the scene, conclude that the object in Pretti’s right hand in the verified videos is a phone, not a gun [3] [8].

2. Prior incidents and context captured on video

Newly surfaced videos from January 13 depict a tense earlier encounter in which a man identified by family and news outlets as Pretti appears to confront and at times physically engage with federal officers — spitting, shouting and kicking a vehicle — giving prosecutors and commentators material that some have used to argue about Pretti’s state of mind and behavior leading up to the fatal encounter [6] [9] [10]. Reporters caution, however, that these clips alone do not make clear what preceded the fatal encounter on the day of the shooting [6].

3. How AI‑enhanced images were produced and identified

Researchers and fact‑checkers say many viral “high‑resolution” images are synthetic enhancements produced by users applying AI models to a single low‑resolution frame from verified video; reverse image searches and forensic markers — including anatomically impossible artifacts such as a kneeling agent rendered without a head — flagged the manipulations [4] [3] [2]. BBC Verify and AFP trace most of the altered pictures to the same original frame and warn that AI “enhancements” are predictive reconstructions, not new factual evidence [2] [4].
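To illustrate the kind of tracing described above, the sketch below compares a verified video frame with a viral “enhanced” still using perceptual hashing, one common way to support reverse‑image‑search style checks that two images derive from the same source frame. This is a minimal, illustrative example under stated assumptions, not the workflow BBC Verify or AFP actually used; the file names and distance threshold are hypothetical, and it assumes Python with the Pillow and ImageHash libraries installed.

    # Minimal illustrative sketch (not the fact-checkers' actual tooling):
    # perceptual hashing to test whether a viral "enhanced" still plausibly
    # derives from the same source frame as verified video. File names and
    # the threshold below are hypothetical placeholders.
    from PIL import Image   # pip install Pillow
    import imagehash        # pip install ImageHash

    def likely_same_source(frame_path: str, still_path: str, max_distance: int = 10) -> bool:
        """Heuristic check: a small Hamming distance between perceptual hashes
        suggests the two images share low-frequency structure, which tends to
        survive re-compression and AI upscaling. It is evidence, not proof."""
        frame = Image.open(frame_path)
        still = Image.open(still_path)
        distance = imagehash.phash(frame) - imagehash.phash(still)
        return distance <= max_distance

    # Hypothetical usage with placeholder file names.
    print(likely_same_source("verified_video_frame.png", "viral_ai_still.jpg"))

A small hash distance only indicates a likely shared origin; it cannot show which image is authentic and which is manipulated, which is why fact‑checkers pair such checks with forensic review of anatomical and lighting inconsistencies like the headless agent noted above.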

4. Key differences between the verified footage and AI images

The verified footage consistently shows Pretti holding what analysts identify as a phone, whereas several AI‑enhanced stills and social posts portrayed an apparent weapon in his hand — a transformation not supported by the original clips [3] [8]. Fact‑checkers also note visual inconsistencies in the doctored images (such as a missing agent head) that betray synthetic generation and undermine the claim that the AI frames add factual clarity [3] [4].

5. Political and misinformation dynamics around the images

Altered images and selective video clips have been weaponized across social and political channels: they have been shared both to criticize federal agents’ use of force and to bolster opposing official narratives. Commentators warn that high‑profile political actors have amplified the AI imagery, intensifying public confusion and eroding trust in evidence amid competing official statements [11] [1]. News outlets report that the federal government and local authorities have given conflicting accounts, and independent footage has been central to public disputes over what happened [7] [5].

6. What remains unresolved and where to look next

Authorities have said body‑worn camera footage exists and will be reviewed, and that material — if released or independently verified — could narrow remaining factual gaps, but current reporting emphasizes that public understanding relies on authenticated bystander video and cautious interpretation given AI‑generated distortions [7] [1]. Reporting limitations: the sources here do not provide a complete forensic chain of custody for all footage, nor do they offer final investigative conclusions, so questions about intent, exact sequence and official accountability remain subject to formal review and legal processes [5] [7].

Want to dive deeper?
What do the DHS body‑worn camera policies say about release of footage in federal agent shootings?
Which forensic techniques reliably distinguish AI‑generated enhancements from original video frames?
How have AI‑altered images influenced public opinion and legal proceedings in past high‑profile police shooting cases?