What evidence do independent fact‑checkers cite to show the photo of Renee Good was doctored?

Checked on January 13, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news. Learn more.

Executive summary

Independent fact‑checkers cite three converging lines of evidence that the widely circulated photo claiming to depict Renee Nicole Good was doctored or misattributed: reverse‑image searches tracing the image to other people and older posts, provenance records and user admissions showing the image was AI‑generated, and visual inconsistencies between the viral image and multiple on‑scene videos and authentic university material [1] [2] [3].

1. Reverse‑image searches link the viral picture to other people and older university posts

Fact‑checking organizations ran the viral picture through reverse‑image tools and found that it had been posted in 2020 by Old Dominion University's English department and showed a different woman, Gabriela Szczepankiewicz, in a poetry‑contest announcement, establishing that it was not an original image of Renee Good [1] [4] [3].

2. Some viral images were AI‑generated and trace back to a user who acknowledged that fact

Fact‑checkers traced at least one graphic depiction, an image purporting to show Good's car aimed at an ICE officer, to a post on X by user @ScummyMummy511, who acknowledged using artificial intelligence to create it; Snopes and other outlets cited that admission, along with the image's AI provenance, as proof that the picture was not authentic evidence [2] [5] [6].

3. Visual inconsistencies with video evidence undermine the image’s claim to depict the shooting

Independent reviewers compared the viral images to multiple on‑scene videos and photographs and found that the AI‑generated renderings and misattributed photos did not match the sequence and framing captured in credible video, putting the viral depictions at odds with verified visual records of the incident [2] [3].

4. Multiple reputable fact‑check outlets reached the same conclusions

PolitiFact, Snopes and Reuters (summarized in AP/WRAL reporting) all reported that images circulating online either showed a different woman from the university or were fabricated, and they published the same reverse‑image and provenance findings that debunked the specific photographs and claims [4] [6] [7] [3].

5. Confusion between authentic and manipulated images magnified the misinformation risk

Some legitimate photos of Renee Good, or images associated with her university record, did circulate and were correctly identified by outlets. Fact‑checkers nonetheless warned that authentic university posts and unrelated portraits were being conflated with AI‑generated or misattributed images, a pattern that amplified false narratives about Good online [1] [8] [9].

6. How fact‑checkers corroborated provenance and user admissions

Investigators combined reverse‑image traces to the Old Dominion Facebook post, cross‑checked university statements and prize lists, and recorded public acknowledgments by social‑media users who created AI renditions; that chain of provenance — original university posts showing different people plus admissions of AI generation — formed the central evidentiary basis for the debunking [1] [2] [5].

7. Limits of the reporting and remaining uncertainties

Independent fact‑checkers do not claim access to original camera files or forensic metadata for every viral image, and outlets acknowledge that authentic photos of Good do exist in university material. Accordingly, they are careful to specify which particular viral images are misattributed or AI‑created rather than asserting that every image circulating online is false [8] [2] [1].

8. Why this matters: motive, spread and the role of AI

Fact‑checkers emphasize that the rapid spread of doctored and misattributed images, sometimes created or amplified by users seeking to shape the narrative about the shooting, distorts public understanding. Admitted AI use combined with easy image reuse creates a fast‑moving misinformation risk with real consequences for victims, witnesses and public debate [2] [3].

Want to dive deeper?
What methods do fact‑checkers use to trace AI‑generated images to their creators?
Which verified photos of Renee Nicole Good have been authenticated by news organizations and universities?
How have social platforms responded to admitted AI‑generated posts related to the Minneapolis ICE shooting?