What is the HalfPastHuman project and what predictions has it made since 2010?

Checked on January 23, 2026

Executive summary

HalfPastHuman is the public face of Clif High's predictive-linguistics effort, an outgrowth of the Web Bot project, which harvests Internet text for linguistic "trends" that the operators interpret as forecasts and package into paid ALTA reports and interviews [1] [2] [3]. Since 2010 the project has continued to publish analyses and to make episodic, often vague predictions (earthquakes, political events, "time flutters," cultural phenomena), while critics call the method secretive, pseudoscientific, and retrospectively overfitted [4] [2] [5].

1. What the project is and how it says it works

HalfPastHuman is the commercial and communicative outlet for a methodology Clif High calls predictive linguistics, derived from the Web Bot algorithm. The system reportedly "crawls" internet chatter, assigns dynamic lexicon values tied to emotional metrics, and produces interpreted reports (ALTA reports) for subscribers rather than peer-reviewed science; much of the algorithm is kept proprietary [2] [1] [5]. The creators frame the approach as mining a collective unconscious encoded in online language. In interviews, High describes a mix of automated scraping and human interpretation, and the operation markets itself through forums, interviews, and a repository of "timetalks" and archives [6] [3] [7].
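Because the actual algorithm is proprietary and undisclosed, no implementation details can be stated with authority. Purely as illustration, the kind of pipeline the public description implies (scrape text, weight words with an "emotional" lexicon, compare aggregate scores across time windows) could be sketched as follows; every lexicon value and weight here is invented for demonstration and has no connection to the real system:

```python
from collections import Counter

# Toy sketch only: the real Web Bot / ALTA algorithm is secret. This merely
# illustrates what "assigning dynamic lexicon values tied to emotional
# metrics" could mean in the simplest possible form. All weights are made up.
EMOTIONAL_LEXICON = {
    "quake": 0.9, "collapse": 0.8, "fear": 0.7,
    "calm": -0.3, "stable": -0.4,
}

def emotional_score(text: str) -> float:
    """Sum invented lexicon weights for each matching word in the text."""
    words = Counter(text.lower().split())
    return sum(EMOTIONAL_LEXICON.get(w, 0.0) * n for w, n in words.items())

def trend(snapshots: list[str]) -> list[float]:
    """Score a sequence of scraped-text snapshots taken over time."""
    return [emotional_score(s) for s in snapshots]

if __name__ == "__main__":
    windows = [
        "markets stable calm outlook",
        "fear of collapse grows",
        "quake fear collapse rumors spread",
    ]
    # A rising score sequence is the sort of signal an operator might
    # narratively interpret as a building "trend".
    print(trend(windows))
```

Even this trivial version shows why critics focus on the human-interpretation step: the numbers themselves say nothing about real events, and the choice of lexicon and weights fully determines what "trends" appear.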

2. The kinds of predictions publicly associated with HalfPastHuman

Public-facing outputs range from specific-sounding event warnings (natural disasters, infrastructure failures) to broad social and symbolic forecasts (media narratives, cultural trends, and the so-called "Mandela Effect"). The project also has a history of retroactively linking earlier high-profile claims, such as 2012 cataclysm warnings, to its method [2] [4] [7]. Its materials and external writeups reference predictions of earthquakes and other disruptive events and discuss phenomena labeled "time flutters" beginning around 2011, which High says showed up in his data [4] [7].

3. What it has claimed since 2010 — and the evidentiary limits

Since 2010 HalfPastHuman has continued issuing ALTA-style analyses and public commentary, some suggesting increased likelihoods of earthquakes and social disruptions, others exploring collective-memory oddities like the Mandela Effect. The publicly available source material in this set does not provide a comprehensive, dated list of individual successful or failed predictions after 2010; it offers only intermittent reports, interviews, and archived webpages that discuss such themes [3] [1] [4]. Independent verification of specific post-2010 predictions and their outcomes is limited in these sources; critics and encyclopedic entries note that many claimed hits are post hoc and that the operators interpret noisy outputs with a great deal of human framing [2] [8].

4. Track record, reception and controversies

Followers and some promotional pages credit the project with a noteworthy hit rate and list high-profile retrospective claims (e.g., 9/11, anthrax, political upsets). Journalists and skeptics, by contrast, criticize the project's secrecy, vagueness, and susceptibility to retrospective fitting: Wikipedia's Web Bot entry and other reporting describe the algorithm as secret, the lexicon as dynamic, and many predictions as too ambiguous to be meaningful [9] [2]. Documentary and fringe outlets have amplified the project's claims [6] [4], while other observers emphasize methodological problems, such as the circularity that arises when internet discussion of the project itself feeds back into the trends it measures, and warn readers about commercialized, non-peer-reviewed forecasting [4] [5].

5. How to treat its post‑2010 claims as a reader or researcher

HalfPastHuman's post-2010 outputs are useful as a cultural artifact of internet-era forecasting, showing how automated scraping plus narrative interpretation can produce compelling stories. The sources at hand make clear, however, that the operation is proprietary, marketed, and often framed narratively rather than tested against transparent forecasting standards, so its claims should be checked against contemporaneous records and skeptical analysis rather than taken at face value [1] [2] [3]. Where the project points to specific events (earthquakes, "time flutters," media narratives), available reporting documents the claims being made but does not provide independent, systematic validation within these sources [4] [7].

Want to dive deeper?
What is the ALTA report methodology trademarked by Web Bot and how has it changed since 1997?
Which specific HalfPastHuman predictions from 2010–2025 can be independently verified against contemporaneous news records?
How do predictive‑linguistics methods like Web Bot compare to academic forecasting techniques in accuracy and epistemic transparency?