
Are there peer-reviewed studies evaluating Edgar Cayce's predictions versus chance?

Checked on November 17, 2025

Executive summary

There is substantial popular and archival reporting on Edgar Cayce's predictions: many books, websites, and commentaries catalog his readings and note hits and misses (e.g., predictions about Bimini, stock-market warnings, world events) [1][2][3]. However, none of the sources reviewed here reports a peer-reviewed statistical study that systematically evaluates Cayce's hit rate against chance expectation (not found in current reporting).

1. What the literature catalogues: prolific readings and selective examples

Edgar Cayce produced thousands of “readings” on health, history and future events; popular books and repositories list many specific predictions—Atlantis/Bimini, economic warnings and “earth changes”—and these sources often point to apparent successes such as the 1968 Bimini find or alleged early warnings about 1929 [1][2][3]. These compilations serve as the raw material for any empirical test because they assemble candidate predictions and dates that could be scored for accuracy [1][3].

2. What popular commentators and organizations claim

Advocates and Cayce institutions highlight successful or interpretable hits and argue that some of Cayce’s geopolitical and geophysical forecasts were prescient (e.g., China’s rise, rediscovery myths) [4][3]. Skeptical or critical commentators emphasize failed literal predictions—such as a 1998 Second Coming or large cataclysms by 1998—and note reinterpretation or flexible dating as a common post‑hoc rescue strategy [5][6][7]. Both sides use the same readings but differ sharply in how strictly they score “success.”

3. Is there peer‑reviewed statistical work? Short answer: not in these sources

None of the provided items (books, museum articles, fan and criticism websites, and encyclopedia entries) presents a peer-reviewed, published statistical analysis testing whether Cayce's prediction accuracy exceeds chance levels (not found in current reporting). The Wikipedia entry, popular articles, and religious critique pieces summarize claims and specific examples but do not substitute for a controlled, peer-reviewed evaluation [8][2][7].

4. Why a rigorous test is hard: ambiguous predictions and hindsight bias

The materials show that many readings are vague, metaphorical, or open to reinterpretation (for example, spiritual “battles” or symbolic dates), which makes objective scoring difficult; critics point out that failed literal predictions (e.g., 1998 events) are often reframed by followers as nonliteral or conditional [5][6]. This ambiguity plus selective reporting (cataloging successes while minimizing misses) creates a strong risk of hindsight bias and makes a straightforward statistical null hypothesis test challenging [5][6].

5. What a credible peer‑reviewed study would require

A credible test would (a) predefine which specific statements count as predictions, (b) assign objective outcome criteria and time windows, (c) compare observed success rates to well‑specified chance models, and (d) be published in a peer‑reviewed forum so methods and coding could be evaluated. The current sources document the raw readings and debates about interpretation but do not report such a study or methodology [1][8].
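
To make step (c) concrete, here is a minimal sketch of the comparison such a study would run: a one-sided binomial test of the coded hit count against an assumed per-prediction chance rate. The hit/miss codes and the chance rate below are hypothetical placeholders for illustration only; they are not derived from Cayce's readings or from any source cited here.

```python
# Minimal sketch with hypothetical data: score each pre-registered prediction
# as a hit (1) or miss (0) under fixed criteria, then ask how likely the
# observed hit count would be under a binomial chance model.
from math import comb

def binomial_upper_tail(hits: int, n: int, p_chance: float) -> float:
    """One-sided p-value: probability of observing at least `hits` successes
    in `n` trials if each prediction had only a `p_chance` chance of success."""
    return sum(comb(n, k) * p_chance**k * (1 - p_chance)**(n - k)
               for k in range(hits, n + 1))

# Illustrative numbers only -- not an actual scoring of the readings.
scored = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # coded hit/miss outcomes
p_chance = 0.2                           # assumed chance of a "hit" per prediction

p_value = binomial_upper_tail(sum(scored), len(scored), p_chance)
print(f"hits={sum(scored)}/{len(scored)}, one-sided p = {p_value:.3f}")
```

The hard part, as the sources above make clear, is not the arithmetic but steps (a) and (b): defining what counts as a prediction and as a hit before looking at outcomes, and justifying the assumed chance rate for each prediction.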

6. Existing reporting highlights both hits and misses — showing contested evidence

Popular reporters and commentators record both putative hits (e.g., the Bimini formation, alleged economic warnings) and clear misses (e.g., a predicted Second Coming or catastrophic Earth changes by specified dates), which demonstrates why different analysts reach opposite conclusions depending on their scoring rules [2][3][5]. The Christian Research Institute piece explicitly lists both claimed successes and known failures, underscoring the contested nature of Cayce's record [7].

7. Bottom line for your question — what we can and cannot conclude from available sources

Available sources document many predictions and vigorous debate about their interpretation, but they do not contain peer‑reviewed statistical analyses comparing Cayce’s predictions to chance; therefore you cannot point to a published, peer‑reviewed study in these materials that answers your original question (not found in current reporting) [1][8][7]. If you want a definitive answer, the next step would be to assemble a reproducible dataset from the readings, set objective scoring rules, and submit the analysis to peer review.

If you’d like, I can outline a protocol you could use to test Cayce’s predictions against chance (selection rules, scoring rubric, and statistical tests) drawing only from the readings and timelines cited in these sources.

Want to dive deeper?
Have any peer-reviewed statistical analyses tested the accuracy of Edgar Cayce's predictions against chance expectations?
Which journals or researchers have published studies on the predictive validity of Edgar Cayce's readings?
What methodologies do studies use to compare prophetic claims like Cayce's to random chance or null models?
How do critiques and defenses of Cayce's predictions address issues like hindsight bias, vague wording, and date ranges?
Are there comparable peer-reviewed evaluations of other 20th-century psychics that shed light on Cayce's predictive record?