
Fact check: Percentage of social media that is fake

Checked on August 21, 2025

1. Summary of the results

The analyses reveal that no definitive percentage exists for how much of social media content is fake. However, several specific data points emerge from the research:

Fake Accounts on Major Platforms:

  • Facebook reported that 13-14% of its accounts were duplicate or false by 2017, up from an estimated 83 million fake accounts in 2012 [1]
  • Twitter estimated that less than 5% of their users were false or spam accounts [1]
  • Research indicates 80% of surveyed people have encountered suspicious or fake accounts, with 77% receiving connection requests from strangers [2]
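To illustrate the scale behind the percentage figures above, the reported 13-14% range can be converted into absolute account counts. This is a rough sketch only: the ~2.1 billion monthly-active-user figure for Facebook in late 2017 is an assumption used for illustration, not a number from the analyses cited here.

```python
# Rough scale illustration: converting a reported fake-account percentage
# range into absolute counts.
# ASSUMPTION: ~2.1 billion monthly active users (late-2017 ballpark),
# not a figure from the fact check itself.
monthly_active_users = 2_100_000_000

low, high = 0.13, 0.14  # Facebook's reported 13-14% duplicate/false range

low_count = round(monthly_active_users * low)
high_count = round(monthly_active_users * high)

print(f"Low estimate:  {low_count:,} accounts")
print(f"High estimate: {high_count:,} accounts")
```

Even under these rough assumptions, the range spans tens of millions of accounts, which is why small differences in platform methodology translate into very different headline numbers.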

Misinformation Content:

  • 66% of bots discussing COVID-19 were spreading misinformation [3]
  • An estimated 500,000 deepfakes were shared on social media in 2023 [3]

2. Missing context/alternative viewpoints

The original question assumes a measurable percentage of "fake" social media exists, but the analyses reveal significant epistemic uncertainty in measuring this phenomenon [1]. Several critical contexts are missing:

Definition Ambiguity:

  • The term "fake" encompasses multiple categories: fake accounts, misinformation content, deepfakes, spam, and duplicate profiles
  • Different platforms use varying methodologies to identify and count fake content

Platform Incentives:

  • Social media companies benefit financially from higher user counts and engagement metrics, potentially creating incentives to underreport fake account percentages
  • Advertisers and investors rely on user statistics for decision-making, creating pressure for platforms to present favorable numbers

Detection Limitations:

  • The analyses suggest that actual numbers of fake accounts could be higher than reported estimates [1]
  • Sophisticated fake accounts may evade detection systems entirely

3. Potential misinformation/bias in the original statement

The original question contains an implicit assumption that a definitive percentage exists when research shows this is fundamentally unmeasurable with current methods. This framing could be misleading because:

Oversimplification:

  • The question treats "fake social media" as a monolithic, quantifiable entity when it encompasses diverse phenomena with different measurement challenges
  • Platform-specific variations make any universal percentage meaningless, as Twitter's estimated <5% fake accounts differs dramatically from Facebook's 13-14% [1]

Lack of Temporal Context:

  • The question ignores that fake content percentages fluctuate constantly as platforms implement countermeasures and bad actors adapt their tactics
  • Historical data shows increasing trends in some categories, such as Facebook's fake accounts rising from 2012 to 2017 [1]

Missing Nuance:

  • The binary "fake vs. real" framework fails to account for gray areas like parody accounts, automated but legitimate bots, or accounts with mixed authentic and inauthentic behavior