How has the 2023–2025 Israel-Hamas war affected paid social media promotion and misinformation?

Checked on November 28, 2025
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Paid promotion and platform tactics have been a major front in the information war around the Israel–Hamas fighting: platforms and researchers say paid and coordinated campaigns, influencer trips and state-linked messaging amplified competing narratives, while moderation pullbacks made both paid and organic misinformation harder to track [1] [2]. Independent trackers and news outlets documented waves of viral falsehoods (millions of views on early false claims) and rapid removals or labeling by platforms, even as watchdogs warn that paid amplification and platform policy changes sustained the “fog of war” online [3] [4] [2].

1. Paid persuasion became explicit — governments and diaspora fund campaigns

Reporting shows that state-linked actors and diaspora offices paid influencers and organized trips to shape impressions of conditions on the ground; for example, reporting linked paid influencer trips to Gaza, organized by Israeli diaspora bodies, to efforts “to reveal the truth,” illustrating how sponsored on-the-ground tours entered the information mix [5] [1]. The New Arab and other outlets described coordinated outreach by Israeli officials to pro-Israel creators and paid efforts to polish public perception amid international criticism [1].

2. Platforms used paid tools even as moderation shifted — complicating traceability

Researchers say that changes in moderation policy and tools made it harder to track both organic misinformation and paid amplification; a university trust-in-AI center warned that misinformation about the conflict became harder to verify and trace as platforms pared back moderation and researcher access to data [2]. News organizations and EU regulators pressured platforms like X and Meta to remove or label content, and platforms reported removing large volumes of posts — a sign that paid promotion operated alongside efforts to police content [4] [6].

3. Misinformation reached huge audiences quickly — paid and viral narratives intertwined

Early studies and trackers documented rapid spread: NewsGuard and Time reported that at least 14 false claims drew some 22 million views across X, TikTok and Instagram within days of the October 2023 attacks, showing how quickly false narratives—sometimes boosted by paid or verified accounts—can achieve massive reach [3]. Reuters and The Guardian catalogued doctored memos and staged-appearance videos that went viral, demonstrating how paid ads, algorithmic boosts and influencer reposts can all magnify misinformation [7] [8].

4. Two-sided information strategies: both state and non-state actors weaponized social media

Analysts and reporting emphasize that the information environment was not one-sided: Hamas’s media wing and pro-Palestinian diasporas were active online, while pro-Israel diaspora mobilization and alleged state-linked campaigns sought to counter narratives and persuade foreign audiences and lawmakers [9] [1]. RAND and the Wilson Center warned that both sides use imagery and messaging tactically to sway international opinion, turning paid promotion and influencer outreach into tools of strategic communication as much as advertising [10] [11].

5. Platforms responded with removals, labels and claims of mass takedowns — but critics say transparency is lacking

Platforms announced large takedowns — for instance, TikTok reported removing hundreds of thousands of videos related to the conflict — yet reporting and leaked documents led to accusations that moderation sometimes disproportionately affected critical Palestinian content and that platforms lacked transparency about paid content and enforcement [12] [5]. Regulators in the EU pressed platforms to comply with law and remove illegal content, underlining the political pressure around moderation decisions [6].

6. Research limits and contested claims — what sources do and do not say

Academics and policy centers documented the scale of misinformation and linked heavy social-media exposure to psychological effects, but available reporting does not provide a comprehensive, auditable accounting of how much paid advertising versus organic sharing drove specific viral falsehoods; researchers note that data access restrictions and platform policy changes made precise attribution difficult [2] [13]. Some outlets and institutes published debunks and ran tracking centers cataloguing false claims, but none of the provided sources claim complete measurement of paid-promotion budgets or of all sponsored posts [14] [3].

7. Why this matters: public opinion, policy and the “digital fog of war”

The interplay of paid promotion, influencer trips and platform moderation shaped public opinion shifts and policy debates: polling and long-form reporting show U.S. and global attitudes evolving over time as social media imagery became central to the story, reinforcing that who can pay to amplify a message matters to democratic debate in wartime [15] [16]. Observers warn the combination of paid messaging and viral misinformation risks inflaming conflict, obscuring accountability and hardening international divisions [7] [11].

Limitations: reporting and academic work cited here document trends and episodes but do not supply a single, comprehensive dataset that quantifies total paid spend, nor do they resolve contested claims about the proportional effect of paid promotion versus organic virality; available sources do not mention a definitive, global tally of paid adverts tied to state actors [2] [3].

Want to dive deeper?
How have ad spend and CPMs on Facebook, X, TikTok, and YouTube changed since Oct 2023 for MENA-focused campaigns?
What policies have major platforms implemented since 2023 to curb war-related misinformation and how effective are their enforcement metrics?
How have political advertisers and foreign actors exploited paid amplification during the Israel-Hamas war from 2023–2025?
What role did algorithmic content recommendation changes play in amplifying or limiting conflict-related disinformation?
How have civil society and fact-checking groups adapted paid promotion strategies to counter war misinformation since 2023?