How did propaganda influence people's opinions or behaviors?
Executive summary
Propaganda alters opinions and behaviors by framing facts, exploiting emotions, and amplifying messages through mass and social media; historical campaigns like the 1920s “Torches of Freedom” and modern AI-driven operations show both long-term cultural shifts and rapid, targeted influence [1] [2]. Scholars and policy analysts warn that AI and digital platforms have expanded propaganda’s scale and subtlety: 68.7% of the world’s population were internet users by April 2025, widening its reach [3].
1. How propaganda changes minds: framing, repetition and emotion
Propaganda works by selecting and repeating frames that turn complex issues into simple, emotionally charged narratives; scholars note propaganda’s deliberate use of misleading or selective information to promote agendas, relying on emotional appeals, repetition and social cues to make ideas feel normative and urgent [4] [5]. Historical research on campaigns such as Edward Bernays’ Torches of Freedom shows how imagery and storytelling redefined social taboos, normalizing women’s smoking by linking it to liberation and producing measurable cultural change over decades [1].
2. Technology amplified: from posters to persona-driven AI
The mechanics are the same, but the tools have changed. A century after wartime poster campaigns, state and nonstate actors now weaponize social media, geofencing and tailored ads; targeted digital influence can deliver political messaging to congregations, demographic niches or even specific churches during moments of worship [6]. Recent reporting documents AI-driven “personas” that can converse and adapt in real time, creating content nearly indistinguishable from genuine interaction and enabling scale and stealth that were previously impossible [2].
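To make the geofencing mechanic concrete, here is a minimal sketch, assuming a simple radius-based fence; the coordinates, radius and function names are invented for illustration and do not reflect any real ad platform’s API. It checks whether a device’s reported location falls inside a targeting radius around a venue, the basic test behind serving a message only to people physically present at a site.

```python
import math

# Toy geofence check: serve a message only when a device's reported
# location falls within a radius of a targeted venue. All names and
# coordinates here are hypothetical.

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_geofence(device_lat, device_lon, venue_lat, venue_lon, radius_m=150):
    """True if the device is inside the venue's targeting radius."""
    return haversine_m(device_lat, device_lon, venue_lat, venue_lon) <= radius_m

# A device reporting coordinates roughly 60 m from the fenced venue matches:
print(in_geofence(40.7480, -73.9855, 40.7484, -73.9860))  # True
```

Combined with ad-auction targeting, a check like this is what lets a campaign reach “specific churches during moments of worship,” as the reporting above describes.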
3. Evidence of behavioral effects: elections, attitudes, and human rights risks
Empirical work shows propaganda can shift attitudes about governance and democracy: research on “political demonstration effects” finds that positive messaging about authoritarian performance can reduce public support for democracy abroad, suggesting regime-focused narratives change system-level views over time [7]. In conflict zones, propaganda frames have been used to justify rights violations and mobilize populations; research flags the role of messaging in shaping fear, bias and collective responses during crises [8].
4. The social-media multiplier: echo chambers and misinformation flows
Social platforms magnify propaganda by creating algorithmic echo chambers where emotionally salient content spreads faster than nuance; academic reviews argue social media has become a platform for fake news and propaganda that can steer specific audiences toward particular ways of thinking, straining democratic deliberation [9]. Analysts of the 2024 U.S. campaigns documented how AI tools and disinformation changed campaign strategy, shifting focus from personalities to manipulating core policy attitudes and ideological fault lines [10].
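A toy simulation can make the amplification loop concrete. The sketch below is a deliberately simplified model, not any platform’s actual algorithm; the post names, salience values and attention shares are invented assumptions. It ranks two posts by accumulated engagement, gives the top slot most of the impressions, and lets emotional salience drive engagement per impression, so the salient post’s reach compounds.

```python
import random

# Toy feedback loop: rank by engagement, give the top slot most views,
# let emotional salience set the per-view engagement rate. All numbers
# are hypothetical.
random.seed(42)

posts = [
    {"id": "nuanced-analysis", "salience": 0.1, "engagement": 1.0},
    {"id": "outrage-clip",     "salience": 0.8, "engagement": 1.0},
]

def show_feed(posts, impressions=1000):
    ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
    shares = [0.7, 0.3]  # attention skews heavily toward the top slot
    for post, share in zip(ranked, shares):
        views = int(impressions * share)
        # Probability of engaging rises with emotional salience.
        post["engagement"] += sum(random.random() < post["salience"] for _ in range(views))

for _ in range(5):
    show_feed(posts)

for p in sorted(posts, key=lambda p: p["engagement"], reverse=True):
    print(f'{p["id"]}: engagement={p["engagement"]:.0f}')
# After a few rounds the high-salience post dominates the feed,
# illustrating the "salient content spreads faster than nuance" dynamic.
```

Once the salient post takes the top slot, it earns more engagement simply by being seen more, which keeps it on top: the echo-chamber effect falls out of the ranking rule rather than any editorial choice.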
5. New tools, new vulnerabilities: generative AI and state-backed campaigns
Policy and scientific studies warn that generative AI increases both the capacity and the plausibility of disinformation: experiments and real-world campaigns show AI can produce persuasive, credible text and synthetic media that erode trust, while open-source investigations document operational deployments in Hong Kong and Taiwan, along with preparations to expand elsewhere [11] [2]. Experts call this a qualitative shift from blunt mass messaging to adaptive, conversational influence designed “not to shock, but to slip silently” into everyday discourse [2].
6. Who is affected—and how long the effects last
Sources note that effects vary by context, message and medium: some campaigns yield immediate poll swings, while others rewire social norms over generations [12]. Cross-cultural differences, media literacy and alternative information channels mediate susceptibility; scholars underline that short-term attitudinal shifts may not equal permanent behavioral change, but sustained, well-targeted campaigns can produce durable shifts in public opinion [12].
7. Policy and civic responses: labeling, resilience and regulation
Authorities and researchers propose layered responses: stricter labeling and metadata rules for AI-generated media, platform transparency requirements, and public education to boost media literacy [3]. Governments and NGOs debate legal and technical remedies; the available sources do not identify a single agreed global solution, reporting instead a mix of voluntary industry measures, national laws and international discussions without consensus [3].
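The sketch below illustrates, purely hypothetically, what a labeling rule could look like in code; the field names and labels are invented and do not follow any specific standard such as C2PA. A platform attaches a disclosure label based on whether media declares a generative model and carries provenance.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical disclosure rule: media declaring a generative model must
# also carry provenance, otherwise it is flagged. Field names are
# invented for illustration.

@dataclass
class MediaMetadata:
    source: str
    generator: Optional[str] = None   # e.g., name of a generative model
    provenance: Optional[str] = None  # e.g., a signed manifest reference

def label_for(meta: MediaMetadata) -> str:
    """Return the disclosure label a platform might attach."""
    if meta.generator and meta.provenance:
        return f"AI-generated ({meta.generator}), provenance attached"
    if meta.generator:
        return "AI-generated, provenance missing: flag for review"
    return "No generator declared: authenticity unverified"

print(label_for(MediaMetadata(source="newsroom-upload")))
print(label_for(MediaMetadata(source="api-upload", generator="image-model-x")))
```

The hard policy questions sit outside the code: who signs the provenance, whether uploaders can strip it, and what platforms must do with unlabeled media, which is precisely where the sources report no consensus.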
8. Takeaways for readers: how to see the levers
Recognize three visible levers when evaluating persuasive messaging: framing (what’s omitted), emotion (fear/aspiration), and distribution (who amplifies it). Historical and contemporary studies—from Bernays’ advertising-era campaigns to state-backed AI influence—demonstrate that propaganda succeeds when it appears ordinary and aligns with existing biases; countermeasures require both platform rules and informed citizens [1] [2] [9].
Limitations: this analysis synthesizes the provided sources; it does not cover every study of propaganda’s effects, and the available sources do not report certain specific causal estimates (for example, the exact vote-share shifts attributable to individual campaigns).