
Header image © Citizen of Europe 2025 — Design by Citizen of Europe
Introduction
The EU’s Digital Services Act promises transparency. The AI Act promises fairness. Yet neither changes the fundamental incentive that still drives the internet: emotional conflict keeps users engaged. Two decades of research and platform disclosures confirm that moral-emotional content often outperforms neutral speech in reach and reactions.
Why It Matters
Europe hopes transparency and audits will disinfect digital power. But algorithms optimise for engagement, not accuracy. Studies show moral-emotional language increases diffusion, especially in political contexts. The deeper logic is emotional economics: attention follows outrage.
The Business of Being Mad
Internal documents reported by The Washington Post showed that, beginning in 2017, Facebook's ranking counted emoji reactions, "angry" among them, as worth five times a "like", before the "angry" weight was later cut (reportedly to zero by late 2020). This created an incentive for anger-eliciting posts to travel further. Peer-reviewed research aligns with that pattern: a 2017 PNAS study (Brady et al.; doi:10.1073/pnas.1618923114) found that each moral-emotional word in a tweet raised its odds of being shared by roughly 20%, and later work associates moralised language with more corrosive reply dynamics on large platforms. Together, they show how emotion becomes a ranking signal.
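To make the mechanism concrete, here is a minimal sketch of engagement-weighted scoring. The five-to-one reaction/like ratio follows the reporting cited above; everything else (field names, the scoring function, treating all reactions uniformly) is an illustrative assumption, not the actual ranking system, which uses thousands of signals.

```python
# Illustrative engagement-weighted scoring. The 5:1 reaction/like ratio
# reflects the reported weighting; all other details are hypothetical.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 5.0,
    "haha": 5.0,
    "angry": 5.0,  # reportedly weighted 5x a "like" from 2017, later cut
}

def engagement_score(counts: dict[str, int]) -> float:
    """Sum each interaction count times its assumed ranking weight."""
    return sum(REACTION_WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

neutral = {"like": 100}                 # broadly liked, low-emotion post
outrage = {"like": 40, "angry": 30}     # fewer interactions, more anger

# The anger-heavy post outscores the neutral one despite fewer total
# interactions, which is the incentive the article describes.
```

Under this toy model, the outrage post scores 190 against the neutral post's 100, showing how a uniform reaction multiplier rewards emotional intensity rather than volume.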
Europe’s Transparency Without Teeth
Under the Digital Services Act (DSA), very large online platforms must publish systemic-risk assessments and describe mitigations. Some public VLOP reports discuss risks linked to polarising or harmful content; public versions do not disclose the ranking weights that govern emotional amplification. The DSA mandates disclosure and audits; it does not require publication of the actual weights. Regulators can review methodologies but still lack forensic access to how engagement signals are weighted internally.
For enforcement context: the European Commission opened formal DSA proceedings against X on 18 December 2023 and issued preliminary breach findings on 12 July 2024. Platforms say they have reduced the influence of provocative signals and expanded user-choice tools (e.g., non-personalised/chronological feeds). The Commission indicates enforcement will deepen in 2026.
The Rise of Emotion AI
A parallel market is expanding: emotion-recognition systems that infer affect from text, audio, or video to optimise advertising and content tests. The EU AI Act prohibits emotion recognition in workplaces and education (with narrow medical and safety exceptions) and restricts certain biometric uses, but it is not a blanket ban across marketing or entertainment. This leaves a significant policy gap outside protected domains. UNESCO warns that emotion AI risks turning inner states into monetisable data and intensifying incentive structures around outrage.
The Evidence
Public documentation already demonstrates the emotional-profit link. Facebook’s reaction-weighting episode showed that anger could be treated as a first-class ranking feature before the multiplier was dialled down. Independent academic work (PNAS, PNAS Nexus) finds that moral-emotional language boosts diffusion and is associated with more toxic reply chains. Different sources, consistent conclusion: emotional intensity is rewarded because it holds attention.
The Transatlantic Disconnect
US policy emphasises market transparency and speech protections; EU law treats amplification as a platform-risk issue. As of this writing, neither framework compels disclosure of the ranking weights that map emotion to reach. EU guidance addresses profiling and researcher access but stops short of mandating publication of those weights.
What Needs to Change
- Publish signal categories: disclose which classes of signals (e.g., moral-emotional cues, rapid-reply bursts, repost chains) are boosted or dampened — without revealing proprietary numbers.
- Independent outcome audits: allow accredited researchers to run pre-registered tests on whether non-personalised feeds reduce emotional skew and harmful spillovers.
- Public valence reports: require quarterly indicators showing the share of impressions driven by high-valence content versus neutral posts in civic/political topics.
- User-controlled friction: offer optional prompts or short delays on high-velocity, high-valence threads; publish anonymised impact data for research.
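The "public valence reports" proposal above could be reduced to a simple indicator. This sketch is purely illustrative: the metric definition, the 0.7 valence threshold, and the data layout are all assumptions, since no regulation currently specifies such a report.

```python
# Hypothetical valence-report indicator: share of impressions in
# civic/political topics driven by high-valence content. Threshold
# and data shape are assumed for illustration only.
def high_valence_share(posts: list[tuple[float, int]], threshold: float = 0.7) -> float:
    """posts: (valence_score, impressions) pairs; returns the share of
    impressions coming from posts at or above the valence threshold."""
    total = sum(impressions for _, impressions in posts)
    high = sum(impressions for valence, impressions in posts if valence >= threshold)
    return high / total if total else 0.0

# Example quarter: two charged posts, one neutral post.
sample = [(0.9, 5000), (0.3, 3000), (0.8, 2000)]
```

For the sample above the indicator is 0.7, i.e. 70% of impressions came from high-valence posts, which is the kind of headline figure a quarterly report could publish without exposing proprietary ranking weights.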
The Final Word
Europe can fine, audit, and demand risk reports. As long as emotional engagement tracks revenue, regulation will stop where the anger starts. The algorithm isn’t broken; it is working as designed.
Sources & Further Reading
- Merrill, J. & Oremus, W. (2021). “Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation.” The Washington Post.
- Brady, W.J., Wills, J.A., Jost, J.T., Tucker, J.A. & Van Bavel, J.J. (2017). “Emotion shapes the diffusion of moralized content in social networks.” PNAS. doi:10.1073/pnas.1618923114
- Solovev, K. et al. (2023). “Moralized language predicts hate speech on social media.” PNAS Nexus. doi:10.1093/pnasnexus/pgac281
- European Commission (2023). “Commission opens formal proceedings against X under the Digital Services Act.”
- European Commission (2024). “Commission addresses additional investigatory measures for X in ongoing proceedings under the DSA.”
- European Commission (2024). “Digital Services Act Enforcement and Transparency Framework.”
- European Parliament & Council (2025). “The EU Artificial Intelligence Act — Official text and summary.”
- UNESCO (2025). “Ethics of Artificial Intelligence — Emotion AI and Human Rights Guidance.”
- European Parliamentary Research Service (2024). “Enforcing the Digital Services Act: State of Play.”
Fact-Checking & Transparency
All statements verified as of October 2025 using publicly available, on-record sources: major-outlet reporting on Facebook reaction weighting (2018–2021); PNAS (Brady et al., 2017, doi:10.1073/pnas.1618923114); PNAS Nexus analyses associating moralised exchanges with more toxic reply dynamics; European Parliament & Commission AI Act explainers (2024–2025); public DSA VLOP risk reports (e.g., Pinterest 2024); and EU enforcement notices regarding X (Dec 2023 / Jul 2024). Platforms approached for comment declined or did not respond by publication. Corrections are logged on our Transparency page.
Ethical Transparency
Produced under NVJ/IFJ ethics with RVJ accountability. Compliant with DSA/GDPR. No AI-generated reporting; automation used solely for layout and image formatting under Citizen of Europe’s AI Content Policy 2025.
Support Our Work
Independent journalism takes time, resources, and courage. If you value sharp, unfiltered analysis, help us stay independent by visiting our dedicated support page.
👉 Go to Support Page
Disclaimer: This article relies solely on publicly available sources and verifiable data as of October 2025. It adheres to NVJ, IFJ, and Raad voor de Journalistiek standards for fairness, accuracy, and accountability. No anonymous or AI-generated reporting was used; automation was limited to formatting under the AI Content Policy 2025. For methodology, updates, and source documentation, read our Fact-Checking & Transparency page.



