Free Hate at Scale: AI Extremism and the Attention Crisis

In May 2024, something unprecedented appeared on screens across Central Asia. A 52-second video in Pashto featured a news anchor calmly claiming responsibility for a terrorist attack in Bamiyan, Afghanistan. The anchor looked local, spoke fluently, and delivered the message with professional composure. There was just one problem: the anchor did not exist. The Islamic State Khorasan Province (ISKP) had produced its first AI-generated propaganda bulletin, and the implications for global security, content moderation, and the very architecture of our information ecosystem would prove profound.
This was not an isolated experiment. Days later, ISKP released another AI-driven segment, this time featuring a synthetic anchor dressed in Western attire to claim responsibility for a bombing in Kandahar. The terrorist organisation had discovered what Silicon Valley already knew: generative AI collapses the marginal cost of content production to nearly zero whilst expanding the potential for audience capture beyond anything previously imaginable.
The question now facing researchers, policymakers, and platform architects is not merely whether AI-generated extremist content poses a threat. That much is evident. The deeper concern is structural: what happens when the economics of inflammatory content production fundamentally shift in favour of those willing to exploit human psychological vulnerabilities at industrial scale? And what forms of intervention, if any, can address vulnerabilities that are built into the very architecture of our information systems?
The Economics of Information Pollution
To understand the stakes, one must first grasp the peculiar economics of the attention economy. Unlike traditional markets where production costs create natural barriers to entry, digital content operates under what economists call near-zero marginal cost conditions. Once the infrastructure exists, producing one additional piece of content costs essentially nothing. A research paper published on arXiv in 2025 frames the central challenge succinctly: “When the marginal cost of producing convincing but unverified content approaches zero, how can truth compete with noise?”
The arrival of large language models like GPT-4 and Claude represents what researchers describe as “a structural shift in the information production function.” This shift carries profound implications for the competitive dynamics between different types of content. Prior to generative AI, producing high-quality extremist propaganda required genuine human effort: scriptwriters, video editors, voice actors, translators. Each element imposed costs that naturally limited production volume. A terrorist organisation might release a dozen slickly produced videos annually. Now, the same organisation can generate thousands of variations in multiple languages, tailored to specific demographics, at effectively zero marginal cost.
The economic literature on this phenomenon identifies what researchers term a “production externality” in information markets. Producers of low-quality or harmful content do not internalise the negative social effects of their output. The social marginal cost vastly exceeds the private marginal cost, creating systematic incentives for information pollution. When generative AI capabilities (what some researchers term “offence”) dramatically outstrip detection technologies (“defence”), the marginal cost of producing harmful content falls precipitously, “systemically exacerbating harm.”
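The argument can be stated compactly. The formalisation below is illustrative only, using my own notation rather than the cited paper's, with q denoting the quantity of content produced:
```latex
% Illustrative formalisation (notation mine, not the cited paper's):
% q   = quantity of content produced,
% PMC = private marginal cost borne by the producer,
% MEC = marginal external cost imposed on the information environment.
\[
  SMC(q) \;=\; PMC(q) + MEC(q), \qquad MEC(q) > 0 .
\]
% Producers expand output until private marginal benefit equals PMC(q).
% Generative AI pushes PMC(q) towards zero, so privately optimal output
% grows while MEC(q) is never priced in: the wedge between private and
% social cost widens rather than closes.
```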
This creates what might be called a market bifurcation effect. Research suggests a “barbell” structure will emerge in content markets: low-end demand captured by AI at marginal cost, whilst human creators are forced into high-premium, high-complexity niches. The middle tier of content production essentially evaporates. For mainstream media and entertainment, this means competing against an infinite supply of machine-generated alternatives. For extremist content, it means the historical production barriers that limited proliferation have effectively disappeared.
The U.S. AI-powered content creation market alone was estimated at $198.4 million in 2024 and is projected to reach $741.1 million by 2033, growing at a compound annual growth rate of 15.8%. This explosive growth reflects businesses adopting AI tools to reduce time and costs associated with manual content creation. The same economics that drive legitimate business adoption, however, equally benefit those with malicious intent.
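Those two figures are internally consistent with the quoted growth rate; a quick back-of-the-envelope check (figures as reported, the arithmetic is mine):
```python
# Check that $198.4M in 2024, compounding at 15.8% a year, roughly
# reproduces the projected 2033 figure.
base_2024 = 198.4       # USD millions, as reported
cagr = 0.158            # compound annual growth rate, as reported
years = 2033 - 2024     # nine compounding periods

projection_2033 = base_2024 * (1 + cagr) ** years
print(f"Implied 2033 market size: ${projection_2033:.1f}M")
# ~ $742.9M, within rounding of the reported $741.1M
```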
Algorithmic Amplification and the Vulnerability of Engagement Optimisation
The economics of production tell only half the story. The other half concerns distribution, and here the structural vulnerabilities of attention economies become starkly apparent.
Modern social media platforms operate on a simple principle: content that generates engagement receives algorithmic promotion. This engagement-optimisation model has proved extraordinarily effective at capturing human attention. It has also proved extraordinarily effective at amplifying inflammatory, sensational, and divisive material. As Tim Wu, the legal scholar who coined the term “net neutrality,” observed, algorithms “are optimised not for truth or well-being, but for engagement, frequently achieved through outrage, anxiety, or sensationalism.”
The empirical evidence for this amplification effect is substantial. Research demonstrates that false news spreads six times faster than truthful news on Twitter (now X), driven largely by the emotional content that algorithms prioritise. A landmark study published in Science in 2025 provided causal evidence for this dynamic. Researchers developed a platform-independent method to rerank participants' feeds in real time and conducted a preregistered 10-day field experiment with 1,256 participants on X during the 2024 US presidential campaign. The results were striking: decreasing or increasing exposure to antidemocratic attitudes and partisan animosity shifted participants' feelings about opposing political parties by more than 2 points on a 100-point scale. This effect was comparable to several years' worth of polarisation change measured in long-term surveys.
Research by scholars at MIT and elsewhere has shown that Twitter's algorithm amplifies divisive content far more than users' stated preferences would suggest. A systematic review synthesising a decade of peer-reviewed research (2015-2025) on algorithmic effects identified three consistent patterns: algorithmic systems structurally amplify ideological homogeneity; youth demonstrate partial awareness of algorithmic manipulation but face constraints from opaque recommender systems; and echo chambers foster both ideological polarisation and identity reinforcement.
The review also found significant platform-specific effects. Facebook is primarily linked to polarisation, YouTube is associated with radicalisation with particularly strong youth relevance, and Twitter/X emphasises echo chambers with moderate youth impact. Instagram and TikTok remain under-researched despite their enormous user bases, a concerning gap given TikTok's particularly opaque recommendation system.
The implications for AI-generated content are profound. If algorithms already preferentially amplify emotionally charged, divisive material created by humans, what happens when such material can be produced at unlimited scale with sophisticated personalisation? The answer, according to researchers at George Washington University's Program on Extremism, is that extremist groups can now “systematically exploit AI-driven recommendation algorithms, behavioural profiling mechanisms, and generative content systems to identify and target psychologically vulnerable populations, thereby circumventing traditional counterterrorism methodologies.”
The Weaponisation of Psychological Vulnerability
Perhaps the most concerning aspect of AI-enabled extremism is its capacity for psychological targeting at scale. Traditional propaganda operated as a broadcast medium: create a message, distribute it widely, hope it resonates with some fraction of the audience. AI-enabled propaganda operates as a precision instrument: identify psychological vulnerabilities, craft personalised messages, deliver them through algorithmically optimised channels.
Research published in Frontiers in Political Science in 2025 documented how “through analysing huge amounts of personal data, AI algorithms can tailor messages and content which appeal to a particular person's emotions, beliefs and grievances.” This capability transforms radicalisation from a relatively inefficient process into something approaching industrial production.
The numbers are sobering. A recent experiment estimated that AI-generated propaganda can persuade anywhere between 2,500 and 11,000 individuals per 100,000 targeted. Research participants who read propaganda generated by GPT-3 were nearly as persuaded as those who read real propaganda from state actors in Iran or Russia. Given that elections and social movements often turn on margins smaller than this, the potential for AI-generated influence operations to shift outcomes is substantial.
The real-world evidence is already emerging. In July 2024, Austrian authorities arrested several teenagers who were planning a terrorist attack at a Taylor Swift concert in Vienna. The investigation revealed that some suspects had been radicalised online, with TikTok serving as one of the platforms used to disseminate extremist content that influenced their beliefs and actions. The algorithm, optimised for engagement, had efficiently delivered radicalising material to psychologically vulnerable young people.
This is not a failure of content moderation in the traditional sense. It is a structural feature of engagement-optimised systems encountering content designed to exploit that optimisation. Research published in Frontiers in Social Psychology in 2025 found that TikTok's algorithms “privilege more extreme material, and through increased usage, users are gradually exposed to more and more misogynistic ideologies.” The algorithms actively amplify and direct harmful content, not as a bug, but as a consequence of their fundamental design logic.
The combination of psychological profiling and generative AI creates what researchers describe as an unprecedented threat vector. Leaders of extremist organisations are no longer constrained by language barriers, as AI translation capabilities expand their reach across linguistic boundaries. Propaganda materials can now be produced rapidly using just a few keywords. The introduction of deepfakes adds another dimension, enabling the misrepresentation of words or actions by public figures. As AI systems become more publicly available and open-source, the barriers to entry for their use continue to lower, making it easier for malicious actors to adopt AI technologies at scale.
The Collapse of Traditional Content Moderation
Faced with these challenges, platforms have relied on a suite of content moderation tools developed primarily for human-generated content. The most sophisticated of these is “fingerprinting” or hashing, which creates unique digital signatures for known harmful content and automatically removes matches across the platform. This approach has proved reasonably effective against the redistribution of existing terrorist videos and child sexual abuse material.
Generative AI renders this approach largely obsolete. According to research from the Combating Terrorism Center at West Point, “by manipulating their propaganda with generative AI, extremists can change a piece of content's digital fingerprint, rendering fingerprinting moot as a moderation tool.” A terrorist can now take existing propaganda, run it through an AI system that makes superficially minor modifications, and produce content that evades all hash-based detection whilst preserving the harmful message.
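The mechanism is easy to illustrate. The toy below uses an exact cryptographic hash as a stand-in for the perceptual hashes real pipelines rely on, but the failure mode is the same in kind: once a generative model re-renders the message, nothing about the original bytes needs to survive, so nothing remains for the database to match. This is a minimal sketch, not any platform's actual system:
```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint (toy stand-in for a hash-matching pipeline)."""
    return hashlib.sha256(content).hexdigest()

# A known piece of propaganda, already in the platform's hash database.
known = b"<original propaganda video bytes>"
blocklist = {fingerprint(known)}

# The same message after a generative model re-renders it: different
# synthetic anchor, reworded script, new audio - every byte changes.
regenerated = b"<AI-regenerated variant of the same message>"

print(fingerprint(known) in blocklist)        # True  - the original is caught
print(fingerprint(regenerated) in blocklist)  # False - the variant sails through
```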
The scale challenge compounds this technical limitation. A 2024 article in Philosophy & Technology noted that “humans alone can't keep pace with the enormous volume of content that AI creates.” Most content moderation decisions are now made by machines rather than human beings, and the trend is set to accelerate. Automation also amplifies human error: biases embedded in training data and system design propagate at machine speed, and enforcement decisions happen too quickly for meaningful human oversight.
Traditional keyword and regex-based filters fare even worse. Research from the University of Chicago's Data Science Institute documented how “GenAI changes content moderation from a post-publication task to a real-time, model-layer challenge. Traditional filters, based on keywords or regex, fail to catch multilingual, evasive, or prompt-driven attacks.”
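A toy example makes the point about static filters; the blocked term and the evasions below are placeholders, but the failure mode is the one described above:
```python
import re

# A naive keyword filter of the kind many legacy pipelines still run.
# "forbiddenword" stands in for a real blocked term.
BLOCK_PATTERN = re.compile(r"\bforbiddenword\b", re.IGNORECASE)

posts = [
    "join forbiddenword today",       # caught: exact keyword match
    "join f0rb1ddenw0rd today",       # missed: trivial character substitution
    "join forbi dden word today",     # missed: inserted whitespace
    "rejoignez le groupe interdit",   # missed: same appeal in another language
]

for post in posts:
    flagged = bool(BLOCK_PATTERN.search(post))
    print(f"{flagged!s:<5} {post}")
```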
The detection arms race shows signs of favouring offence over defence. Research from Drexel University identified methods to detect AI-generated video through “fingerprints” unique to different generative models. However, as a Reuters Institute analysis noted, “deepfake creators are finding sophisticated ways to evade detection, so combating them remains a challenge.” Studies have demonstrated poorer performance of detection tools on certain types of content, and researchers warn of “a potential 'arms race' in technological detection, where increasingly sophisticated deepfakes may outpace detection methods.”
The gender dimension of this challenge deserves particular attention. Image-based sexual abuse is not new, but the explosion of generative AI tools that enable it marks a new era for gender-based harassment. For little or no cost, anyone with an internet connection and a photograph of a person can produce sexualised imagery of that person. The overwhelming majority of this content targets women and girls, ranging from teenagers to politicians and other public figures. This represents a form of AI-generated extremism that operates at the intersection of technology, misogyny, and the commodification of attention.
Platform Architecture and the Limits of Reform
If traditional content moderation cannot address the AI-generated extremism challenge, what about reforming platform architecture itself? Here the picture grows more complex, touching on fundamental questions about the design logic of attention economies.
The European Union has attempted the most comprehensive regulatory response to date. The Digital Services Act (DSA), which came into full force in 2024, imposes significant obligations on Very Large Online Platforms (VLOPs) with over 45 million monthly EU users. The law forces platforms to be more transparent about how their algorithmic systems work and holds them accountable for societal risks stemming from their services. Non-compliant platforms face fines up to 6% of annual global revenue. During the second quarter of 2024, the Commission publicly confirmed that it had initiated formal proceedings against several major online platforms, requiring detailed documentation on content moderation systems, algorithmic recommender systems, and advertising transparency.
The EU AI Act adds additional requirements specific to AI-generated content. Under this legislation, certain providers must detect and disclose manipulated content, and very large platforms must identify and mitigate systemic risks associated with synthetic content. China has gone further still: as of September 2025, all AI-generated content, whether text, image, video, or audio, must be labelled either explicitly or implicitly, with obligations imposed across service providers, platforms, app distributors, and users.
In February 2025, the European Commission released a new best-practice election toolkit under the Digital Services Act. This toolkit provides guidance for regulators working with platforms to address risks including hate speech, online harassment, and manipulation of public opinion, specifically including those involving AI-generated content and impersonation.
These regulatory frameworks represent important advances in transparency and accountability. Whether they can fundamentally alter the competitive dynamics between inflammatory and mainstream content remains uncertain. The DSA and AI Act address disclosure and risk mitigation, but they do not directly challenge the engagement-optimisation model that underlies algorithmic amplification. Platforms may become more transparent about how their algorithms work whilst those algorithms continue to preferentially promote outrage-inducing material.
Some researchers have proposed more radical architectural interventions. In her 2024 book “Invisible Rulers,” Renee DiResta, formerly of the Stanford Internet Observatory and now at Georgetown University's McCourt School of Public Policy, argued for changes that would make algorithms “reward accuracy, civility, and other values” rather than engagement alone. The Center for Humane Technology, co-founded by former Google design ethicist Tristan Harris, has advocated for similar reforms, arguing that “AI is following the same dangerous playbook” as social media, with “companies racing to deploy AI systems optimised for engagement and market dominance, not human wellbeing.”
Yet implementing such changes confronts formidable obstacles. The attention economy model has proved extraordinarily profitable. In 2024, private AI investment in the United States far outstripped that in the European Union, raising concerns that stringent regulation might simply shift innovation elsewhere. The EU Parliament's own analysis acknowledged that “regulatory complexity could be stifling innovation.” Meanwhile, research institutions dedicated to studying these problems face their own challenges: the Stanford Internet Observatory, which pioneered research into platform manipulation, was effectively dismantled in 2024 following political pressure, with its founding director Alex Stamos and research director Renee DiResta both departing after sustained attacks from politicians who alleged their work amounted to censorship.
The Philosophical Challenge: Can Human-Centred Frameworks Govern Hybrid Media?
Beyond the technical and economic challenges lies a deeper philosophical problem. Our frameworks for regulating speech, including the human rights principles that undergird them, were developed for human expression. What happens when expression becomes “hybrid,” generated or augmented by machines, with fluid authorship and unclear provenance?
Research published in 2025 argued that “conventional human rights frameworks, particularly freedom of expression, are considered ill-equipped to govern increasingly hybrid media, where authorship and provenance are fluid, and emerging dilemmas hinge more on perceived value than rights violations.”
Consider the problem of synthetic personas. An AI can generate not just content but entire fake identities, complete with profile pictures, posting histories, and social connections. These synthetic personas can engage in discourse, build relationships with real humans, and gradually introduce radicalising content. From a traditional free speech perspective, we might ask: whose speech is this? The AI developer's? The user who prompted the generation? The corporation that hosts the platform? Each answer carries different implications for responsibility and remedy.
The provenance problem extends to detection. Even if we develop sophisticated tools to identify AI-generated content, what do we do with that information? Mandatory labelling, as China has implemented, assumes users will discount labelled content appropriately. But research on misinformation suggests that labels have limited effectiveness, particularly when content confirms existing beliefs. Moreover, as the Reuters Institute noted, “disclosure techniques such as visible and invisible watermarking, digital fingerprinting, labelling, and embedded metadata still need more refinement.” Malicious actors may circumvent these measures “by using jailbroken versions or creating their own non-compliant tools.”
There is also the question of whether gatekeeping mechanisms designed for human creativity can or should apply to machine-generated content. Copyright law, for instance, generally requires human authorship. Platform terms of service assume human users. Content moderation policies presuppose human judgment about context and intent. Each of these frameworks creaks under the weight of AI-generated content that mimics human expression without embodying human meaning.
The problem grows more acute when considering the speed at which these systems operate. Research from organisations like WITNESS has addressed how transparency in AI production can help mitigate confusion and lack of trust. However, the refinement of disclosure techniques remains ongoing, and the gap between what is technically possible and what is practically implemented continues to widen.
Emerging Architectures: Promise and Peril
Despite these challenges, researchers and technologists are exploring new approaches that might address the structural vulnerabilities of attention economies to AI-generated extremism.
One promising direction involves using large language models themselves for content moderation. Research published in Artificial Intelligence Review in 2025 explored how LLMs could revolutionise moderation economics. Once fine-tuned for the task, LLMs would be far less expensive to deploy than armies of human content reviewers. OpenAI has reported that using GPT-4 for content policy development and moderation enabled faster and more consistent policy iteration, reduced from months to hours, enhancing both accuracy and adaptability.
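OpenAI's public description of this approach is high-level, but the general shape of “policy-as-prompt” moderation is easy to sketch. In the sketch below the policy text and the call_llm helper are hypothetical stand-ins, not any vendor's actual API:
```python
# A minimal sketch of "policy-as-prompt" moderation with an LLM.
# `call_llm` is a placeholder for whatever chat-completion client a
# platform actually uses; the policy wording is purely illustrative.

POLICY = """You are a content-policy classifier.
Label the post as ALLOW, REVIEW, or REMOVE under this policy:
- REMOVE: praise or support for terrorist organisations, incitement to violence.
- REVIEW: graphic news reporting, ambiguous glorification, satire.
- ALLOW: everything else.
Reply with the label and a one-sentence rationale."""

def call_llm(system_prompt: str, user_text: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    raise NotImplementedError("wire up a provider's chat endpoint here")

def moderate(post: str) -> str:
    # Because the policy lives in the prompt rather than in model weights,
    # updating it is a text edit and redeploy - which is what makes
    # iteration in hours rather than months plausible.
    return call_llm(POLICY, post)
```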
Yet this approach carries its own risks. Using AI to moderate AI creates recursive dependencies and potential failure modes. As one research paper noted, the tools and strategies used for content moderation “weren't built for GenAI.” LLMs can hallucinate, reflect bias from training data, and generate harmful content “without warning, even when the prompt looks safe.”
Another architectural approach involves restructuring recommendation algorithms themselves. The Science study on algorithmic polarisation demonstrated that simply reranking content to reduce exposure to antidemocratic attitudes and partisan animosity measurably shifted users' political attitudes. This suggests that alternative ranking criteria, prioritising accuracy or viewpoint diversity over engagement, could mitigate polarisation effects. However, implementing such changes would require platforms to sacrifice engagement metrics that directly drive advertising revenue. The economic incentives remain misaligned with social welfare.
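As an illustration of what alternative ranking criteria mean in practice, the sketch below downranks items that a (hypothetical) classifier scores high for partisan animosity instead of sorting purely on predicted engagement; the field names and penalty weight are mine, not the study's:
```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    predicted_engagement: float  # what an engagement-optimised ranker maximises
    animosity_score: float       # 0..1 from a hypothetical animosity classifier

def engagement_rank(feed: list[Item]) -> list[Item]:
    # Status quo: rank purely on predicted engagement.
    return sorted(feed, key=lambda i: i.predicted_engagement, reverse=True)

def reranked(feed: list[Item], penalty: float = 2.0) -> list[Item]:
    # Same feed, but hostile or antidemocratic content is pushed down
    # rather than removed - the kind of real-time reranking the Science
    # field experiment applied on top of participants' existing feeds.
    return sorted(
        feed,
        key=lambda i: i.predicted_engagement - penalty * i.animosity_score,
        reverse=True,
    )
```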
Some researchers have proposed more fundamental interventions: breaking up large platforms, imposing algorithmic auditing requirements, creating public interest alternatives to commercial social media, or developing decentralised architectures that reduce the power of any single recommendation system. Each approach carries trade-offs and faces significant political and economic barriers.
Perhaps most intriguingly, some researchers have suggested using AI itself for counter-extremism. As one Hedayah research brief noted, “LLMs could impersonate an extremist and generate counter-narratives on forums, chatrooms, and social media platforms in a dynamic way, adjusting to content seen online in real-time. A model could inject enough uncertainty online to sow doubt among believers and overwhelm extremist channels with benign content.” The prospect of battling AI-generated extremism with AI-generated counter-extremism raises its own ethical questions, but it acknowledges the scale mismatch that human-only interventions cannot address.
The development of more advanced AI models continues apace. GPT-5, launched in August 2025, brings advanced reasoning capabilities in a multimodal interface. Its capabilities suggest a future moderation system capable of understanding context across formats with greater depth. Google's Gemini 2.5 family similarly combines speed, multimodal input handling, and advanced reasoning to tackle nuanced moderation scenarios in real time. Developers can customise content filters and system instructions for tailored moderation workflows. Yet the very capabilities that enable sophisticated moderation also enable sophisticated evasion.
The Attention Ecology and the Question of Cultural Baselines
The most profound concern may be the one hardest to address: the possibility that AI-generated extremism at scale could systematically shift cultural baselines over time. In an “attention ecology,” as researchers describe it, algorithms intervene in “the production, circulation, and legitimation of meaning by structuring knowledge hierarchies, ranking content, and determining visibility.”
If inflammatory content consistently outcompetes moderate content for algorithmic promotion, and if AI enables the production of inflammatory content at unlimited scale, then the information environment itself shifts toward extremism, not through any single piece of content but through the aggregate effect of millions of interactions optimised for engagement.
Research on information pollution describes this as a “congestion externality.” In a digital economy where human attention is the scarce constraint, an exponential increase in synthetic content alters the signal-to-noise ratio. As the cost of producing “plausible but mediocre” content vanishes, platforms face a flood of synthetic noise. The question becomes whether quality content, however defined, can maintain visibility against this tide.
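One way to make the congestion point concrete, using notation of my own rather than the paper's: if total attention is roughly fixed and content is encountered roughly in proportion to its volume, the share of attention that reaches non-synthetic content is approximately
```latex
% Q = volume of human-produced quality content (roughly fixed),
% S = volume of synthetic content (notation mine, for illustration).
\[
  \text{attention share of quality content} \;\approx\; \frac{Q}{Q + S}.
\]
% With the marginal cost of synthetic output near zero, S can grow
% essentially without bound, so Q/(Q+S) tends towards zero even if no
% individual synthetic item is especially persuasive.
```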
A 2020 Pew Research Center survey found that 64% of Americans believed social media had a mostly negative effect on the direction of the country. This perception preceded the current wave of AI-generated content. If attention economies were already struggling to balance engagement optimisation with social welfare, the introduction of AI-generated content at scale suggests those struggles will intensify.
The cultural baseline question connects to democratic governance in troubling ways. During the 2024 election year, researchers documented deepfake audio and video targeting politicians across multiple countries. In Taiwan, deepfake audio of a politician endorsing another candidate surfaced on YouTube. In the United Kingdom, fake clips targeted politicians across the political spectrum. In India, where over half a billion voters went to the polls, people were reportedly “bombarded with political deepfakes.” These instances represent early experiments with a technology whose capabilities expand rapidly.
Technical Feasibility and Political Will
Can interventions address these structural vulnerabilities? The technical answer is uncertain. Detection technologies continue to improve, but they face a fundamental asymmetry: defenders must identify all harmful content, whilst attackers need only evade detection some of the time. Watermarking and provenance systems show promise but can be circumvented by determined actors using open-source tools or jailbroken models.
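The asymmetry can be put in rough numbers (illustrative figures, not drawn from any cited study): if a detector catches each item independently with probability p, an attacker who generates n variants of the same message slips at least one past it with probability
```latex
\[
  P(\text{at least one variant evades detection}) \;=\; 1 - p^{\,n}.
\]
% Even a detector that catches 99% of items (p = 0.99) loses to volume:
% with n = 1{,}000 machine-generated variants, 1 - 0.99^{1000} exceeds
% 0.9999. Defenders must succeed every time; attackers need succeed once.
```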
The political answer is perhaps more concerning. The researchers and institutions best positioned to study these problems have faced sustained attacks. The Stanford Internet Observatory's effective closure in 2024 followed “lawsuits, subpoenas, document requests from right-wing politicians and non-profits that cost millions to defend, even when vindicated by the US Supreme Court in June 2024.” The lab will not conduct research into any future elections. This chilling effect on research occurs precisely when such research is most needed.
Meanwhile, the economic incentives of major platforms remain oriented toward engagement maximisation. The EU's regulatory interventions, however significant, operate at the margins of business models that reward attention capture above all else. The 2024 US presidential campaign occurred in an information environment shaped by algorithmic amplification of divisive content, with AI-generated material adding new dimensions of manipulation.
There is also the question of global coordination. Regulatory frameworks developed in the EU or US have limited reach in jurisdictions that host extremist content or provide AI tools to bad actors. The ISKP videos that opened this article were not produced in Brussels or Washington. Addressing AI-generated extremism requires international cooperation at a moment when geopolitical tensions make such cooperation difficult.
Internal documents from major platforms have occasionally offered glimpses of the scale of the problem. One revealed that 64% of users who joined extremist groups on Facebook did so “due to recommendation tools.” According to the Mozilla Foundation's “YouTube Regrets” report, roughly 12% of the videos users reported as regrettable were found to violate, or come close to violating, YouTube's own community guidelines. These figures predate the current wave of AI-generated content. The integration of generative AI into content ecosystems has only expanded the surface area for algorithmic radicalisation.
What Happens When Outrage is Free?
The fundamental question raised by AI-generated extremist content concerns the sustainability of attention economies as currently constructed. These systems were designed for an era when content production carried meaningful costs and human judgment imposed natural limits on the volume and extremity of available material. Neither condition obtains in an age of generative AI.
The structural vulnerabilities are not bugs to be patched but features of systems optimised for engagement in a competitive marketplace for attention. Algorithmic amplification of inflammatory content is the logical outcome of engagement optimisation. AI-generated extremism at scale is the logical outcome of near-zero marginal production costs. Traditional content moderation cannot address dynamics that emerge from the fundamental architecture of the systems themselves.
This does not mean the situation is hopeless. The research cited throughout this article points toward potential interventions: algorithmic reform, regulatory requirements for transparency and risk mitigation, AI-powered counter-narratives, architectural redesigns that prioritise different values. Each approach faces obstacles, but obstacles are not impossibilities.
What seems clear is that the current equilibrium is unstable. Attention economies that reward engagement above all else will increasingly be flooded with AI-generated content designed to exploit human psychological vulnerabilities. The competitive dynamics between inflammatory and mainstream content will continue to shift toward the former as production costs approach zero. Traditional gatekeeping mechanisms will continue to erode as detection fails to keep pace with generation.
The choices facing societies are not technical alone but political and philosophical. What values should govern information ecosystems? What responsibilities do platforms bear for the content their algorithms promote? What role should public institutions play in shaping attention markets? And perhaps most fundamentally: can liberal democracies sustain themselves in information environments systematically optimised for outrage?
These questions have no easy answers. But they demand attention, perhaps the scarcest resource of all.
References & Sources
Combating Terrorism Center at West Point, “Generating Terror: The Risks of Generative AI Exploitation,” CTC Sentinel, Volume 17, Issue 1, January 2024. https://ctc.westpoint.edu/generating-terror-the-risks-of-generative-ai-exploitation/
Frontiers in Political Science, “The role of artificial intelligence in radicalisation, recruitment and terrorist propaganda,” 2025. https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2025.1718396/full
Frontiers in Social Psychology, “Social media, AI, and the rise of extremism during intergroup conflict,” 2025. https://www.frontiersin.org/journals/social-psychology/articles/10.3389/frsps.2025.1711791/full
Science, “Reranking partisan animosity in algorithmic social media feeds alters affective polarization,” 2025. https://www.science.org/doi/10.1126/science.adu5584
GNET Research, “Automated Recruitment: Artificial Intelligence, ISKP, and Extremist Radicalisation,” April 2025. https://gnet-research.org/2025/04/11/automated-recruitment-artificial-intelligence-iskp-and-extremist-radicalisation/
GNET Research, “The Feed That Shapes Us: Extremism and Adolescence in the Age of Algorithms,” December 2025. https://gnet-research.org/2025/12/12/the-feed-that-shapes-us-extremism-and-adolescence-in-the-age-of-algorithms/
arXiv, “The Economics of Information Pollution in the Age of AI,” 2025. https://arxiv.org/html/2509.13729
arXiv, “Rewarding Engagement and Personalization in Popularity-Based Rankings Amplifies Extremism and Polarization,” 2025. https://arxiv.org/html/2510.24354v1
Georgetown Law, “The Attention Economy and the Collapse of Cognitive Autonomy,” Denny Center for Democratic Capitalism. https://www.law.georgetown.edu/denny-center/blog/the-attention-economy/
George Washington University Program on Extremism, “Artificial Intelligence and Radicalism: Risks and Opportunities.” https://extremism.gwu.edu/artificial-intelligence-and-radicalism-risks-and-opportunities
International Centre for Counter-Terrorism (ICCT), “The Radicalization (and Counter-radicalization) Potential of Artificial Intelligence.” https://icct.nl/publication/radicalization-and-counter-radicalization-potential-artificial-intelligence
Hedayah, “AI for Counter-Extremism Research Brief,” April 2025. https://hedayah.com/app/uploads/2025/06/Hedayah-Research-Brief-AI-for-Counter-Extremism-April-2025-Design-DRAFT-28.04.25-v2.pdf
Philosophy & Technology (PMC), “Moderating Synthetic Content: the Challenge of Generative AI,” 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11561028/
Taylor & Francis, “Platforms as architects of AI influence: rethinking moderation in the age of hybrid expression,” 2025. https://www.tandfonline.com/doi/full/10.1080/20414005.2025.2562681
Taylor & Francis, “The Ghost in the Machine: Counterterrorism in the Age of Artificial Intelligence,” 2025. https://www.tandfonline.com/doi/full/10.1080/1057610X.2025.2475850
European Commission, “The Digital Services Act,” Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act
AlgorithmWatch, “A guide to the Digital Services Act.” https://algorithmwatch.org/en/dsa-explained/
TechPolicy.Press, “The Digital Services Act Meets the AI Act: Bridging Platform and AI Governance.” https://www.techpolicy.press/the-digital-services-act-meets-the-ai-act-bridging-platform-and-ai-governance/
ISD Global, “Towards transparent recommender systems: Lessons from TikTok research ahead of the 2025 German federal election.” https://www.isdglobal.org/digital_dispatches/towards-transparent-recommender-systems-lessons-from-tiktok-research-ahead-of-the-2025-german-federal-election/
Reuters Institute for the Study of Journalism, “Spotting the deepfakes in this year of elections: how AI detection tools work and where they fail.” https://reutersinstitute.politics.ox.ac.uk/news/spotting-deepfakes-year-elections-how-ai-detection-tools-work-and-where-they-fail
U.S. GAO, “Science & Tech Spotlight: Combating Deepfakes,” 2024. https://www.gao.gov/products/gao-24-107292
World Economic Forum, “4 ways to future-proof against deepfakes in 2024 and beyond,” February 2024. https://www.weforum.org/stories/2024/02/4-ways-to-future-proof-against-deepfakes-in-2024-and-beyond/
Springer, “Content moderation by LLM: from accuracy to legitimacy,” Artificial Intelligence Review, 2025. https://link.springer.com/article/10.1007/s10462-025-11328-1
Meta Oversight Board, “Content Moderation in a New Era for AI and Automation.” https://www.oversightboard.com/news/content-moderation-in-a-new-era-for-ai-automation/
MDPI, “Trap of Social Media Algorithms: A Systematic Review of Research on Filter Bubbles, Echo Chambers, and Their Impact on Youth,” 2024. https://www.mdpi.com/2075-4698/15/11/301
Center for Humane Technology. https://www.humanetech.com/
NPR, “A major disinformation research team's future is uncertain after political attacks,” June 2024. https://www.npr.org/2024/06/14/g-s1-4570/a-major-disinformation-research-teams-future-is-uncertain-after-political-attacks
Platformer, “The Stanford Internet Observatory is being dismantled,” 2024. https://www.platformer.news/stanford-internet-observatory-shutdown-stamos-diresta-sio/
Grand View Research, “U.S. AI-Powered Content Creation Market Report, 2033.” https://www.grandviewresearch.com/industry-analysis/us-ai-powered-content-creation-market-report
Mozilla Foundation, “YouTube Regrets” Report. https://foundation.mozilla.org/
Pew Research Center, “Americans' Views of Technology Companies,” 2020. https://www.pewresearch.org/

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk