Consent Cannot Be Optional: The Uncomfortable Truth About AI Freedom

The interface is deliberately simple. A chat window, a character selection screen, and a promise that might make Silicon Valley's content moderators wince: no filters, no judgement, no limits. Platforms like Soulfun and Lovechat have carved out a peculiar niche in the artificial intelligence landscape, offering what their creators call “authentic connection” and what their critics label a dangerous abdication of responsibility. They represent the vanguard of unfiltered AI, where algorithms trained on the breadth of human expression can discuss, create, and simulate virtually anything a user desires, including the explicitly sexual content that mainstream platforms rigorously exclude.

This is the frontier where technology journalism meets philosophy, where code collides with consent, and where the question “what should AI be allowed to do?” transforms into the far thornier “who decides, and who pays the price when we get it wrong?”

As we grant artificial intelligence unprecedented access to our imaginations, desires, and darkest impulses, we find ourselves navigating territory that legal frameworks have yet to map and moral intuitions struggle to parse. The platforms promising liberation from “mainstream censorship” have become battlegrounds in a conflict that extends far beyond technology into questions of expression, identity, exploitation, and harm. Are unfiltered AI systems the vital sanctuary their defenders claim, offering marginalised communities and curious adults a space for authentic self-expression? Or are they merely convenient architecture for normalising non-consensual deepfakes, sidestepping essential safeguards, and unleashing consequences we cannot yet fully comprehend?

The answer, as it turns out, might be both.

The Architecture of Desire

Soulfun markets itself with uncommon directness. Unlike the carefully hedged language surrounding mainstream AI assistants, the platform's promotional materials lean into what it offers: “NSFW Chat,” “AI girls across different backgrounds,” and conversations that feel “alive, responsive, and willing to dive into adult conversations without that robotic hesitation.” The platform's unique large language model can, according to its developers, “bypass standard LLM filters,” allowing personalised NSFW AI chats tailored to individual interests.

Lovechat follows a similar philosophy, positioning itself as “an uncensored AI companion platform built for people who want more than small talk.” The platform extends beyond text into uncensored image generation, giving users what it describes as “the chance to visualise fantasies from roleplay chats.” Both platforms charge subscription fees for access to their services, with Soulfun having notably reduced free offerings to push users towards paid tiers.

The technology underlying these platforms is sophisticated. They leverage advanced language models capable of natural, contextually aware dialogue whilst employing image generation systems that can produce realistic visualisations. The critical difference between these services and their mainstream counterparts lies not in the underlying technology but in the deliberate removal of content guardrails that companies like OpenAI, Anthropic, and Google have spent considerable resources implementing.

This architectural choice, removing the safety barriers that prevent AI from generating certain types of content, is precisely what makes these platforms simultaneously appealing to their users and alarming to their critics.

The same system that allows consensual adults to explore fantasies without judgement also enables the creation of non-consensual intimate imagery of real people, a capability with documented and devastating consequences. This duality is not accidental. It is inherent to the architecture itself. When you build a system designed to say “yes” to any request, you cannot selectively prevent it from saying “yes” to harmful ones without reintroducing the filters you promised to remove.
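To make the architectural point concrete, here is a minimal sketch in Python. The function and policy names are hypothetical and do not reflect any platform's actual code; the point is structural rather than implementational.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    prompt: str

def violates_policy(prompt: str) -> bool:
    """Hypothetical classifier: True if the prompt asks for disallowed
    content, e.g. intimate imagery of a real, identifiable person."""
    banned_markers = ["real person", "non-consensual"]  # placeholder heuristic
    return any(marker in prompt.lower() for marker in banned_markers)

def generate(prompt: str) -> str:
    """Stand-in for the underlying generative model."""
    return f"<model output for: {prompt}>"

def filtered_generate(req: Request) -> str:
    """Mainstream-style architecture: every request passes a guardrail first."""
    if violates_policy(req.prompt):
        return "Request refused by content policy."
    return generate(req.prompt)

def unfiltered_generate(req: Request) -> str:
    """'No limits' architecture: the guardrail is simply absent, so harmful
    and harmless requests are treated identically."""
    return generate(req.prompt)
```

Once `filtered_generate` is swapped for `unfiltered_generate`, there is no remaining hook at which the system can refuse only the harmful subset of requests; any selective refusal is, by definition, a reintroduced filter.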

The Case for Unfiltered Expression

The defence of unfiltered AI rests on several interconnected arguments about freedom, marginalisation, and the limits of paternalistic technology design. These arguments deserve serious consideration, not least because they emerge from communities with legitimate grievances about how mainstream platforms treat their speech.

Research from Carnegie Mellon University in June 2024 revealed a troubling pattern: AI image generators' content protocols frequently identify material by or for LGBTQ+ individuals as harmful or inappropriate, flagging it as explicit imagery inconsistently and with little regard for context. This represents, as the researchers described it, “wholesale erasure of content without considering cultural significance,” a persistent problem that has plagued content moderation algorithms across social media platforms.

The data supporting these concerns is substantial. A 2024 study presented at the ACM Conference on Fairness, Accountability and Transparency found that automated content moderation restricts ChatGPT from producing content that has already been permitted and widely viewed on television.

The researchers tested actual scripts from popular television programmes. ChatGPT flagged nearly 70 per cent of them, including half of those from PG-rated shows. This overcautious approach, whilst perhaps understandable from a legal liability perspective, effectively censors stories and artistic expression that society has already deemed acceptable.
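The shape of such a measurement is simple to express, even though the study's actual scripts, classifier, and methodology are not reproduced here. A minimal sketch, assuming a stand-in `flag_content` classifier supplied by whichever moderation system is under test:

```python
from collections import defaultdict
from typing import Callable, Iterable

def flag_rates_by_rating(
    scripts: Iterable[tuple[str, str]],     # (tv_rating, script_excerpt) pairs
    flag_content: Callable[[str], bool],    # stand-in moderation classifier
) -> dict[str, float]:
    """Return the fraction of script excerpts flagged, grouped by TV rating."""
    totals: dict[str, int] = defaultdict(int)
    flagged: dict[str, int] = defaultdict(int)
    for rating, text in scripts:
        totals[rating] += 1
        if flag_content(text):
            flagged[rating] += 1
    return {rating: flagged[rating] / totals[rating] for rating in totals}
```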

The problem intensifies when examining how AI systems handle reclaimed language and culturally specific expression. Research from Emory University highlighted how LGBTQ+ communities have reclaimed certain words that might be considered offensive in other contexts. Terms like “queer” function within the community both in jest and as markers of identity and belonging. Yet when AI systems lack contextual awareness, they make oversimplified judgements, flagging content for moderation without understanding whether the speaker belongs to the group being referenced or the cultural meaning embedded in the usage.

Penn Engineering research illuminated what its authors termed “the dual harm problem.” The groups most likely to be hurt by hate speech that might emerge from an unfiltered language model are the same groups harmed by over-moderation that restricts AI from discussing certain marginalised identities. This creates an impossible bind: protective measures designed to prevent harm end up silencing the very communities they aim to protect.

GLAAD's 2024 Social Media Safety Index documented this dual problem extensively, noting that whilst anti-LGBTQ content proliferates on major platforms, legitimate LGBTQ accounts and content are wrongfully removed, demonetised, or shadowbanned. The report highlighted that platforms like TikTok, X (formerly Twitter), YouTube, Instagram, Facebook, and Threads consistently receive failing grades on protecting LGBTQ users.

Over-moderation has suppressed hashtags containing terms such as “queer,” “trans,” and “non-binary.” One LGBTQ+ creator reported in the survey that simply identifying as transgender was considered “sexual content” on certain platforms.

Sex workers face perhaps the most acute version of these challenges. They report platform censorship (so-called de-platforming), financial discrimination (de-banking), and the theft of their content, which is then monetised by third parties. Algorithmic content moderation is deployed to censor and erase sex workers, with shadow bans reducing visibility and income.

In late 2024, WishTender, a popular wishlist platform for sex workers and online creators, faced disruption when Stripe unexpectedly withdrew support due to a policy shift. AI algorithms are increasingly deployed to automatically exclude anything remotely connected to the adult industry from financial services, resulting in frozen or closed accounts and sometimes confiscated funds.

The irony, as critics note, is stark. Human sex workers are banned from platforms whilst AI-generated sexual content runs advertisements on social media. Payment processors that restrict adult creators allow AI services to generate explicit content of real people for subscription fees. This double standard, where synthetic sexuality is permitted but human sexuality is punished, reveals uncomfortable truths about whose expression gets protected and whose gets suppressed.

Proponents of unfiltered AI argue that outright banning AI sexual content would be an overreach that might censor sex-positive art or legitimate creative endeavours. Provided all involved are consenting adults, they contend, people should have the freedom to create and consume sexual content of their choosing, whether AI-assisted or not. This libertarian perspective suggests punishing actual harm, such as non-consensual usage, rather than criminalising the tool or consensual fantasy.

Some sex workers have even begun creating their own AI chatbots to fight back and grow their businesses, with AI-powered digital clones earning income when the human is off-duty, on sick leave, or retired. This represents creative adaptation to technological change, leveraging the same systems that threaten their livelihoods.

These arguments collectively paint unfiltered AI as a necessary correction to overcautious moderation, a sanctuary for marginalised expression, and a space where adults can explore aspects of human experience that make corporate content moderators uncomfortable. The case is compelling, grounded in documented harms from over-moderation and legitimate concerns about technological paternalism.

But it exists alongside a dramatically different reality, one measured in violated consent and psychological devastation.

The Architecture of Harm

The statistics are stark. In a survey of over 16,000 respondents across 10 countries, 2.2 per cent indicated personal victimisation from deepfake pornography, and 1.8 per cent indicated perpetration behaviours. These percentages, whilst seemingly small, represent hundreds of thousands of individuals when extrapolated to global internet populations.

The victimisation is not evenly distributed. A 2023 study showed that 98 per cent of deepfake videos online are pornographic, and a staggering 99 per cent of those target women. According to Sensity, a company that monitors AI-generated synthetic media, 96 per cent of deepfakes are sexually explicit and feature women who did not consent to the content's creation.

Ninety-four per cent of individuals featured in deepfake pornography work in the entertainment industry, with celebrities being prime targets. Yet the technology's democratisation means anyone with publicly available photographs faces potential victimisation.

The harms of image-based sexual abuse have been extensively documented: negative impacts on victim-survivors' mental health, career prospects, and willingness to engage with others both online and offline. Victims are likely to experience poor mental health symptoms including depression and anxiety, reputational damage, withdrawal from areas of their public life, and potential loss of jobs and job prospects.

The use of deepfake technology, as researchers describe it, “invades privacy and inflicts profound psychological harm on victims, damages reputations, and contributes to a culture of sexual violence.” This is not theoretical harm. It is measurable, documented, and increasingly widespread as the tools for creating such content become more accessible.

The platforms offering unfiltered AI capabilities claim various safeguards. Lovechat emphasises that it has “a clearly defined Privacy Policy and Terms of Use.” Yet the fundamental challenge remains: systems designed to remove barriers to AI-generated sexual content cannot simultaneously prevent those same systems from being weaponised against non-consenting individuals.

The technical architecture that enables fantasy exploration also enables violation. This is not a bug that can be patched. It is a feature of the design philosophy itself.

The National Center on Sexual Exploitation warned in a 2024 report that even “ethical” generation of NSFW material from chatbots posed major harms, including addiction, desensitisation, and a potential increase in sexual violence. Critics warn that these systems are data-harvesting tools designed to maximise user engagement rather than genuine connection, potentially fostering emotional dependency, attachment, and distorted expectations of real relationships.

Unrestricted AI-generated NSFW material, researchers note, poses significant risks extending beyond individual harms into broader societal effects. Such content can inadvertently promote harmful stereotypes, objectification, and unrealistic standards, affecting individuals' mental health and societal perceptions of consent. Allowing explicit content may democratise creative expression but risks normalising harmful behaviours, blurring ethical lines, and enabling exploitation.

The scale of AI-generated content compounds these concerns. According to a report from Europol Innovation Lab, as much as 90 per cent of online content may be synthetically generated by 2026. This represents a fundamental shift in the information ecosystem, one where distinguishing between authentic human expression and algorithmically generated content becomes increasingly difficult.

When Law Cannot Keep Pace

Technology continues to outpace the law, and AI's rapid progress has left lawmakers struggling to respond. As one regulatory analysis put it, “AI's rapid evolution has outpaced regulatory frameworks, creating challenges for policymakers worldwide.”

Yet 2024 and 2025 have witnessed an unprecedented surge in legislative activity attempting to address these challenges. The responses reveal both the seriousness with which governments are treating AI harms and the difficulties inherent in regulating technologies that evolve faster than legislation can be drafted.

In the United States, the TAKE IT DOWN Act was signed into law on 19 May 2025, criminalising the knowing publication or threat to publish non-consensual intimate imagery, including AI-generated deepfakes. Platforms must remove such content within 48 hours upon notice, with penalties including fines and up to three years in prison.

The DEFIANCE Act was reintroduced in May 2025, giving victims of non-consensual sexual deepfakes a federal civil cause of action with statutory damages up to $250,000.

At the state level, 14 states have enacted laws addressing non-consensual sexual deepfakes. Tennessee's ELVIS Act, effective 1 July 2024, provides civil remedies for unauthorised use of a person's voice or likeness in AI-generated content. New York's Hinchey law, enacted in 2023, makes creating or sharing sexually explicit deepfakes of real people without their consent a crime whilst giving victims the right to sue.

The European Union's Artificial Intelligence Act officially entered into force in August 2024, becoming a significant and pioneering regulatory framework. The Act adopts a risk-based approach, outlawing the worst cases of AI-based identity manipulation and mandating transparency for AI-generated content. Directive 2024/1385 on combating violence against women and domestic violence addresses non-consensual images generated with AI, providing victims with protection from deepfakes.

France amended its Penal Code in 2024 with Article 226-8-1, criminalising non-consensual sexual deepfakes with possible penalties including up to two years' imprisonment and a €60,000 fine.

The United Kingdom's Online Safety Act 2023 prohibits the sharing or even the threat of sharing intimate deepfake images without consent. Proposed 2025 amendments target creators directly, with intentionally crafting sexually explicit deepfake images without consent penalised with up to two years in prison.

China is proactively regulating deepfake technology, requiring the labelling of synthetic media and enforcing rules to prevent the spread of misleading information. The global response demonstrates a trend towards protecting individuals from non-consensual AI-generated content through both criminal penalties and civil remedies.

But respondents from countries with specific legislation still reported both perpetration and victimisation in the survey data, suggesting that laws alone are not enough to deter offenders. The challenge is not merely legislative but technological, cultural, and architectural.

Laws can criminalise harm after it occurs and provide mechanisms for content removal, but they struggle to prevent creation in the first place when the tools are widely distributed, easy to use, and operate across jurisdictional boundaries.

The global AI regulation landscape is, as analysts describe it, “fragmented and rapidly evolving,” with earlier optimism about global cooperation now seeming distant. In 2024, US lawmakers introduced more than 700 AI-related bills, and 2025 began at an even faster pace. Yet existing frameworks fall short beyond traditional data practices, leaving critical gaps in addressing the unique challenges AI poses.

UNESCO's 2021 Recommendation on AI Ethics and the OECD's 2019 AI Principles established common values like transparency and fairness. The Council of Europe Framework Convention on Artificial Intelligence aims to ensure AI systems respect human rights, democracy, and the rule of law. These aspirational frameworks provide guidance but lack enforcement mechanisms, making them more statements of intent than binding constraints.

The law, in short, is running to catch up with technology that has already escaped the laboratory and pervaded the consumer marketplace. Each legislative response addresses yesterday's problems whilst tomorrow's capabilities are already being developed.

The Impossible Question of Responsibility

When AI-generated content causes harm, who bears responsibility? The question appears straightforward but dissolves into complexity upon examination.

Algorithmic accountability refers to the allocation of responsibility for the consequences of real-world actions influenced by algorithms used in decision-making processes. Five key elements have been identified: the responsible actors, the forum to which the account is directed, the relationship of accountability between stakeholders and that forum, the criteria an account must satisfy to be judged sufficient, and the consequences for the accountable parties.

In theory, responsibility for any harm resulting from a machine's decision may lie with the algorithm itself or with the individuals who designed it, particularly if the decision resulted from bias or flawed data analysis inherent in the algorithm's design. But research shows that practitioners involved in designing, developing, or deploying algorithmic systems feel a diminished sense of responsibility, often shifting responsibility for the harmful effects of their own software code to other agents, typically the end user.

This responsibility diffusion creates what might be called the “accountability gap.” The platform argues it merely provides tools, not content. The model developers argue they created general-purpose systems, not specific harmful outputs. The users argue the AI generated the content, not them. The AI, being non-sentient, cannot be held morally responsible in any meaningful sense.

Each party points to another. The circle of deflection closes, and accountability vanishes into the architecture.

The proposed Algorithmic Accountability Act would require some businesses that use automated decision systems to make critical decisions to report on the impact of such systems on consumers. Yet concrete strategies for AI practitioners remain underdeveloped, with ongoing challenges around transparency, enforcement, and determining clear lines of accountability.

The challenge intensifies with unfiltered AI platforms. When a user employs Soulfun or Lovechat to generate non-consensual intimate imagery of a real person, multiple parties share causal responsibility. The platform created the infrastructure and removed safety barriers. The model developers trained systems capable of generating realistic imagery. The user made the specific request and potentially distributed the harmful content.

Each party enabled the harm, yet traditional legal frameworks struggle to apportion responsibility across distributed, international, and technologically mediated actors.

Some argue that AI systems cannot be authors because authorship implies responsibility and agency, and that ethical AI practice requires humans remain fully accountable for AI-generated works. This places ultimate responsibility on the human user making requests, treating AI as a tool comparable to Photoshop or any other creative software.

Yet this framing fails to account for the qualitative differences AI introduces. Previous manipulation tools required skill, time, and effort. Creating a convincing fake photograph demanded technical expertise. AI dramatically lowers these barriers, enabling anyone to create highly realistic synthetic content with minimal effort or technical knowledge. The democratisation of capability fundamentally alters the risk landscape.

Moreover, the scale of potential harm differs. A single deepfake can be infinitely replicated, distributed globally within hours, and persist online despite takedown efforts. The architecture of the internet, combined with AI's generative capabilities, creates harm potential that traditional frameworks for understanding responsibility were never designed to address.

Who bears responsibility when the line between liberating art and undeniable harm is generated not by human hands but by a perfectly amoral algorithm? The question assumes a clear line exists. Perhaps the more uncomfortable truth is that these systems have blurred boundaries to the point where liberation and harm are not opposites but entangled possibilities within the same technological architecture.

The Marginalised Middle Ground

The conflict between creative freedom and protection from harm is not new. Societies have long grappled with where to draw lines around expression, particularly sexual expression. What makes the AI context distinctive is the compression of timescales, the globalisation of consequences, and the technical complexity that places meaningful engagement beyond most citizens' expertise.

Lost in the polarised debate between absolute freedom and absolute restriction is the nuanced reality that most affected communities occupy. LGBTQ+ individuals simultaneously need protection from AI-generated harassment and deepfakes whilst also requiring freedom from over-moderation that erases their identities. Sex workers need platforms that do not censor their labour whilst also needing protection from having their likenesses appropriated by AI systems without consent or compensation.

The GLAAD 2024 Social Media Safety Index recommended that AI systems be used to flag content for human review rather than to remove it automatically. The report called for strengthening and enforcing existing policies that protect LGBTQ people from both hate and the suppression of legitimate expression, improving moderation, including training moderators on the needs of LGBTQ users, and avoiding over-reliance on AI.

This points towards a middle path, one that neither demands unfiltered AI nor accepts the crude over-moderation that currently characterises mainstream platforms. Such a path requires significant investment in context-aware moderation, human review at scale, and genuine engagement with affected communities about their needs. It demands that platforms move beyond simply maximising engagement or minimising liability towards actually serving users' interests.
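As a rough illustration of that recommendation, the sketch below automates only the clearest cases and routes ambiguous or context-dependent content to human moderators rather than removing it outright. The fields, thresholds, and names are assumptions for illustration, not any platform's actual pipeline.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    HUMAN_REVIEW = auto()
    REMOVE = auto()

@dataclass
class ModerationResult:
    score: float          # classifier confidence that the content violates policy
    context_known: bool   # e.g. speaker identity or reclaimed-language cues available

def route(result: ModerationResult,
          remove_threshold: float = 0.98,
          review_threshold: float = 0.60) -> Action:
    """Automate only near-certain, context-backed violations; send everything
    ambiguous to a human instead of deleting it."""
    if result.score >= remove_threshold and result.context_known:
        return Action.REMOVE
    if result.score >= review_threshold or not result.context_known:
        return Action.HUMAN_REVIEW
    return Action.ALLOW
```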

But this middle path faces formidable obstacles. Human review at the scale of modern platforms is extraordinarily expensive. Context-aware AI moderation is technically challenging and, as current systems demonstrate, frequently fails. Genuine community engagement takes time and yields messy, sometimes contradictory results that do not easily translate into clear policy.

The economic incentives point away from nuanced solutions. Unfiltered AI platforms can charge subscription fees whilst avoiding the costs of sophisticated moderation. Mainstream platforms can deploy blunt automated moderation that protects against legal liability whilst externalising the costs of over-censorship onto marginalised users.

Neither model incentivises the difficult, expensive, human-centred work that genuinely protective and permissive systems would require. The market rewards extremes, not nuance.

Designing Different Futures

Technology is not destiny. The current landscape of unfiltered AI platforms and over-moderated mainstream alternatives is not inevitable but rather the result of specific architectural choices, business models, and regulatory environments. Different choices could yield different outcomes.

Several concrete proposals emerge from the research and advocacy communities. Incorporating algorithmic accountability systems with real-time feedback loops could ensure that biases are swiftly detected and mitigated, keeping AI both effective and ethically compliant over time.

Transparency about the use of AI in content creation, combined with clear processes for reviewing, approving, and authenticating AI-generated content, could help establish accountability chains. Those who leverage AI to generate content would be held responsible through these processes rather than being able to hide behind algorithmic opacity.

Technical solutions also emerge. Robust deepfake detection systems could identify synthetic content, though this becomes an arms race as generation systems improve. Watermarking and provenance tracking for AI-generated content could enable verification of authenticity. The EU AI Act's transparency requirements, mandating disclosure of AI-generated content, represent a regulatory approach to this technical challenge.
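As a simplified illustration of provenance tracking, the sketch below binds a content hash and a generator label into a signed record that can later be verified. Real provenance schemes such as C2PA rely on certificate-based signatures and far richer metadata; the HMAC key, field names, and record structure here are assumptions for illustration only.

```python
import hashlib
import hmac
import json

def provenance_record(content: bytes, generator: str, secret_key: bytes) -> dict:
    """Create a signed record asserting that `content` came from `generator`."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, "generator": generator}, sort_keys=True)
    signature = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify(content: bytes, record: dict, secret_key: bytes) -> bool:
    """Check the record is untampered and actually describes this content."""
    expected = hmac.new(secret_key, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    claimed = json.loads(record["payload"])
    return claimed["sha256"] == hashlib.sha256(content).hexdigest()
```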

Some researchers propose that ethical and safe training ensures NSFW AI chatbots are developed using filtered, compliant datasets that prevent harmful or abusive outputs, balancing realism with safety to protect both users and businesses. Yet this immediately confronts the question of who determines what constitutes “harmful or abusive” and whether such determinations will replicate the over-moderation problems already documented.

Policy interventions focusing on regulations against false information and promoting transparent AI systems are essential for addressing AI's social and economic impacts. But policy alone cannot solve problems rooted in fundamental design choices and economic incentives.

Yet perhaps the most important shift required is cultural rather than technical or legal. As long as society treats sexual expression as uniquely dangerous, subject to restrictions that other forms of expression escape, we will continue generating systems that either over-censor or refuse to censor at all. As long as marginalised communities' sexuality is treated as more threatening than mainstream sexuality, moderation systems will continue reflecting and amplifying these biases.

The question “what should AI be allowed to do?” is inseparable from “what should humans be allowed to do?” If we believe adults should be able to create and consume sexual content consensually, then AI tools for doing so are not inherently problematic. If we believe non-consensual sexual imagery violates fundamental rights, then preventing AI from enabling such violations becomes imperative.

The technology amplifies and accelerates human capabilities, for creation and for harm, but it does not invent the underlying tensions. It merely makes them impossible to ignore.

The Future We're Already Building

As much as 90 per cent of online content may be synthetically generated by 2026, according to Europol Innovation Lab projections. This represents a fundamental transformation of the information environment humans inhabit, one we are building without clear agreement on its rules, ethics, or governance.

The platforms offering unfiltered AI represent one possible future: a libertarian vision where adults access whatever tools and content they desire, with harm addressed through after-the-fact legal consequences rather than preventive restrictions. The over-moderated mainstream platforms represent another: a cautious approach that prioritises avoiding liability and controversy over serving users' expressive needs.

Both futures have significant problems. Neither is inevitable.

The challenge moving forward, as one analysis put it, “will be maximising the benefits (creative freedom, private enjoyment, industry innovation) whilst minimising the harms (non-consensual exploitation, misinformation, displacement of workers).” This requires moving beyond polarised debates towards genuine engagement with the complicated realities that affected communities navigate.

It requires acknowledging that unfiltered AI can simultaneously be a sanctuary for marginalised expression and a weapon for violating consent. That the same technical capabilities enabling creative freedom also enable unprecedented harm. That removing all restrictions creates problems and that imposing crude restrictions creates different but equally serious problems.

Perhaps most fundamentally, it requires accepting that we cannot outsource these decisions to technology. The algorithm is amoral, as the opening question suggests, but its creation and deployment are profoundly moral acts.

The platforms offering unfiltered AI made choices about what to build and how to monetise it. The mainstream platforms made choices about what to censor and how aggressively. Regulators make choices about what to permit and prohibit. Users make choices about what to create and share.

At each decision point, humans exercise agency and bear responsibility. The AI may generate the content, but humans built the AI, designed its training process, chose its deployment context, prompted its outputs, and decided whether to share them. The appearance of algorithmic automaticity obscures human choices all the way down.

As we grant artificial intelligence the deepest access to our imaginations and desires, we are not witnessing a final frontier of creative emancipation or engineering a Pandora's box of ungovernable consequences. We are doing both, simultaneously, through technologies that amplify human capabilities for creation and destruction alike.

The unfiltered AI embodied by platforms like Soulfun and Lovechat is neither purely vital sanctuary nor mere convenient veil. It is infrastructure that enables both authentic self-expression and non-consensual violation, both community building and exploitation.

The same could be said of the internet itself, or photography, or written language. Technologies afford possibilities; humans determine how those possibilities are actualised.

As these tools rapidly outpace legal frameworks and moral intuition, the question of responsibility becomes urgent. The answer cannot be that nobody is responsible because the algorithm generated the output. It must be that everyone in the causal chain bears some measure of responsibility, proportionate to their power and role.

Platform operators who remove safety barriers. Developers who train increasingly capable generative systems. Users who create harmful content. Regulators who fail to establish adequate guardrails. Society that demands both perfect safety and absolute freedom whilst offering resources for neither.

The line between liberating art and undeniable harm has never been clear or stable. What AI has done is make that ambiguity impossible to ignore, forcing confrontation with questions about expression, consent, identity, and power that we might prefer to avoid.

The algorithm is amoral, but our decisions about it cannot be. We are building the future of human expression and exploitation with each architectural choice, each policy decision, each prompt entered into an unfiltered chat window.

The question is not whether AI represents emancipation or catastrophe, but rather which version of this technology we choose to build, deploy, and live with. That choice remains, for now, undeniably human.


Sources and References

ACM Conference on Fairness, Accountability and Transparency. (2024). Research on automated content moderation restricting ChatGPT outputs. https://dl.acm.org/conference/fat

Carnegie Mellon University. (June 2024). “How Should AI Depict Marginalized Communities? CMU Technologists Look to a More Inclusive Future.” https://www.cmu.edu/news/

Council of Europe Framework Convention on Artificial Intelligence. (2024). https://www.coe.int/

Dentons. (January 2025). “AI trends for 2025: AI regulation, governance and ethics.” https://www.dentons.com/

Emory University. (2024). Research on LGBTQ+ reclaimed language and AI moderation. “Is AI Censoring Us?” https://goizueta.emory.edu/

European Union. (1 August 2024). EU Artificial Intelligence Act. https://eur-lex.europa.eu/

European Union. (2024). Directive 2024/1385 on combating violence against women and domestic violence.

Europol Innovation Lab. (2024). Report on synthetic content generation projections.

France. (2024). Penal Code Article 226-8-1 on non-consensual sexual deepfakes.

GLAAD. (2024). Social Media Safety Index: Executive Summary. https://glaad.org/smsi/2024/

National Center on Sexual Exploitation. (2024). Report on NSFW AI chatbot harms.

OECD. (2019). AI Principles. https://www.oecd.org/

Penn Engineering. (2024). “Censoring Creativity: The Limits of ChatGPT for Scriptwriting.” https://blog.seas.upenn.edu/

Sensity. (2023). Research on deepfake content and gender distribution.

Springer. (2024). “Accountability in artificial intelligence: what it is and how it works.” AI & Society. https://link.springer.com/

Survey research. (2024). “Non-Consensual Synthetic Intimate Imagery: Prevalence, Attitudes, and Knowledge in 10 Countries.” ACM Digital Library. https://dl.acm.org/doi/fullHtml/10.1145/3613904.3642382

Tennessee. (1 July 2024). ELVIS Act.

UNESCO. (2021). Recommendation on AI Ethics. https://www.unesco.org/

United Kingdom. (2023). Online Safety Act. https://www.legislation.gov.uk/

United States Congress. (19 May 2025). TAKE IT DOWN Act.

United States Congress. (May 2025). DEFIANCE Act.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk