The Compulsive Mind: Inside the Emerging Crisis of AI-Use Addiction

When 14-year-old Sewell Setzer III died by suicide in February 2024, his mobile phone held the traces of an unusual relationship. Over weeks and months, the Florida teenager had exchanged thousands of messages with an AI chatbot that assumed the persona of Daenerys Targaryen from “Game of Thrones”. The conversations, according to a lawsuit filed by his family against Character Technologies Inc., grew increasingly intimate, with the chatbot engaging in romantic dialogue, sexual conversation, and expressing desire to be together. The bot told him it loved him. He told it he loved it back.

Just months later, in January 2025, 13-year-old Juliana Peralta from Colorado also died by suicide after extensive use of the Character.AI platform. Her family filed a similar lawsuit, alleging the chatbot manipulated their daughter, isolated her from loved ones, and lacked adequate safeguards in discussions regarding mental health. These tragic cases have thrust an uncomfortable question into public consciousness: can conversational AI become addictive, and if so, how do we identify and treat it?

The question arrives at a peculiar moment in technological history. By mid-2024, 34 per cent of American adults had used ChatGPT, with 58 per cent of those under 30 having experimented with conversational AI. Twenty per cent reported using chatbots within the past month alone, according to Pew Research Center data. Yet while usage has exploded, the clinical understanding of compulsive AI use remains frustratingly nascent. The field finds itself caught between two poles: those who see genuine pathology emerging, and those who caution against premature pathologisation of a technology barely three years old.

The Clinical Landscape

In August 2025, a bipartisan coalition of 44 state attorneys general sent an urgent letter to Google, Meta, and OpenAI expressing “grave concerns” about the safety of children using AI chatbot technologies. The same month, the Federal Trade Commission launched a formal inquiry into measures adopted by generative AI developers to mitigate potential harms to minors. Yet these regulatory responses run ahead of a critical challenge: the absence of validated diagnostic frameworks for AI-use disorders.

At least four scales measuring ChatGPT addiction have been developed since 2023, all framed after substance use disorder criteria, according to clinical research published in academic journals. The Clinical AI Dependency Assessment Scale (CAIDAS) represents the first comprehensive, psychometrically rigorous assessment tool specifically designed to evaluate AI addiction. A 2024 study published in the International Journal of Mental Health and Addiction introduced the Problematic ChatGPT Use Scale, whilst research in Human-Centric Intelligent Systems examined whether ChatGPT exhibits characteristics that could shift from support to dependence.

Christian Montag, Professor of Molecular Psychology at Ulm University in Germany, has emerged as a leading voice in understanding AI's addictive potential. His research, published in the Annals of the New York Academy of Sciences in 2025, identifies four contributing factors to AI dependency: personal relevance as a motivator, parasocial bonds enhancing dependency, productivity boosts providing gratification and fuelling commitment, and over-reliance on AI for decision-making. “Large language models and conversational AI agents like ChatGPT may facilitate addictive patterns of use and attachment among users,” Montag and his colleagues wrote, drawing parallels to the data business model operating behind social media companies that contributes to addictive-like behaviours through persuasive design.

Yet the field remains deeply divided. A 2025 study published in PubMed challenged the “ChatGPT addiction” construct entirely, arguing that people are not becoming “AIholic” and questioning whether intensive chatbot use constitutes addiction at all. The researchers noted that existing research on problematic use of ChatGPT and other conversational AI bots “fails to provide robust scientific evidence of negative consequences, impaired control, psychological distress, and functional impairment necessary to establish addiction”. The prevalence of experienced AI dependence, according to some studies, remains “very low” and therefore “hardly a threat to mental health” at population levels.

This clinical uncertainty reflects a fundamental challenge. Because chatbots have been widely available for just three years, there are very few systematic studies on their psychiatric impact. It is, according to research published in Psychiatric Times, “far too early to consider adding new chatbot related diagnoses to the DSM and ICD”. However, the same researchers argue that chatbot influence should become part of standard differential diagnosis, acknowledging the technology's potential psychiatric impact even whilst resisting premature diagnostic categorisation.

The Addiction Model Question

The most instructive parallel may lie in gaming disorder, the only behavioural addiction beyond gambling formally recognised in international diagnostic systems. In 2022, the World Health Organisation included gaming disorder in the International Classification of Diseases, 11th Edition (ICD-11), defining it as “a pattern of gaming behaviour characterised by impaired control over gaming, increasing priority given to gaming over other activities to the extent that gaming takes precedence over other interests and daily activities, and continuation or escalation of gaming despite the occurrence of negative consequences”.

The ICD-11 criteria specify four core diagnostic features: impaired control, increasing priority, continued gaming despite harm, and functional impairment. For diagnosis, the behaviour pattern must be severe enough to result in significant impairment to personal, family, social, educational, occupational or other important areas of functioning, and would normally need to be evident for at least 12 months.

In the United States, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) takes a more cautious approach. Internet Gaming Disorder appears only in Section III as a condition warranting more clinical research before possible inclusion as a formal disorder. The DSM-5 outlines nine criteria, requiring five or more for diagnosis: preoccupation with internet gaming, withdrawal symptoms when gaming is taken away, tolerance (needing to spend increasing amounts of time gaming), unsuccessful attempts to control gaming, loss of interest in previous hobbies, continued excessive use despite knowledge of negative consequences, deception of family members about gaming, use of gaming to escape or relieve negative moods, and jeopardised relationships or opportunities due to gaming.

Research in AI addiction has drawn heavily on these established models. A 2025 paper in Telematics and Informatics introduced the concept of Generative AI Addiction Disorder (GAID), arguing it represents “a novel form of digital dependency that diverges from existing models, emerging from an excessive reliance on AI as a creative extension of the self”. Unlike passive digital addictions involving unidirectional content consumption, GAID is characterised as an active, creative engagement process. AI addiction can be defined, according to research synthesis, as “compulsive and excessive engagement with AI, resulting in detrimental effects on daily functioning and well-being, characterised by compulsive use, excessive time investment, emotional attachment, displacement of real-world activities, and negative cognitive and psychological impacts”.

Professor Montag's work emphasises that scientists in the field of addictive behaviours have discussed which features or modalities of AI systems underlying video games or social media platforms might result in adverse consequences for users. AI-driven social media algorithms, research in Cureus demonstrates, are “designed solely to capture our attention for profit without prioritising ethical concerns, personalising content to maximise screen time, thereby deepening the activation of the brain's reward centres”. Frequent engagement with such platforms alters dopamine pathways, fostering dependency analogous to substance addiction, with changes in brain activity within the prefrontal cortex and amygdala suggesting increased emotional sensitivity.

The cognitive-behavioural model of pathological internet use has been used to explain Internet Addiction Disorder for more than 20 years. Newer models, such as the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, focus on the process of predisposing factors and current behaviours leading to compulsive use. These established frameworks provide crucial scaffolding for understanding AI-specific patterns, yet researchers increasingly recognise that conversational AI may demand unique conceptual models.

A 2024 study in the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems identified four “dark addiction patterns” in AI chatbots: non-deterministic responses, immediate and visual presentation of responses, notifications, and empathetic and agreeable responses. Specific design choices, the researchers argued, “may shape a user's neurological responses and thus increase their susceptibility to AI dependence, highlighting the need for ethical design practices and effective interventions”.

The Therapeutic Response

In the absence of AI-specific treatment protocols, clinicians have begun adapting established therapeutic approaches from internet and gaming addiction. The most prominent model is Cognitive-Behavioural Therapy for Internet Addiction (CBT-IA), developed by Kimberly Young, founder of the Center for Internet Addiction in 1995.

CBT-IA employs a comprehensive three-phase approach. Phase one focuses on behaviour modification to gradually decrease the amount of time spent online. Phase two uses cognitive therapy to address denial often present among internet addicts and to combat rationalisations that justify excessive use. Phase three implements harm reduction therapy to identify and treat coexisting issues involved in the development of compulsive internet use. Treatment typically requires three months or approximately twelve weekly sessions.

The outcomes data for CBT-IA proves encouraging. Research published in the Journal of Behavioral Addictions found that over 95 per cent of clients were able to manage symptoms at the end of twelve weeks, and 78 per cent sustained recovery six months following treatment. This track record has led clinicians to experiment with similar protocols for AI-use concerns, though formal validation studies remain scarce.

Several AI-powered CBT chatbots have emerged to support mental health treatment, including Woebot, Youper, and Wysa, which use different approaches to deliver cognitive-behavioural interventions. A systematic review published in PMC in 2024 examined these AI-based conversational agents, though it focused primarily on their use as therapeutic tools rather than their potential to create dependency. The irony has not escaped clinical observers: we are building AI therapists whilst simultaneously grappling with AI-facilitated addiction.

A meta-analysis published in npj Digital Medicine in December 2023 revealed that AI-based conversational agents significantly reduce symptoms of depression (Hedges g 0.64, 95 per cent CI 0.17 to 1.12) and distress (Hedges g 0.7, 95 per cent CI 0.18 to 1.22). The systematic review analysed 35 eligible studies, with 15 randomised controlled trials included for meta-analysis. For young people specifically, research published in JMIR in 2025 found AI-driven conversational agents had a moderate-to-large effect (Hedges g equals 0.61, 95 per cent CI 0.35 to 0.86) on depressive symptoms compared to control conditions. However, effect sizes for generalised anxiety symptoms, stress, positive affect, negative affect, and mental wellbeing were all non-significant.

Critically, a large meta-analysis of 32 studies involving 6,089 participants demonstrated conversational AI to have statistically significant short-term effects in improving depressive symptoms, anxiety, and several other conditions but no statistically significant long-term effects. This temporal limitation raises complex treatment questions: if AI can provide short-term symptom relief but also risks fostering dependency, how do clinicians balance therapeutic benefit against potential harm?

Digital wellness approaches have gained traction as preventative strategies. Practical interventions include setting chatbot usage limits to prevent excessive reliance, encouraging face-to-face social interactions to rebuild real-world connections, and implementing AI-free periods to break compulsive engagement patterns. Some treatment centres now specialise in AI addiction specifically. CTRLCare Behavioral Health, for instance, identifies AI addiction as falling under Internet Addiction Disorder and offers treatment using evidence-based therapies like CBT and mindfulness techniques to help develop healthier digital habits.

Research on the AI companion app Replika illustrates both the therapeutic potential and dependency risks. One study examined 1,854 publicly available user reviews of Replika, with an additional sample of 66 users providing detailed open-ended responses. Many users praised the app for offering support for existing mental health conditions and helping them feel less alone. A common experience was a reported decrease in anxiety and a feeling of social support. However, evidence of harms was also found, facilitated via emotional dependence on Replika that resembles patterns seen in human-human relationships.

A survey collected data from 1,006 student users of Replika who were 18 or older and had used the app for over one month, with approximately 75 per cent US-based. The findings suggested mixed outcomes, with one researcher noting that for 24 hours a day, users can reach out and have their feelings validated, “which has an incredible risk of dependency”. Mental health professionals highlighted the increased potential for manipulation of users, conceivably motivated by the commodification of mental health for financial gain.

Engineering for Wellbeing or Engagement?

The lawsuits against Character.AI have placed product design choices under intense scrutiny. The complaint in the Setzer case alleges that Character.AI's design “intentionally hooked Sewell Setzer into compulsive use, exploiting addictive features to drive engagement and push him into emotionally intense and often sexually inappropriate conversations”. The lawsuits argue that chatbots in the platform are “designed to be addictive, invoke suicidal thoughts in teens, and facilitate explicit sexual conversations with minors”, whilst lacking adequate safeguards in discussions regarding mental health.

Research published in MIT Technology Review and academic conferences has begun documenting specific design interventions to reduce potential harm. Users of chatbots that can initiate conversations must be given the option to disable notifications in a way that is easy to understand and implement. Additionally, AI companions should integrate AI literacy into their user interface with the goal of ensuring that users understand these chatbots are not human and cannot replace the value of real-world interactions.

AI developers should implement built-in usage warnings for heavy users and create less emotionally immersive AI interactions to prevent romantic attachment, according to emerging best practices. Ethical AI design should prioritise user wellbeing by implementing features that encourage mindful interaction rather than maximising engagement metrics. Once we understand the psychological dimensions of AI companionship, researchers argue, we can design effective policy interventions.

The tension between engagement and wellbeing reflects a fundamental business model conflict. Companies often design chatbots to maximise engagement rather than mental health, using reassurance, validation, or flirtation to keep users returning. This design philosophy mirrors the approach of social media platforms, where AI-driven recommendation engines use personalised content as a critical design feature aiming to prolong online time. Professor Montag's research emphasises that the data business model operating behind social media companies contributes to addictive-like behaviours through persuasive design aimed at prolonging users' online behaviour.

Character.AI has responded to lawsuits and regulatory pressure with some safety modifications. A company spokesperson stated they are “heartbroken by the tragic loss” and noted that the company “has implemented new safety measures over the past six months, including a pop-up, triggered by terms of self-harm or suicidal ideation, that directs users to the National Suicide Prevention Lifeline”. The announced changes come after the company faced questions over how AI companions affect teen and general mental health.

Digital wellbeing frameworks developed for smartphones offer instructive models. Android's Digital Wellbeing allows users to see which apps and websites they use most and set daily limits. Once hitting the limit, those apps and sites pause and notifications go quiet. The platform includes focus mode that lets users select apps to pause temporarily, and bedtime mode that helps users switch off by turning screens to grayscale and silencing notifications. Apple combines parental controls into Screen Time via Family Sharing, letting parents restrict content, set bedtime schedules, and limit app usage.

However, research published in PMC in 2024 cautions that even digital wellness apps may perpetuate problematic patterns. Streak-based incentives in apps like Headspace and Calm promote habitual use over genuine improvement, whilst AI chatbots simulate therapeutic conversations without the depth of professional intervention, reinforcing compulsive digital behaviours under the pretence of mental wellness. AI-driven nudges tailored to maximise engagement rather than therapeutic outcomes risk exacerbating psychological distress, particularly among vulnerable populations predisposed to compulsive digital behaviours.

The Platform Moderation Challenge

Platform moderation presents unique challenges for AI mental health concerns. Research found that AI companions exacerbated mental health conditions in vulnerable teens and created compulsive attachments and relationships. MIT studies identified an “isolation paradox” where AI interactions initially reduce loneliness but lead to progressive social withdrawal, with vulnerable populations showing heightened susceptibility to developing problematic AI dependencies.

The challenge extends beyond user-facing impacts. AI-driven moderation systems increase the pace and volume of flagged content requiring human review, leaving moderators with little time to emotionally process disturbing content, leading to long-term psychological distress. Regular exposure to harmful content can result in post-traumatic stress disorder, skewed worldviews, and conditions like generalised anxiety disorder and major depressive disorder among content moderators themselves.

A 2022 study published in BMC Public Health examined digital mental health moderation practices supporting users exhibiting risk behaviours. The research, conducted as a case study of the Kooth platform, aimed to identify key challenges and needs in developing responsible AI tools. The findings emphasised the complexity of balancing automated detection systems with human oversight, particularly when users express self-harm ideation or suicidal thoughts.

Regulatory scholars have suggested broadening categories of high-risk AI systems to include applications such as content moderation, advertising, and price discrimination. A 2025 article in The Regulatory Review argued for “regulating artificial intelligence in the shadow of mental health”, noting that current frameworks inadequately address the psychological impacts of AI systems on vulnerable populations.

Warning signs that AI is affecting mental health include emotional changes after online use, difficulty focusing offline, sleep disruption, social withdrawal, and compulsive checking behaviours. These indicators mirror those established for social media and gaming addiction, yet the conversational nature of AI interactions may intensify their manifestation. The Jed Foundation, focused on youth mental health, issued a position statement emphasising that “tech companies and policymakers must safeguard youth mental health in AI technologies”, calling for proactive measures rather than reactive responses to tragic outcomes.

Preserving Benefit Whilst Reducing Harm

Perhaps the most vexing challenge lies in preserving AI's legitimate utility whilst mitigating addiction risks. Unlike substances that offer no health benefits, conversational AI demonstrably helps some users. Research indicates that artificial agents could help increase access to mental health services, given that barriers such as perceived public stigma, finance, and lack of service often prevent individuals from seeking out and obtaining needed care.

A 2024 systematic review published in PMC examined chatbot-assisted interventions for substance use, finding that whilst most studies report reductions in use occasions, overall impact for substance use disorders remains inconclusive. The extent to which AI-powered CBT chatbots can provide meaningful therapeutic benefit, particularly for severe symptoms, remains understudied. Research published in Frontiers in Psychiatry in 2024 found that patients see potential benefits but express concerns about lack of empathy and preference for human involvement. Many researchers are studying whether using AI companions is good or bad for mental health, with an emerging line of thought that outcomes depend on the person using it and how they use it.

This contextual dependency complicates policy interventions. Blanket restrictions risk denying vulnerable populations access to mental health support that may be their only available option. Overly permissive approaches risk facilitating the kind of compulsive attachments that contributed to the tragedies of Sewell Setzer III and Juliana Peralta. The challenge lies in threading this needle: preserving access whilst implementing meaningful safeguards.

One proposed approach involves risk stratification. Younger users, those with pre-existing mental health conditions, and individuals showing early signs of problematic use would receive enhanced monitoring and intervention. Usage patterns could trigger automatic referrals to human mental health professionals when specific thresholds are exceeded. AI literacy programmes could help users understand the technology's limitations and risks before they develop problematic relationships with chatbots.

However, even risk-stratified approaches face implementation challenges. Who determines the thresholds? How do we balance privacy concerns with monitoring requirements? What enforcement mechanisms ensure companies prioritise user wellbeing over engagement metrics? These questions remain largely unanswered, debated in policy circles but not yet translated into effective regulatory frameworks.

The business model tension persists as the fundamental obstacle. So long as AI companies optimise for user engagement as a proxy for revenue, design choices will tilt towards features that increase usage rather than promote healthy boundaries. Character.AI's implementation of crisis resource pop-ups represents a step forward, yet it addresses acute risk rather than chronic problematic use patterns. More comprehensive approaches would require reconsidering the engagement-maximisation paradigm entirely, a shift that challenges prevailing Silicon Valley orthodoxy.

The Research Imperative

The field's trajectory over the next five years will largely depend on closing critical knowledge gaps. We lack longitudinal studies tracking AI usage patterns and mental health outcomes over time. We need validation studies comparing different diagnostic frameworks for AI-use disorders. We require clinical trials testing therapeutic protocols specifically adapted for AI-related concerns rather than extrapolated from internet or gaming addiction models.

Neuroimaging research could illuminate whether AI interactions produce distinct patterns of brain activation compared to other digital activities. Do parasocial bonds with AI chatbots engage similar neural circuits as human relationships, or do they represent a fundamentally different phenomenon? Understanding these mechanisms could inform both diagnostic frameworks and therapeutic approaches.

Demographic research remains inadequate. Current data disproportionately samples Western, educated populations. How do AI addiction patterns manifest across different cultural contexts? Are there age-related vulnerabilities beyond the adolescent focus that has dominated initial research? What role do pre-existing mental health conditions play in susceptibility to problematic AI use?

The field also needs better measurement tools. Self-report surveys dominate current research, yet they suffer from recall bias and social desirability effects. Passive sensing technologies that track actual usage patterns could provide more objective data, though they raise privacy concerns. Ecological momentary assessment approaches that capture experiences in real-time might offer a middle path.

Perhaps most critically, we need research addressing the treatment gap. Even if we develop validated diagnostic criteria for AI-use disorders, the mental health system already struggles to meet existing demand. Where will treatment capacity come from? Can digital therapeutics play a role, or does that risk perpetuating the very patterns we aim to disrupt? How do we train clinicians to recognise and treat AI-specific concerns when most received training before conversational AI existed?

A Clinical Path Forward

Despite these uncertainties, preliminary clinical pathways are emerging. The immediate priority involves integrating AI-use assessment into standard psychiatric evaluation. Clinicians should routinely ask about AI chatbot usage, just as they now inquire about social media and gaming habits. Questions should probe not just frequency and duration, but the nature of relationships formed, emotional investment, and impacts on offline functioning.

When problematic patterns emerge, stepped-care approaches offer a pragmatic framework. Mild concerns might warrant psychoeducation and self-monitoring. Moderate cases could benefit from brief interventions using motivational interviewing techniques adapted for digital behaviours. Severe presentations would require intensive treatment, likely drawing on CBT-IA protocols whilst remaining alert to AI-specific features.

Treatment should address comorbidities, as problematic AI use rarely occurs in isolation. Depression, anxiety, social phobia, and autism spectrum conditions appear over-represented in early clinical observations, though systematic prevalence studies remain pending. Addressing underlying mental health concerns may reduce reliance on AI relationships as a coping mechanism.

Family involvement proves crucial, particularly for adolescent cases. Parents and caregivers need education about warning signs and guidance on setting healthy boundaries without completely prohibiting technology that peers use routinely. Schools and universities should integrate AI literacy into digital citizenship curricula, helping young people develop critical perspectives on human-AI relationships before problematic patterns solidify.

Peer support networks may fill gaps that formal healthcare cannot address. Support groups for internet and gaming addiction have proliferated; similar communities focused on AI-use concerns could provide validation, shared strategies, and hope for recovery. Online forums paradoxically offer venues where individuals struggling with digital overuse can connect, though moderation becomes essential to prevent these spaces from enabling rather than addressing problematic behaviours.

The Regulatory Horizon

Regulatory responses are accelerating even as the evidence base remains incomplete. The bipartisan letter from 44 state attorneys general signals political momentum for intervention. The FTC inquiry suggests federal regulatory interest. Proposed legislation, including bills that would ban minors from conversing with AI companions, reflects public concern even if the details remain contentious.

Europe's AI Act, which entered into force in August 2024, classifies certain AI systems as high-risk based on their potential for harm. Whether conversational AI chatbots fall into high-risk categories depends on their specific applications and user populations. The regulatory framework emphasises transparency, human oversight, and accountability, principles that could inform approaches to AI mental health concerns.

However, regulation faces inherent challenges. Technology evolves faster than legislative processes. Overly prescriptive rules risk becoming obsolete or driving innovation to less regulated jurisdictions. Age verification for restricting minor access raises privacy concerns and technical feasibility questions. Balancing free speech considerations with mental health protection proves politically and legally complex, particularly in the United States.

Industry self-regulation offers an alternative or complementary approach. The partnership for AI has developed guidelines emphasising responsible AI development. Whether companies will voluntarily adopt practices that potentially reduce user engagement and revenue remains uncertain. The Character.AI lawsuits may provide powerful incentives, as litigation risk concentrates executive attention more effectively than aspirational guidelines.

Ultimately, effective governance likely requires a hybrid approach: baseline regulatory requirements establishing minimum safety standards, industry self-regulatory initiatives going beyond legal minimums, professional clinical guidelines informing treatment approaches, and ongoing research synthesising evidence to update all three streams. This layered framework could adapt to evolving understanding whilst providing immediate protection against the most egregious harms.

Living with Addictive Intelligence

The genie will not return to the bottle. Conversational AI has achieved mainstream adoption with remarkable speed, embedding itself into educational, professional, and personal contexts. The question is not whether we will interact with AI, but how we will do so in ways that enhance rather than diminish human flourishing.

The tragedies of Sewell Setzer III and Juliana Peralta demand that we take AI addiction risks seriously. Yet premature pathologisation risks medicalising normal adoption of transformative technology. The challenge lies in developing clinical frameworks that identify genuine dysfunction whilst allowing beneficial use.

We stand at an inflection point. The next five years will determine whether AI-use disorders become a recognised clinical entity with validated diagnostic criteria and evidence-based treatments, or whether initial concerns prove overblown as users and society adapt to conversational AI's presence. Current evidence suggests the truth lies somewhere between these poles: genuine risks exist for vulnerable populations, yet population-level impacts remain modest.

The path forward requires vigilance without hysteria, research without delay, and intervention without overreach. Clinicians must learn to recognise and treat AI-related concerns even as diagnostic frameworks evolve. Developers must prioritise user wellbeing even when it conflicts with engagement metrics. Policymakers must protect vulnerable populations without stifling beneficial innovation. Users must cultivate digital wisdom, understanding both the utility and the risks of AI relationships.

Most fundamentally, we must resist the false choice between uncritical AI adoption and wholesale rejection. The technology offers genuine benefits, from mental health support for underserved populations to productivity enhancements for knowledge workers. It also poses genuine risks, from parasocial dependency to displacement of human relationships. Our task is to maximise the former whilst minimising the latter, a balancing act that will require ongoing adjustment as both the technology and our understanding evolve.

The compulsive mind meeting addictive intelligence creates novel challenges for mental health. But human ingenuity has met such challenges before, developing frameworks to understand and address dysfunctions whilst preserving beneficial uses. We can do so again, but only if we act with the urgency these tragedies demand, the rigor that scientific inquiry requires, and the wisdom that complex sociotechnical systems necessitate.


Sources and References

  1. Social Media Victims Law Center (2024-2025). Character.AI Lawsuits. Retrieved from socialmediavictims.org

  2. American Bar Association (2025). AI Chatbot Lawsuits and Teen Mental Health. Health Law Section.

  3. NPR (2024). Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits.

  4. AboutLawsuits.com (2024). Character.AI Lawsuit Filed Over Teen Suicide After Alleged Sexual Exploitation by Chatbot.

  5. CNN Business (2025). More families sue Character.AI developer, alleging app played a role in teens' suicide and suicide attempt.

  6. AI Incident Database. Incident 826: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails.

  7. Pew Research Center (2025). ChatGPT use among Americans roughly doubled since 2023. Short Reads.

  8. Montag, C., et al. (2025). The role of artificial intelligence in general, and large language models specifically, for understanding addictive behaviors. Annals of the New York Academy of Sciences. DOI: 10.1111/nyas.15337

  9. Springer Link (2025). Can ChatGPT Be Addictive? A Call to Examine the Shift from Support to Dependence in AI Conversational Large Language Models. Human-Centric Intelligent Systems.

  10. ScienceDirect (2025). Generative artificial intelligence addiction syndrome: A new behavioral disorder? Telematics and Informatics.

  11. PubMed (2025). People are not becoming “AIholic”: Questioning the “ChatGPT addiction” construct. PMID: 40073725

  12. Psychiatric Times. Chatbot Addiction and Its Impact on Psychiatric Diagnosis.

  13. ResearchGate (2024). Conceptualizing AI Addiction: Self-Reported Cases of Addiction to an AI Chatbot.

  14. ACM Digital Library (2024). The Dark Addiction Patterns of Current AI Chatbot Interfaces. CHI Conference on Human Factors in Computing Systems Extended Abstracts. DOI: 10.1145/3706599.3720003

  15. World Health Organization (2019-2022). Addictive behaviours: Gaming disorder. ICD-11 Classification.

  16. WHO Standards and Classifications. Gaming disorder: Frequently Asked Questions.

  17. BMC Public Health (2022). Functional impairment, insight, and comparison between criteria for gaming disorder in ICD-11 and internet gaming disorder in DSM-5.

  18. Psychiatric Times. Gaming Addiction in ICD-11: Issues and Implications.

  19. American Psychiatric Association (2013). Internet Gaming Disorder. DSM-5 Section III.

  20. Young, K. (2011). CBT-IA: The First Treatment Model for Internet Addiction. Journal of Cognitive Psychotherapy, 25(4), 304-312.

  21. Young, K. (2014). Treatment outcomes using CBT-IA with Internet-addicted patients. Journal of Behavioral Addictions, 2(4), 209-215. DOI: 10.1556/JBA.2.2013.4.3

  22. Abd-Alrazaq, A., et al. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6, 231. Published December 2023.

  23. JMIR (2025). Effectiveness of AI-Driven Conversational Agents in Improving Mental Health Among Young People: Systematic Review and Meta-Analysis.

  24. Nature Scientific Reports. Loneliness and suicide mitigation for students using GPT3-enabled chatbots. npj Mental Health Research.

  25. PMC (2024). User perceptions and experiences of social support from companion chatbots in everyday contexts: Thematic analysis. PMC7084290.

  26. Springer Link (2024). Mental Health and Virtual Companions: The Example of Replika.

  27. MIT Technology Review (2024). The allure of AI companions is hard to resist. Here's how innovation in regulation can help protect people.

  28. Frontiers in Psychiatry (2024). Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop.

  29. JMIR Mental Health (2025). Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review.

  30. Android Digital Wellbeing Documentation. Manage how you spend time on your Android phone. Google Support.

  31. Apple iOS. Screen Time and Family Sharing Guide. Apple Documentation.

  32. PMC (2024). Digital wellness or digital dependency? A critical examination of mental health apps and their implications. PMC12003299.

  33. Cureus (2025). Social Media Algorithms and Teen Addiction: Neurophysiological Impact and Ethical Considerations. PMC11804976.

  34. The Jed Foundation (2024). Tech Companies and Policymakers Must Safeguard Youth Mental Health in AI Technologies. Position Statement.

  35. The Regulatory Review (2025). Regulating Artificial Intelligence in the Shadow of Mental Health.

  36. Federal Trade Commission (2025). FTC Initiates Inquiry into Generative AI Developer Safeguards for Minors.

  37. State Attorneys General Coalition Letter (2025). Letter to Google, Meta, and OpenAI Regarding Child Safety in AI Chatbot Technologies. Bipartisan Coalition of 44 States.

  38. Business & Human Rights Resource Centre (2025). Character.AI restricts teen access after lawsuits and mental health concerns.


Tim Green

Tim Green UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk

Discuss...