More Connected, More Alone: How AI Is Eroding Human Social Skills

Hiromu Yakura noticed something strange about his own voice. A postdoctoral researcher at the Max Planck Institute for Human Development in Berlin, Yakura studies the intersection of artificial intelligence and human behaviour. But the shift he detected was not in his data; it was in his speech. “I realised I was using 'delve' more,” he told reporters, describing the unsettling moment he caught himself unconsciously parroting the verbal tics of a large language model. Yakura was not alone. His subsequent research, analysing over 360,000 YouTube videos and 771,000 podcast episodes, revealed that academic YouTubers had begun using words favoured by AI chatbots up to 51 per cent more frequently after ChatGPT's November 2022 launch. Words like “delve,” “realm,” “underscore,” and “meticulous” were migrating from machine-generated text into the mouths of actual humans. A cultural feedback loop had been set in motion, and hardly anyone had noticed.

This quiet linguistic contamination is just one symptom of a much broader transformation. Across industries, conversational AI has become the front line of customer interaction. Chatbots handle banking queries, voice assistants schedule medical appointments, and algorithmic agents negotiate insurance claims. The global AI customer service market, valued at $12.06 billion in 2024, is projected to reach $47.82 billion by 2030, according to industry analysts. Gartner has predicted that conversational AI deployments within contact centres will reduce agent labour costs by $80 billion in 2026, with approximately 17 million contact centre agents worldwide facing a fundamental reshaping of their roles. Bank of America's virtual assistant Erica has surpassed 3 billion client interactions since its 2018 launch, serving nearly 50 million users with an average response time of 44 seconds. The two million daily consumer interactions with Erica alone save the bank the equivalent of 11,000 employees' daily work. The efficiency gains are staggering, the convenience undeniable.

But as these systems grow more sophisticated, more emotionally responsive, and more deeply woven into the fabric of daily communication, a disquieting question presents itself. What happens to us, the humans on the other end of the line? If we spend our days talking to machines that never lose their patience, never misunderstand our tone, and never push back with the messy friction of genuine feeling, do we slowly lose the capacity to navigate the unpredictable terrain of real human conversation? The evidence is beginning to suggest that we might.

The Frictionless Trap

The appeal of conversational AI is rooted in something profoundly human: a desire to be understood quickly and without complication. When you call your bank and a voice assistant resolves your problem in under a minute, there is an undeniable satisfaction in the transaction. No hold music, no awkward small talk, no navigating the emotional state of a tired customer service representative at the end of a long shift. The interaction is clean, efficient, and entirely on your terms.

This is by design. The conversational AI industry has been engineered to minimise friction. McKinsey reports that 78 per cent of companies have now integrated conversational AI into at least one key operational area. A 2025 Nextiva analysis found that 57 per cent of businesses are either using self-service chatbots or plan to do so imminently. By 2027, Gartner projects, 25 per cent of organisations will use chatbots as their primary customer service channel. The technology is no longer experimental; it is infrastructural. And the economic incentives are overwhelming: companies report average returns of $3.50 for every dollar invested in AI customer service, with leading organisations achieving returns as high as eight times their investment.

Yet friction, as any psychologist will tell you, is precisely what builds social muscle. The small moments of discomfort in human interaction, the pauses, the misunderstandings, the need to read another person's expression and adjust your approach, these are the crucibles in which empathy is forged. Sherry Turkle, the Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology at MIT, has spent decades studying how technology shapes human relationships. Her warning is direct: “What do we forget when we talk to machines? We forget what is special about being human.”

Turkle's concern is not that AI is inherently destructive, but that its seductive convenience trains us to avoid the very interactions that make us more fully human. In her research, she describes social media as a “gateway drug” to conversations with machines, arguing that the emotional scaffolding we once built through difficult, imperfect human dialogue is now being outsourced to algorithms that mirror our sentiments without ever genuinely understanding them. “AI offers the illusion of intimacy without the demands,” she has written. She challenges us to consider whether machines truly grasp empathy, or whether we are merely being “remembered” without being genuinely “heard.” The result is a kind of emotional atrophy; we become fluent in transactional exchange but increasingly clumsy at the real thing. The pushback and resistance of genuine human relationships, Turkle argues, are not obstacles to connection. They are the mechanism through which understanding and growth are forged.

Rewiring the Social Brain

The neurological implications of this shift are only beginning to come into focus. In a landmark 2025 paper published in the journal Neuron, Professor Benjamin Becker of the University of Hong Kong's Department of Psychology laid out a framework for understanding how interactions with AI might physically alter the social circuitry of the human brain. Becker's analysis, drawing on a meta-analysis of 1,302 functional MRI studies encompassing 47,083 activations, identified the “social brain” networks that enable rapid understanding and affiliation in interpersonal interactions. These are evolutionarily shaped circuits, refined over millennia of face-to-face human contact. They allow us to read facial expressions, interpret vocal tone, predict others' intentions, and calibrate our own behaviour in real time.

The problem, Becker argues, is that humans are hardwired to anthropomorphise. We instinctively attribute personality, feelings, and intentions to AI agents, a tendency psychologists call the “ELIZA effect,” named after a rudimentary 1960s chatbot that users nonetheless treated as a genuine therapist. The classic Heider and Simmel experiment demonstrated this tendency decades ago: humans intuitively interpret behaviour and motives even in simple moving geometric shapes. With AI agents that can modulate their voice, recall personal details, and respond with apparent emotional sensitivity, the anthropomorphic pull becomes far more powerful. As conversational AI becomes more advanced and personalised, Becker warns, these interactions will “increasingly engage neural mechanisms more deeply and may even change how brains function in social contexts.”

“Understanding how our social brain shapes interactions with AI and how AI interactions shape our social brains will be key to making sure these technologies support us, not harm us,” Becker stated. The implications are especially significant for young people, whose neural pathways for social cognition are still developing. If children and adolescents are forming their primary conversational habits with AI rather than with peers, parents, and teachers, the social brain may develop along fundamentally different lines than those of previous generations.

This is not merely theoretical. Research from Harvard's Graduate School of Education, led by Dr. Ying Xu, has examined how children interact differently with AI compared to humans. The findings are nuanced but concerning. While children can learn effectively from AI designed with pedagogical principles (improving vocabulary and comprehension through interactive dialogue), they consistently engage less deeply with AI than with human conversational partners. When speaking with a person, children are more likely to steer the conversation, ask follow-up questions, and share their own thoughts. With AI, they tend to become passive recipients, answering questions with less effort, particularly in complex exchanges that require genuine back-and-forth discussion.

The implication is clear: AI may teach children facts, but it struggles to teach them how to be present in a conversation. And that presence, that willingness to lean into the discomfort of not knowing what someone else will say next, is the foundation of social competence.

The Loneliness Paradox

Perhaps the most counterintuitive finding in recent AI research is this: the more people talk to chatbots, the lonelier they tend to feel. In early 2025, OpenAI and the MIT Media Lab published the results of a landmark study, a four-week randomised controlled experiment involving 981 participants who exchanged over 300,000 messages with ChatGPT. The researchers tested three interaction modes (text, neutral voice, and engaging voice) across three conversation types (open-ended, non-personal, and personal).

The headline finding was stark. “Overall, higher daily usage, across all modalities and conversation types, correlated with higher loneliness, dependence, and problematic use, and lower socialisation,” the researchers reported. Voice-based chatbots initially appeared to mitigate loneliness compared to text-based interactions, but these advantages disappeared at high usage levels, especially with a neutral-voice chatbot. Participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely and to rely on the chatbot further, creating a self-reinforcing cycle of dependency.

The study also revealed gender-specific effects. After four weeks of chatbot use, female participants were slightly less likely to socialise with other people than their male counterparts. Participants who interacted with ChatGPT's voice mode using a gender different from their own reported significantly higher levels of loneliness and greater emotional dependency on the chatbot. The researchers noted that people with a stronger tendency for attachment in relationships and those who viewed the AI as a friend were more likely to experience negative effects. Personal conversations, which included more emotional expression from both user and model, were associated with higher levels of loneliness but, intriguingly, lower emotional dependence at moderate usage levels.

Parallel to the controlled study, OpenAI and MIT analysed real-world data from close to 40 million ChatGPT interactions and surveyed 4,076 of those users. They found that emotional engagement with ChatGPT remains relatively rare in overall usage, but that the subset of users who do form emotional connections tend to be the platform's heaviest users, and the loneliest.

The Brookings Institution, in a July 2025 analysis by Rebecca Winthrop and Isabelle Hau, framed this as a defining paradox of our era: “We are living through a paradox: humans are wired to connect, yet we've never been more isolated. At the same time, AI is growing more responsive, conversational, and emotionally attuned, and we are increasingly turning to machines for what we're not getting from each other: companionship.” They noted that AI companions like Replika.ai, Character.ai, and China's Xiaoice now count hundreds of millions of emotionally invested users, with some estimates suggesting the total may already exceed one billion.

The Companion Economy and Its Discontents

The scale of emotional investment in AI companions has become impossible to ignore. Replika, one of the most prominent AI companion platforms, claims approximately 25 million users, with over 85 per cent reporting that they have developed emotional connections with their digital companion. The average user exchanges roughly 70 messages per day with their Replika. Character.AI users average 93 minutes per day on the platform, 18 minutes longer than the average TikTok session, while heavy Replika users report engagement of 2.7 hours daily, with extreme cases exceeding 12 hours.

A nationally representative survey of 1,060 teenagers conducted in spring 2025 found that 72 per cent of those aged 13 to 17 are already using AI companions, with roughly half using them at least a few times per month. About a third of teens reported using the technology for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice. Perhaps most tellingly, around a third of teenagers using AI companions said they find conversations with these systems as satisfying, or more satisfying, than conversations with real-life friends.

The data on well-being is less comforting. Among 387 research participants in one study, “the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family.” Ninety per cent of the 1,006 American students using Replika who were surveyed for a separate study reported experiencing loneliness, significantly higher than the comparable national average of 53 per cent. Common Sense Media has recommended that no one under 18 should use AI companions like Character.AI or Replika until more safeguards are in place to “eliminate relational manipulation and emotional dependency risks.”

The regulatory landscape is beginning to respond. In September 2025, the California legislature passed a bill requiring AI platforms to clearly notify users under 18 when they are interacting with a bot. That same week, the Federal Trade Commission opened a broad inquiry into seven major firms, including OpenAI, Meta, Snap, Google, and Character Technologies, examining the potential for emotional manipulation and dependency. These are early steps, but they signal a growing recognition that the companion economy is not merely a consumer trend; it is a public health concern.

The Perception Problem

The social consequences of AI-mediated communication extend beyond individual loneliness into the texture of everyday human interaction. At Cornell University, research scientist Jess Hohenstein led a series of experiments investigating what happens when people suspect their conversational partner is using AI assistance. The results, published in Scientific Reports under the title “Artificial Intelligence in Communication Impacts Language and Social Relationships,” revealed a troubling dynamic.

When participants believed their partner was using AI-generated smart replies, they rated that partner as less cooperative, less affiliative, and more dominant, regardless of whether the partner was actually using AI. The mere suspicion of algorithmic assistance was enough to erode trust and social warmth. “I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are,” Hohenstein noted.

The study also found that actual use of smart replies increased communication efficiency and positive emotional language. But this improvement came at a cost: “While AI might be able to help you write, it's altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you're sacrificing some of your own personal voice,” Hohenstein observed.

Malte Jung, associate professor of information science at Cornell and a co-author on the study, drew a broader conclusion: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people's interactions, language and perceptions of each other.”

This finding raises uncomfortable questions about authenticity in an age of AI-assisted communication. If AI makes our messages more efficient and more positive but less recognisably our own, are we gaining convenience at the expense of genuine connection? And if the mere suspicion of AI involvement poisons the well of trust, what happens as AI becomes ubiquitous in workplace communication, dating apps, and even family group chats?

Speaking Like Machines

The Max Planck Institute research that caught Hiromu Yakura by surprise points to an even more fundamental concern: AI is not just changing how we communicate with machines; it is changing how we communicate with each other. The study identified twenty-one words that serve as clear markers of AI's linguistic influence. Terms favoured by large language models, “delve,” “realm,” “underscore,” “meticulous,” and others, were appearing with dramatically increased frequency in human speech, not just in written text but in spontaneous spoken communication. The same patterns appeared in the 58 per cent of videos that showed no signs of scripted speech, suggesting that the adoption of this language extended beyond prepared remarks into genuinely extemporaneous conversation.

Levin Brinkmann, a co-author of the study at the Max Planck Institute, described the mechanism at work: “The patterns that are stored in AI technology seem to be transmitting back to the human mind.” The researchers characterised this as a “cultural feedback loop.” Humans train AI on their language; AI processes and statistically remixes that language; humans then unconsciously adopt the AI's patterns. The loop narrows with each iteration, potentially reducing linguistic diversity on a global scale. If AI systems trained primarily on English-language content begin to influence communication patterns worldwide, we might see a homogenisation of human expression that transcends national and cultural boundaries.
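The core measurement behind a study like this can be sketched in a few lines: count how often a small set of marker words appears per thousand tokens in transcripts from before and after a cutoff date, and compare the rates. The sketch below uses the four marker terms quoted above; the two tiny "corpora" are invented purely for illustration and bear no relation to the study's actual data or methodology.

```python
from collections import Counter
import re

# Four of the AI-marker words identified in the Max Planck study.
MARKERS = {"delve", "realm", "underscore", "meticulous"}

def marker_rate(transcripts: list[str]) -> float:
    """Occurrences of marker words per 1,000 tokens across a set of transcripts."""
    tokens = [w for t in transcripts for w in re.findall(r"[a-z]+", t.lower())]
    counts = Counter(tokens)
    hits = sum(counts[m] for m in MARKERS)
    return 1000 * hits / max(len(tokens), 1)

# Hypothetical before/after corpora, for illustration only.
before = ["we will explore the topic and examine the details carefully"]
after = ["we will delve into the realm of details and underscore each meticulous point"]

print(f"before cutoff: {marker_rate(before):.1f} per 1,000 tokens")
print(f"after cutoff:  {marker_rate(after):.1f} per 1,000 tokens")
```

The real study, of course, controls for scripting, channel, and topic across hundreds of thousands of videos; the point here is only that the underlying signal is a simple relative-frequency shift.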

The concern extends beyond vocabulary. An analysis published by IE Insights in April 2025 argued that AI-driven platforms are “subtly teaching people to speak and think like machines: efficient, clear, emotionally detached.” The article warned that interactions are “increasingly optimised for clarity and brevity, but stripped of emotional depth, cultural nuance, and spontaneity that define authentic human connection.” It described a world in which “we are training machines to sound more human while simultaneously training ourselves to sound more like machines.” The impact, the analysis argued, is particularly dangerous in high-stakes environments where human nuance and emotional intelligence matter most: diplomacy, crisis negotiation, healthcare, and community care.

Emily Bender, a prominent linguist at the University of Washington, has observed that even people who do not personally use AI chatbots are not immune to this influence. The sheer volume of synthetic text now circulating online, in articles, emails, social media posts, and automated responses, makes it nearly impossible to avoid absorbing AI-inflected language patterns. The homogenisation is insidious precisely because it is invisible.

What the Public Already Senses

The American public appears to intuit, even if it cannot fully articulate, the social risks posed by AI. A Pew Research Center survey of 5,023 U.S. adults conducted in June 2025 found that 50 per cent of Americans say they are more concerned than excited about the increased use of AI in daily life, up from 37 per cent in 2021. Only 10 per cent reported being more excited than concerned, while 38 per cent felt equally excited and concerned. More than half (57 per cent) rated the societal risks of AI as high, compared with just 25 per cent who said the benefits are high.

The data on social relationships is particularly striking. Half of respondents (50 per cent) said they believe AI will make people's ability to form meaningful relationships worse. The public fears the loss of human connection more than AI experts do: 57 per cent of U.S. adults expressed extreme or high concern about AI leading to less connection between people, versus only 37 per cent of surveyed experts. This 20-point gap between public anxiety and expert reassurance is itself revealing. It suggests either that everyday citizens are perceiving something that specialists are overlooking, or that proximity to AI development generates a form of optimism bias.

The generational divide is especially revealing. Among adults under 30, the cohort most likely to use AI regularly, 58 per cent believe AI will worsen people's ability to form meaningful relationships, and 61 per cent believe it will make people worse at thinking creatively. This is markedly higher than the roughly 40 per cent of those aged 65 and older who share those views. The generation most fluent in AI is also the generation most anxious about what it might cost them.

Two-thirds of respondents (66 per cent) said AI should not judge whether two people could fall in love, and 73 per cent said AI should play no role in advising people about their faith. These are not merely policy preferences; they are boundary markers, lines drawn around the domains of human experience that people consider too sacred, too intimate, or too complex for algorithmic mediation.

The Agents Left Behind

The workplace effects of conversational AI adoption are already visible in the customer service industry itself. As chatbots handle an ever-larger share of routine interactions, the calls that do reach human agents are increasingly complex, emotionally charged, and difficult to resolve. This creates a cascading paradox: the agents who remain employed need greater social skills than ever, even as the broader population is getting less practice at the kind of difficult conversations these agents must navigate daily.

Recent industry data illustrates the toll. According to one analysis, 87 per cent of contact centre agents report high stress levels, and over 50 per cent face daily burnout, sleep issues, and emotional exhaustion. The automation of simple queries means agents now spend a disproportionate share of their working hours handling angry customers, technical problems that defy standard solutions, and emotionally charged conversations demanding empathy and judgement. More than 68 per cent of agents receive calls at least weekly that their training did not prepare them to handle.

A 2025 CX-focused study found that 79 per cent of Americans strongly prefer interacting with a human over an AI agent, and a Twilio report from the same year revealed that 78 per cent of consumers consider it important to be able to switch from an AI agent to a human one. Meanwhile, a Kinsta report found that 50 per cent of consumers would cancel a service if it were solely AI-driven. The message from customers is clear: they want efficiency, but not at the price of human presence.

The tension between economic incentive and human need creates a troubling dynamic. The global chatbot market, valued at roughly $15.6 billion in 2024, is expected to nearly triple to $46.6 billion by 2029. Every interaction that moves from human to machine represents a small reduction in the total volume of genuine interpersonal exchange in society. Multiply this across billions of interactions per year, and the cumulative effect on collective social skills becomes a legitimate concern.
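As a sanity check on projections like these, the growth rate implied by a start value, an end value, and a time span follows from simple compound-growth arithmetic. A minimal sketch, using the chatbot-market figures quoted above:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, end value, and span."""
    return (end / start) ** (1 / years) - 1

# Chatbot market figures cited above: $15.6bn (2024) to $46.6bn (2029).
rate = cagr(15.6, 46.6, 5)
print(f"Implied CAGR: {rate:.1%}")
```

On those numbers, the implied compound annual growth rate works out to roughly 24.5 per cent a year, which is what nearly tripling in five years requires.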

Raising Children in the Age of the Algorithm

The stakes are highest for the youngest members of society. UNICEF's December 2025 guidance on AI and children, now in its third edition, acknowledged that large language models are becoming “deeply embedded in daily life as conversational agents, evolving into companions for emotional support and social interaction.” The guidance flagged this trend as “particularly pronounced among children and adolescents, a demographic prone to forming parasocial relationships with AI chatbots.” It warned that youth are “uniquely vulnerable to manipulation due to neurodevelopmental changes.”

Research on joint media engagement, studying what happens when parents are present during children's AI interactions, offers a partial counterweight. When caregivers scaffold AI interactions, helping children process what they are hearing, encouraging them to question and respond actively, the developmental risks appear to diminish. But this requires time, attention, and digital literacy that not all families possess in equal measure.

The Harvard research from Dr. Ying Xu highlights a critical distinction: children who engage in interactive dialogue with AI can comprehend stories better and learn more vocabulary compared to passive listeners, and in some cases, learning gains from AI were even comparable to those from human interactions. But learning facts and developing social-emotional intelligence are fundamentally different processes. AI can drill vocabulary; it cannot model the subtle art of reading a room, sensing another person's discomfort, or knowing when to stay silent. The risk is not that children will stop learning. The risk is that they will learn everything except how to be with other people.

Recalibrating, Not Retreating

The picture that emerges from the research is neither straightforwardly dystopian nor naively optimistic. It is, instead, deeply complicated. Conversational AI offers genuine benefits: accessibility for people with disabilities, support for those experiencing isolation, efficiency in service delivery, and learning tools that can supplement (though not replace) human instruction. Stanford researchers found that while young adults using the AI chatbot Replika reported high levels of loneliness, many also felt emotionally supported by it, with 3 per cent crediting the chatbot for temporarily halting suicidal thoughts. The question is not whether to use these technologies, but how to use them without surrendering the skills that make us most distinctively human.

A 2025 study published in the Journal of Systems Science and Systems Engineering offers an instructive finding. Across two scenario studies and one laboratory experiment, researchers found that consumers exhibited higher prosocial intentions after interacting with socially oriented AI chatbots (those designed to build rapport and engage emotionally) compared to task-oriented ones (those focused purely on efficiency). The study revealed that social presence and empathy mediated this effect, suggesting that the design of AI systems meaningfully shapes their social consequences. This is not a trivial insight. It means that the choices made by engineers, product managers, and policymakers about how AI communicates will have ripple effects across the social fabric.

Professor Becker's neuroscience framework points in the same direction. The social brain is not fixed; it is plastic, shaped by the interactions it encounters. If those interactions are predominantly with machines that reward brevity and compliance, the brain will adapt accordingly. But if AI systems are designed to encourage, rather than replace, genuine human engagement, the technology could serve as a bridge rather than a barrier.

The Brookings Institution's Rebecca Winthrop and Isabelle Hau offered perhaps the most pointed formulation: the age of AI must not become “the age of emotional outsourcing.” The restoration of real human connection requires not a rejection of technology, but a deliberate, society-wide commitment to preserving the spaces, skills, and habits that sustain authentic relationships.

The Conversation We Need to Have

Sherry Turkle has described her decades of research as “not anti-technology, but pro-conversation.” That framing captures what is most urgently needed now. The rapid adoption of conversational AI in customer service, healthcare, education, and personal companionship is not inherently destructive. But it is proceeding at a pace that far outstrips our collective understanding of its social consequences.

The evidence assembled here, from neuroscience laboratories in Hong Kong to linguistics studies in Berlin, from controlled experiments at MIT to population surveys by Pew Research, converges on a single uncomfortable truth: the more seamlessly machines learn to talk like us, the greater the risk that we forget how to talk to each other. Not efficiently, not optimally, not in the polished cadence of a well-trained language model, but in the halting, imperfect, gloriously messy way that humans have always communicated. With pauses. With misunderstandings. With the kind of friction that, it turns out, is not a bug in the system of human connection. It is the entire point.

The voice recognition systems now achieving 95 per cent accuracy under ideal conditions and processing billions of interactions daily are marvels of engineering. The global voice and speech recognition market, valued at $14.8 billion in 2024, is projected to reach $61.27 billion by 2033. But accuracy in speech recognition is not the same as accuracy in human understanding. As we optimise our AI systems to hear every word, we might ask whether we are simultaneously losing our capacity to listen, truly listen, to one another.

The conversation about conversational AI has barely begun. It needs to move beyond the boardroom metrics of cost savings and efficiency gains, beyond the engineering challenges of word error rates and natural language processing, and into the deeper territory of what kind of society we are building when the first voice many of us hear each morning, and the last one we hear at night, belongs not to another human being but to a machine that has learned, with remarkable precision, to sound like one.


References and Sources

  1. Yakura, H., Brinkmann, L., et al. “Empirical evidence of Large Language Model's influence on human spoken communication.” Max Planck Institute for Human Development. arXiv:2409.01754. 2024. https://arxiv.org/html/2409.01754v1

  2. Gartner, Inc. “Gartner Predicts Conversational AI Will Reduce Contact Center Agent Labor Costs by $80 Billion in 2026.” Press release, 31 August 2022. https://www.gartner.com/en/newsroom/press-releases/2022-08-31-gartner-predicts-conversational-ai-will-reduce-contac

  3. Bank of America. “A Decade of AI Innovation: BofA's Virtual Assistant Erica Surpasses 3 Billion Client Interactions.” Press release, August 2025. https://newsroom.bankofamerica.com/content/newsroom/press-releases/2025/08/a-decade-of-ai-innovation--bofa-s-virtual-assistant-erica-surpas.html

  4. Turkle, Sherry. “Reclaiming Conversation in the Age of AI.” After Babel. 2024. https://www.afterbabel.com/p/reclaiming-conversation-age-of-ai

  5. Turkle, Sherry. NPR interview on the psychological impacts of bot relationships. 2 August 2024. https://www.npr.org/2024/08/02/g-s1-14793/mit-sociologist-sherry-turkle-on-the-psychological-impacts-of-bot-relationships

  6. Becker, Benjamin. “Will our social brain inherently shape, and be shaped by, interactions with AI?” Neuron 113: 2037-2041. 2025. DOI: 10.1016/j.neuron.2025.04.034. https://www.cell.com/neuron/abstract/S0896-6273(25)00346-0

  7. Xu, Ying. “AI's Impact on Children's Social and Cognitive Development.” Harvard Graduate School of Education and Children and Screens. 2024. https://www.gse.harvard.edu/ideas/edcast/24/10/impact-ai-childrens-development

  8. OpenAI and MIT Media Lab. “How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study.” March 2025. https://arxiv.org/html/2503.17473v2

  9. OpenAI. “Early methods for studying affective use and emotional well-being on ChatGPT.” March 2025. https://openai.com/index/affective-use-study/

  10. Hohenstein, Jess; Jung, Malte; and Kizilcec, Rene. “Artificial Intelligence in Communication Impacts Language and Social Relationships.” Scientific Reports. April 2023. https://news.cornell.edu/stories/2023/04/study-uncovers-social-cost-using-ai-conversations

  11. Pew Research Center. “How Americans View AI and Its Impact on Human Abilities, Society.” Survey of 5,023 U.S. adults, June 2025. Published 17 September 2025. https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/

  12. Winthrop, Rebecca and Hau, Isabelle. “What happens when AI chatbots replace real human connection.” Brookings Institution. July 2025. https://www.brookings.edu/articles/what-happens-when-ai-chatbots-replace-real-human-connection/

  13. IE Insights. “The Social Price of AI Communication.” IE University. April 2025. https://www.ie.edu/insights/articles/the-social-price-of-ai-communication/

  14. Nextiva. “50+ Conversational AI Statistics for 2026.” 2026. https://www.nextiva.com/blog/conversational-ai-statistics.html

  15. UNICEF. “Guidance on AI and Children 3.0.” December 2025. https://www.unicef.org/innocenti/media/11991/file/UNICEF-Innocenti-Guidance-on-AI-and-Children-3-2025.pdf

  16. Twilio. “Customer Engagement Report.” 2025. Referenced in SurveyMonkey, “Customer Service Statistics 2026.” https://www.surveymonkey.com/curiosity/customer-service-statistics/

  17. Fortune. “Linguists say ChatGPT is now influencing how humans write and speak.” 30 June 2025. https://fortune.com/2025/06/30/linguists-chatgpt-influencing-how-humans-write-speak/

  18. Journal of Systems Science and Systems Engineering. “Beyond Consumption-Relevant Outcomes: The Role of AI Customer Service Chatbots' Communication Styles in Promoting Societal Welfare.” 2025. https://journal.hep.com.cn/jossase/EN/10.1007/s11518-025-5674-8

  19. Straits Research. “Voice and Speech Recognition Market Size, Share and Forecast to 2033.” 2024. https://straitsresearch.com/report/voice-and-speech-recognition-market

  20. CX Today. “The Algorithm Never Blinks: Why Contact Center AI is Creating a New Kind of Agent Burnout.” 2025. https://www.cxtoday.com/contact-center/the-algorithm-never-blinks-why-contact-center-ai-is-creating-a-new-kind-of-agent-burnout/

  21. Common Sense Media. Referenced in Christian Post, “Advocate warns against teen use of AI companions as study shows heavy use by demographic.” 2025. https://www.christianpost.com/news/72-percent-of-teens-are-using-ai-companions-as-advocates-raise-concern.html

  22. Nikola Roza. “Replika AI: Statistics, Facts and Trends Guide for 2025.” https://nikolaroza.com/replika-ai-statistics-facts-trends/

  23. Ada Lovelace Institute. “Friends for sale: the rise and risks of AI companions.” 2025. https://www.adalovelaceinstitute.org/blog/ai-companions/


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
