Send My Clone: AI Avatars and the Death of Meeting Trust

In May 2025, something quietly extraordinary happened during Klarna's quarterly earnings call. Sebastian Siemiatkowski, the fintech company's co-founder and chief executive, appeared on screen to walk investors through the numbers. He looked like Siemiatkowski. He sounded like Siemiatkowski. But within seconds, the figure on screen confessed: it was not Siemiatkowski at all. It was an AI-generated avatar, trained on the CEO's likeness and voice, delivering the company's financial highlights while the real Siemiatkowski was elsewhere. The avatar did not blink quite as often as a human would, and the voice synchronisation was good but not flawless. Still, the message was clear: the era of sending your digital double to do your talking has arrived.
A day later, Zoom's own chief executive, Eric Yuan, did much the same thing, deploying an AI avatar of himself during an earnings presentation. The timing was hardly coincidental. Yuan had been evangelising the concept of “digital twins” since mid-2024, telling Fortune that people would eventually send their AI-powered replicas to meetings in their place so they could “go to the beach” instead. By TechCrunch Disrupt 2025, he was making bolder predictions: AI would enable three-to-four-day working weeks by 2030, partly because digital replicas could handle routine meetings while the flesh-and-blood human focused on higher-value work. In March 2026, Zoom formally rolled out photorealistic AI avatars as a product feature, promising lifelike figures that mirror a user's expressions, lip movements, and eye movements so that people can “be present” even when they are not camera-ready, or not present at all.
This is not science fiction any longer. It is a shipping product. And it forces a question that the technology industry, corporate boardrooms, and philosophers of mind alike are only beginning to grapple with seriously: when an AI avatar attends a meeting on your behalf, are the other participants being deceived? And does it matter?
The Spectrum of Standing In
To understand why this question is more complicated than it first appears, it helps to recognise that meetings have always involved varying degrees of presence, attention, and substitution.
Consider the humble out-of-office auto-reply, a digital stand-in that has existed for decades. No one considers it deceptive when a colleague's email bot informs you they are unavailable. Move up the spectrum and you find shared calendars where assistants accept invitations on an executive's behalf, or junior colleagues who “represent” a department without the senior leader's direct involvement. The video call itself, which became the default mode of professional interaction during the pandemic years, already introduced a layer of mediation between participants. Filters smooth skin. Virtual backgrounds conceal messy kitchens. Gallery views flatten hierarchies into a grid of equally sized rectangles. None of this is typically described as deception, yet each element subtly manipulates the impression one participant forms of another.
AI avatars occupy a new and considerably more potent position on this spectrum. When Zoom's Steve Rafferty, the company's head of APAC and EMEA, used his AI avatar to introduce a quarterly meeting in fluent French, he was not simply delegating a task; he was projecting a version of himself that could do something he could not. Rafferty's team spans from the Arctic Circle to Antarctica, covering roughly sixty different languages, and the avatar allowed him to deliver a personal, multilingual message at scale. The tool cannot yet interact with other participants or answer questions in real time, but the direction of travel is unmistakable.
The crucial distinction is between transparent substitution and covert impersonation. If everyone in the meeting knows they are watching an AI avatar, the dynamic is fundamentally different from a scenario where participants believe they are speaking to a living, breathing human being who happens to be on camera. The first is a communication tool. The second is, by most reasonable definitions, a form of deception. But between these two poles lies an enormous grey zone: the avatar that is technically disclosed but functionally indistinguishable from the real person; the avatar whose presence is noted in a meeting invitation that nobody reads; the avatar that begins as a disclosed introduction but seamlessly transitions into a conversation that feels, to other participants, like a human exchange. The spectrum of standing in, it turns out, is not a spectrum at all. It is a fog.
What Philosophers Make of Digital Doubles
The philosophical landscape here is richer than the technology industry tends to acknowledge. Luciano Floridi, the founding director of Yale University's Digital Ethics Center and a professor at the University of Bologna, has spent years developing an ethical framework for artificial intelligence built around five principles: beneficence, nonmaleficence, autonomy, justice, and explicability. Floridi's work on deepfakes is particularly relevant. He argues that AI-generated synthetic media has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. The threat is not merely that a specific piece of content might mislead; it is that the very existence of convincing synthetic media corrodes the epistemic foundations on which trust depends.
Apply this framework to the meeting avatar scenario and the implications are sobering. A meeting is not just an exchange of information; it is a social contract. Participants implicitly agree to be present, to listen, to respond in good faith. When one party secretly outsources their participation to a machine, they violate not just the expectation of presence but the norms of reciprocity that make collaborative work possible. The person who sent the avatar may receive a neat summary afterwards, but their counterparts invested real cognitive and emotional effort into an interaction they believed was mutual. That imbalance is not a minor technical detail. It is a breach of the implicit bargain that makes professional relationships function.
From a Kantian perspective, the issue is equally stark. Immanuel Kant's categorical imperative holds that one should act only on maxims that could be willed as universal laws without contradiction. If everyone sent avatars to every meeting, the meeting itself would cease to function as a space for genuine human deliberation. The universalisation test fails spectacularly: a world in which all meeting participants are AI avatars is a world in which meetings are simply algorithms talking to algorithms, with no humans in the loop at all. The very concept of a “meeting” presupposes the meeting of minds, not the collision of language models.
Yet utilitarians might see the matter differently. If an AI avatar can represent its principal accurately, freeing that person to do more meaningful work or simply to rest, the aggregate benefit might outweigh the discomfort of reduced authenticity. PwC's 2025 Global Workforce Hopes and Fears Survey, which surveyed nearly 50,000 workers across 48 economies and 28 sectors, found that daily users of generative AI reported being more productive (92 per cent, compared to 58 per cent of infrequent users), with higher perceived job security and pay. If avatars extend these productivity gains by reclaiming hours lost to routine meetings, the utilitarian calculus could tip in their favour. The question then becomes empirical: does the avatar actually represent the person faithfully, or does it introduce distortions, biases, and errors that compound over time?
The Markkula Center for Applied Ethics at Santa Clara University published a case study examining precisely these tensions. The centre frames the discussion through multiple ethical lenses, including rights, justice, utilitarianism, the common good, virtue, and care ethics, and invites readers to consider what obligations a person has to disclose their use of an avatar. The case study does not offer a tidy resolution. Instead, it highlights that the ethics of meeting avatars depend heavily on context: who is in the meeting, what is at stake, whether disclosure has occurred, and what alternatives exist.
Consent, Disclosure, and the Trust Deficit
If the philosophical arguments suggest that undisclosed avatar use is ethically problematic, the practical question becomes: what kind of disclosure is sufficient?
Zoom's own approach offers one model. When the company's AI Companion joins a third-party meeting to transcribe and summarise, it automatically posts a message in the meeting chat identifying itself as a bot and indicating that it is transcribing. Its video tile displays the word “Transcribing” alongside the Zoom AI Companion logo. This is transparency by design, built into the product architecture so that disclosure is not left to the discretion of individual users.
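To make the pattern concrete, here is a minimal sketch of what disclosure-by-design can look like at the platform level. The event names and interfaces are hypothetical, not Zoom's actual SDK; the point is that disclosure fires automatically in the join path, before the synthetic participant is visible to anyone, rather than relying on an individual user to announce it.

```typescript
// Hypothetical meeting-platform hook: disclosure is enforced in the join
// path itself, so no AI participant can appear without announcing itself.
// These interfaces are illustrative, not Zoom's real SDK.

interface Participant {
  id: string;
  displayName: string;
  isSynthetic: boolean;                      // set by the platform, not the client
  syntheticKind?: "transcriber" | "avatar";
}

interface MeetingSession {
  postChatMessage(text: string): void;
  setTileBadge(participantId: string, label: string): void;
}

function admitParticipant(session: MeetingSession, p: Participant): void {
  if (p.isSynthetic) {
    // Disclosure happens before the participant is rendered to anyone.
    const role = p.syntheticKind === "avatar" ? "AI avatar" : "AI assistant";
    session.postChatMessage(
      `${p.displayName} is an ${role} and is not a live human participant.`
    );
    session.setTileBadge(
      p.id,
      p.syntheticKind === "avatar" ? "AI Avatar" : "Transcribing"
    );
  }
  // ...proceed with normal admission (media negotiation, roster update).
}
```

The design choice that matters is where the disclosure lives: in admission logic the platform controls, not in an optional setting the avatar's owner can quietly leave switched off.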
But the new photorealistic avatar feature complicates this model considerably. If the avatar looks and sounds convincingly like a real person, a small chat notification may not be enough to prevent participants from believing they are interacting with a human. The gap between what the technology can simulate and what a text disclaimer can effectively communicate grows wider with each improvement in rendering fidelity, voice synthesis, and facial animation. There is an old principle in design: if you have to explain it, you have already failed. When a photorealistic avatar requires a text disclaimer to prevent deception, the product's default state is to mislead.
Zoom appears to recognise this tension. Alongside its avatar rollout in March 2026, the company introduced deepfake-detection technology for meetings, providing real-time alerts when synthetic audio or video is detected. This is a notable acknowledgement that the very product Zoom is selling, convincing digital replicas of real people, simultaneously creates a security and trust risk that requires countermeasures. It is as though a locksmith, having sold you the world's most sophisticated lock-picking kit, also offers to install a better deadbolt.
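Real-time alerting of this kind is, at its core, a thresholding problem: raw per-frame detector scores are noisy, so a practical client smooths them and applies hysteresis before warning anyone. The sketch below illustrates that general pattern under assumed score semantics; it says nothing about how Zoom's actual detector works.

```typescript
// Illustrative alerting logic over a stream of per-frame synthetic-media
// scores in [0, 1]. The detector and its score scale are assumptions.

class SyntheticMediaAlerter {
  private smoothed = 0;
  private alerting = false;

  constructor(
    private readonly alpha = 0.2,    // exponential smoothing factor
    private readonly raiseAt = 0.8,  // raise an alert above this level
    private readonly clearAt = 0.5   // clear it below this level (hysteresis)
  ) {}

  // Returns "raise" or "clear" only on a state change, so the UI shows
  // one banner per episode instead of flickering on every frame.
  onScore(score: number): "raise" | "clear" | null {
    this.smoothed = this.alpha * score + (1 - this.alpha) * this.smoothed;
    if (!this.alerting && this.smoothed >= this.raiseAt) {
      this.alerting = true;
      return "raise";   // e.g. banner: "possible synthetic audio/video"
    }
    if (this.alerting && this.smoothed <= this.clearAt) {
      this.alerting = false;
      return "clear";
    }
    return null;
  }
}

// Usage: feed detector output as frames arrive.
const alerter = new SyntheticMediaAlerter();
alerter.onScore(0.9); // after enough high scores, returns "raise"
```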
The broader data on consumer attitudes reinforces the concern. Survey research consistently finds that most people value authentic content and regard undisclosed AI use as a breach of trust. More than half of consumers surveyed want explicit disclosure when AI-generated video, images, or avatars are used, and younger respondents, particularly Generation Z, are especially likely to judge unlabelled AI-generated content inauthentic and unethical.
This creates a paradox for companies eager to deploy the technology. The more convincing the avatar, the more useful it is as a communication tool, but the more convincing it is, the greater the expectation of disclosure, and the more disclosure undermines the illusion of natural presence that makes the avatar appealing in the first place. Call it the uncanny valley of trust: as the technology improves, it enters a zone where it is good enough to deceive but not good enough to make deception acceptable.
The Legal Landscape Takes Shape
Regulators have not been idle. The legal framework surrounding AI-generated likenesses, synthetic media, and digital avatars has expanded rapidly across multiple jurisdictions, creating a patchwork of obligations that any organisation deploying meeting avatars must navigate.
In the European Union, Article 50 of the AI Act establishes transparency obligations for providers and deployers of AI systems that generate or manipulate content constituting a deepfake. The rules require that such content be clearly disclosed as artificially generated or manipulated. These transparency provisions are set to take full effect in August 2026, with a Code of Practice expected to be finalised in mid-2026 to establish practical standards. The scope is broad: the EU's framework covers AI-generated text, audio, video, images, avatars, and digital twins. For any multinational corporation considering the deployment of meeting avatars across European operations, the compliance obligations are substantial and the penalties for failure significant.
In the United States, the regulatory picture is more fragmented but no less active. As of early 2026, forty-six states have enacted legislation targeting AI-generated media in some form. In 2025 alone, 146 bills containing language specific to AI deepfakes were introduced in state legislatures. The federal TAKE IT DOWN Act, passed in 2025, represents America's first national law directly regulating deepfake abuse, though its primary focus is nonconsensual intimate content rather than business communications. At the state level, Tennessee's ELVIS Act (Ensuring Likeness, Voice, and Image Security) prohibits the unauthorised commercial use of a person's voice, including AI-generated replications. California's AB 2602, effective from January 2025, renders unenforceable any contract provision that allows for the creation of a digital replica of an individual's likeness in place of work the individual would have otherwise performed in person, unless the contract includes a reasonably specific description of the intended uses and the individual had professional legal representation.
Morrison Foerster, the global law firm, published an extensive analysis in September 2025 noting that digital avatars sit at the nexus of several evolving legal regimes, including intellectual property rights, publicity rights, and consumer protection. The firm's assessment is unambiguous: companies deploying digital avatars must navigate a complex and rapidly shifting regulatory environment, and the cost of noncompliance is rising.
The Federal Trade Commission has also signalled its intent to act. Fines for “deceptive synthetic endorsements” now reach fifty thousand dollars per violation, a figure that concentrates the mind of any marketing or communications department considering avatar deployment without adequate disclosure. What remains unclear is whether a meeting avatar that participates in a business discussion without disclosure constitutes a “deceptive” practice under existing consumer protection law, or whether new legislative categories will be needed to address this specific use case.
Corporate Adoption and the Productivity Seduction
Despite the ethical and legal headwinds, the commercial momentum behind AI avatars is formidable. The productivity case is compelling on its face. If a digital twin can attend a routine status update, freeing its human counterpart to focus on strategic thinking, creative work, or simply recovering from meeting fatigue, the efficiency gains could be substantial. Microsoft has moved aggressively in this direction: at Ignite 2025, the company revealed that its Copilot agents had evolved from “helping with work to handling it on your behalf,” with autonomous capabilities governed through permission scopes, approval workflows, and execution logging. The Facilitator agent in Microsoft Teams can drive agendas, take notes, keep meetings on track, and manage actions, edging closer to a future where human attendance becomes optional.
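Governance of this kind is usually expressed as declarative configuration: what the agent may touch unaided, which actions need a human sign-off, and what gets logged. The shape below is a generic illustration of that pattern; the field names and scope strings are assumptions, not Microsoft's actual schema.

```typescript
// Generic sketch of agent governance: permission scopes, approval gates,
// and execution logging. Illustrative only; not Microsoft's real schema.

interface AgentGovernance {
  agent: string;
  scopes: string[];                                    // what the agent may do unaided
  approvals: { action: string; approver: string }[];   // human-in-the-loop gates
  logging: { destination: string; retainDays: number };
}

const facilitatorPolicy: AgentGovernance = {
  agent: "meeting-facilitator",
  scopes: ["calendar:read", "notes:write", "tasks:create"],
  approvals: [
    { action: "tasks:assign", approver: "meeting-owner" }, // agent proposes, a human approves
  ],
  logging: { destination: "audit-log", retainDays: 365 },
};

function mayExecute(
  policy: AgentGovernance,
  action: string
): "allow" | "needs-approval" | "deny" {
  if (policy.approvals.some((a) => a.action === action)) return "needs-approval";
  return policy.scopes.includes(action) ? "allow" : "deny";
}

console.log(mayExecute(facilitatorPolicy, "notes:write"));  // "allow"
console.log(mayExecute(facilitatorPolicy, "tasks:assign")); // "needs-approval"
```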
Otter.ai, which reached one hundred million dollars in annual recurring revenue in 2025, exemplifies the trajectory from the startup side. The company has evolved from a passive transcription tool into an active meeting agent that can attend, summarise, and act on discussions. Its enterprise suite includes AI agents for sales teams, autonomous product demonstrations, and a comprehensive search capability spanning an organisation's entire meeting archive. Otter claims that for the average enterprise customer, the platform saves the equivalent workload of one full-time employee for every twenty users, amounting to a ten-to-one return on investment. For a one-thousand-user organisation, that works out to fifty full-time equivalents' worth of work saved, or more than six million dollars in annual cost savings.
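Those headline numbers are easy to sanity-check. Working backwards, the arithmetic only closes if one assumes a fully loaded cost of roughly 120,000 dollars per full-time employee, which is an assumption of this sketch rather than a figure Otter publishes:

```typescript
// Reproducing Otter's claimed savings. The fully loaded cost per FTE is an
// assumption chosen to match the stated ">$6M" figure; Otter does not
// publish the per-FTE cost behind its claim.

const users = 1_000;
const usersPerFteSaved = 20;              // Otter's claim: 1 FTE saved per 20 users
const assumedCostPerFte = 120_000;        // USD per year, assumption

const ftesSaved = users / usersPerFteSaved;           // 50
const annualSavings = ftesSaved * assumedCostPerFte;  // 6,000,000

console.log(`${ftesSaved} FTEs saved ≈ $${annualSavings.toLocaleString()} per year`);
```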
Dan Thomson, the founder and chief executive of Sensay, a startup that creates AI replicas of employees, has gone further still. Thomson, who holds a BA in Philosophy from King's College London and an MBA from the University of Cambridge, uses his own digital twin to draft replies to emails and messages, estimating that it saves him hours each day. Sensay's digital replicas are trained on employees' own materials and communications, and Thomson has cited examples where deploying a digital persona on a company website increased online conversions by three hundred per cent and reduced support costs by fifty to seventy per cent.
The appeal is obvious. But the question of whether an AI avatar can truly “represent” someone in a meeting raises deeper issues about what representation means. A human delegate sent to a meeting can exercise judgement, read the room, improvise, push back, and make commitments. Today's AI avatars can, at best, deliver prepared remarks, summarise known information, and answer simple questions drawing on a corpus of the principal's past communications. They cannot negotiate in real time, pick up on subtle social cues, or take responsibility for the consequences of what they say. They cannot feel embarrassment when they get something wrong, and they cannot feel the weight of a promise they have made.
This gap between capability and expectation is where the greatest risk of deception lies. If participants believe they are engaging with a person who can make decisions and commitments, but are in fact speaking to a language model with a convincing face, the resulting misunderstandings could have real consequences for contracts, relationships, and organisational trust.
Cultural Fault Lines
Attitudes toward AI avatars are not uniform across cultures, and the global rollout of these technologies will inevitably encounter varying norms around presence, formality, and authenticity.
Japan offers a particularly instructive case. The country has a distinctive openness to AI-based technologies, including robots and avatars, rooted in cultural attitudes that have long embraced the idea of machines coexisting with humans. The Japanese government's Moonshot Goal 1 programme aims to realise a society where humans can be free from limitations of body, brain, space, and time by 2050, explicitly including “cybernetic avatars” as part of that vision. The adoption rate of generative AI among Japanese users rose from 33.5 per cent in February 2024 to 42.5 per cent in February 2025, reflecting a methodical but steady embrace of the technology. Japan's approach to AI governance, as highlighted by the World Economic Forum in January 2026, prioritises how institutions adapt and govern AI rather than what specific technologies they adopt, a philosophical distinction that could shape how meeting avatars are regulated in the region.
Yet even in Japan, the business culture's preference for careful evaluation before widespread implementation suggests that avatar adoption in high-stakes meetings will proceed cautiously. Companies like Hakuhodo, through its Human-Centred AI Institute, emphasise using AI as a “co-pilot” to enhance creativity rather than replace human presence, a framing that implicitly acknowledges the importance of the human element in professional interactions.
In cultures where personal relationships and face-to-face trust-building are paramount, such as many Middle Eastern and Latin American business environments, the introduction of AI avatars into meetings could be perceived as fundamentally disrespectful, a signal that the absent party does not value the relationship enough to show up in person. Conversely, in cultures that prize efficiency and directness, an avatar that delivers a crisp, well-prepared message might be received more warmly than a distracted, multitasking human on a video call.
The cultural dimension matters because it reveals that the question of deception is not purely philosophical or legal; it is also deeply social. What counts as deceptive depends on shared expectations, and those expectations vary enormously across contexts. A practice considered efficient and pragmatic in one business culture may be experienced as insulting or dishonest in another. Any regulatory framework that ignores this variation risks being either toothless or oppressive, depending on where it is applied.
The Asymmetry Problem
Perhaps the most troubling aspect of AI meeting avatars is the asymmetry they introduce into professional relationships. When one party sends an avatar and the other does not know, the avatar-sender gains an informational advantage: they receive a summary of the meeting without having invested the time or cognitive effort to participate, while the other participants have engaged in good faith, believing they were building a relationship with a person.
This asymmetry is not merely inconvenient; it restructures power dynamics in ways that could erode the foundations of professional trust. If colleagues, clients, or business partners come to suspect that they might be talking to an avatar at any given time, the baseline level of trust in all video interactions could decline. Every call becomes potentially suspect. Every participant must wonder: is that really you?
PwC's 2025 survey data is instructive here as well. The research found that only 14 per cent of workers use generative AI daily, but those who do report dramatically different experiences of productivity and security compared to those who do not. This gap creates a two-tier workforce: those who leverage AI tools (potentially including meeting avatars) and those who do not, with the former gaining significant advantages that may be invisible to the latter. When that advantage extends to sending an undisclosed avatar to a meeting, the information asymmetry becomes an ethical asymmetry as well.
The 2025 Edelman Trust Barometer documented growing concerns about AI's impact on societal trust, and the deployment of meeting avatars without robust disclosure norms could accelerate that erosion. Research on workplace trust from 2026 found that teams experiencing breakdowns in recognition and authentic interaction showed significantly higher turnover rates, with an average lead time of eighty-seven days between the first detectable decline in genuine connection and a resignation.
The irony is sharp: a technology designed to free people from the drudgery of unnecessary meetings could end up making all meetings less meaningful by injecting doubt into the fundamental question of whether anyone is really there.
Navigating What Comes Next
So what should organisations, regulators, and individuals do? The answer is unlikely to be a blanket prohibition. AI avatars offer genuine benefits, from multilingual communication to accessibility for people with disabilities or chronic health conditions that make sustained video presence difficult. The technology is here, and it will improve.
What matters is the framework within which it is deployed. Several principles seem essential.
First, disclosure must be mandatory, not optional. Any meeting participant represented by an AI avatar should be required to inform other participants before the meeting begins, not buried in a chat message that might be missed, but through a clear, unavoidable notification. Zoom's deepfake detection feature is a useful backstop, but it should not be the primary mechanism for ensuring transparency. The EU AI Act's transparency obligations, due to take full effect in August 2026, offer a model: providers of AI systems must ensure machine-readable marking and detectability of AI-generated content, placing the burden on technology companies by default rather than asking individual users to opt into honesty.
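In practice, “machine-readable marking” points toward a signed provenance manifest that travels with the media, in the spirit of the C2PA content-credentials standard. The sketch below is a drastically simplified illustration of the idea, using an HMAC in place of a real certificate chain; the field names and signing scheme are assumptions, not the Act's prescribed format.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Simplified provenance manifest in the spirit of C2PA content credentials.
// A real implementation would use certificate chains, not a shared HMAC key;
// the field names here are illustrative, not a standard.
interface ProvenanceManifest {
  generator: string;     // e.g. "avatar-engine/2.1" (hypothetical)
  aiGenerated: boolean;  // the machine-readable disclosure itself
  subject: string;       // whose likeness is being rendered
  issuedAt: string;      // ISO 8601 timestamp
}

function signManifest(m: ProvenanceManifest, key: string): string {
  return createHmac("sha256", key).update(JSON.stringify(m)).digest("hex");
}

function verifyManifest(m: ProvenanceManifest, sig: string, key: string): boolean {
  const expected = Buffer.from(signManifest(m, key), "hex");
  const given = Buffer.from(sig, "hex");
  return expected.length === given.length && timingSafeEqual(expected, given);
}

// A receiving client verifies the manifest before rendering the tile:
const manifest: ProvenanceManifest = {
  generator: "avatar-engine/2.1",
  aiGenerated: true,
  subject: "Jane Doe",
  issuedAt: new Date().toISOString(),
};
const sig = signManifest(manifest, "demo-shared-key");
if (verifyManifest(manifest, sig, "demo-shared-key") && manifest.aiGenerated) {
  console.log("Render tile with a mandatory 'AI-generated' badge.");
}
```

The point of signing is that the disclosure cannot be silently stripped in transit: a client that receives media without a valid manifest can treat it as undeclared and warn accordingly.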
Second, organisations need clear policies distinguishing between contexts where avatar use is acceptable and where it is not. A pre-recorded avatar delivering a company-wide update is categorically different from an avatar participating in a negotiation, a performance review, or a client pitch. The stakes, the expectations of presence, and the potential for harm differ dramatically across these scenarios. Internal guidelines should specify which meeting types permit avatar representation and which require genuine human attendance.
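Such guidelines are straightforward to encode and enforce at scheduling time. The categories and rules below are illustrative assumptions an organisation would tailor to its own context, not a standard taxonomy:

```typescript
// Illustrative avatar-use policy, checked when a meeting is scheduled.

type AvatarRule = "forbidden" | "allowed-with-disclosure" | "allowed";

const avatarPolicy: Record<string, AvatarRule> = {
  "company-wide-update": "allowed-with-disclosure", // pre-recorded broadcast
  "status-update": "allowed-with-disclosure",
  "negotiation": "forbidden",
  "performance-review": "forbidden",
  "client-pitch": "forbidden",
};

function mayUseAvatar(meetingType: string, disclosed: boolean): boolean {
  const rule = avatarPolicy[meetingType] ?? "forbidden"; // unknown types default to strict
  if (rule === "forbidden") return false;
  if (rule === "allowed-with-disclosure") return disclosed;
  return true;
}

console.log(mayUseAvatar("status-update", true)); // true
console.log(mayUseAvatar("negotiation", true));   // false: the stakes rule it out
```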
Third, the legal frameworks emerging across the EU, the United States, and elsewhere need to address the meeting-avatar use case specifically. Current legislation focuses heavily on deepfakes in political communications and nonconsensual intimate content, which are unquestionably important, but the professional communications context presents its own distinct challenges around consent, representation, and liability. If an avatar makes a commitment during a negotiation, who is legally bound? If an avatar misrepresents a position because it drew on outdated training data, who bears the responsibility? These questions need answers before, not after, the technology becomes ubiquitous.
Fourth, the technology companies building these tools bear a responsibility that extends beyond simply adding disclosure features. They must actively consider the incentive structures their products create. If the default setting makes it easy to send an avatar without disclosure and difficult to opt into transparency, the predictable result is widespread undisclosed use, regardless of what the terms of service say.
Finally, individuals must reckon with what they owe to the people they work with. Sending an avatar to a meeting is not inherently wrong, but doing so without telling anyone is a choice to prioritise convenience over honesty. In a professional culture already strained by remote work, algorithmic management, and the ambient anxiety of automation, that choice carries weight.
The Real Question Behind the Question
The debate over AI meeting avatars is, at its core, a debate about what we believe meetings are for. If meetings are simply information-exchange mechanisms, then avatars are a logical optimisation: a more efficient way to transmit and receive data. But if meetings are also spaces for relationship-building, for reading tone and body language, for the subtle negotiations of trust that underpin every working partnership, then the introduction of a convincing but non-sentient stand-in changes the nature of the interaction in ways that matter.
The discomfort many people feel about AI avatars attending meetings is not irrational technophobia. It is an intuition about something important: that presence is not just about being seen and heard, but about being accountable. A person who is genuinely present in a meeting can be surprised, challenged, moved, and changed by what happens there. An avatar cannot. It can only perform the appearance of those responses.
Whether that performance constitutes deception depends, ultimately, on whether it is disclosed. An avatar that announces itself as an avatar is a tool. An avatar that pretends to be a person is a lie. The line between the two is thin, and the technology industry's track record of respecting thin ethical lines is not, to put it diplomatically, encouraging.
As these tools proliferate through the spring and summer of 2026, the choices made by companies like Zoom and Microsoft, by regulators in Brussels and Washington, and by the millions of professionals deciding whether to click “send my avatar” will shape the norms of professional trust for years to come. The technology is neither good nor evil. But the decision to use it honestly, or not, very much is.
References and Sources
TechCrunch, “Klarna used an AI avatar of its CEO to deliver earnings, it said,” May 2025. https://techcrunch.com/2025/05/21/klarna-used-an-ai-avatar-of-its-ceo-to-deliver-earnings-it-said/
TechCrunch, “After Klarna, Zoom's CEO also uses an AI avatar on quarterly call,” May 2025. https://techcrunch.com/2025/05/22/after-klarna-zooms-ceo-also-uses-an-ai-avatar-on-quarterly-call/
TechCrunch, “Zoom CEO Eric Yuan says AI will shorten our workweek,” October 2025. https://techcrunch.com/2025/10/27/zoom-ceo-eric-yuan-says-ai-will-shorten-our-workweek/
TechCrunch, “Zoom introduces an AI-powered office suite, says AI avatars for meetings arrive this month,” March 2026. https://techcrunch.com/2026/03/10/zoom-launches-an-ai-powered-office-suite-says-ai-avatars-for-meetings-are-coming-soon/
Raconteur, “Tech CEOs are sending their AI avatars to meetings,” 2025. https://www.raconteur.net/technology/ai-avatars-meetings
Fortune, “Zoom founder Eric Yuan wants 'digital twins' to attend meetings for you so you can 'go to the beach' instead,” June 2024. https://fortune.com/2024/06/05/zoom-founder-eric-yuan-digital-ai-twins-attend-meetings-for-you/
Markkula Center for Applied Ethics, Santa Clara University, “Meeting Avatars: An AI Ethics Case Study.” https://www.scu.edu/ethics/focus-areas/internet-ethics/resources/meeting-avatars-an-ai-ethics-case-study/
Zoom Support, “Enabling or disabling AI Companion to join third-party meetings for meeting summaries.” https://support.zoom.com/hc/en/article?id=zm_kb&sysparm_article=KB0080357
Zoom Newsroom, “New AI innovations for Zoom Workplace simplify and scale teamwork,” March 2026. https://news.zoom.com/ec26-zoom-workplace/
EU Artificial Intelligence Act, “Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems.” https://artificialintelligenceact.eu/article/50/
Herbert Smith Freehills Kramer, “Transparency obligations for AI-generated content under the EU AI Act: From principle to practice,” March 2026. https://www.hsfkramer.com/notes/ip/2026-03/transparency-obligations-for-ai-generated-content-under-the-eu-ai-act-from-principle-to-practice
Morrison Foerster, “Digital Avatars Deep Dive Series: Navigating the Legal and Regulatory Landscape in 2025,” September 2025. https://www.mofo.com/resources/insights/250922-digital-avatars-deep-dive-series-navigating
ComplianceHub, “Complete Guide to U.S. Deepfake Laws: 2025 State and Federal Compliance Landscape.” https://www.compliancehub.wiki/complete-guide-to-u-s-deepfake-laws-2025-state-and-federal-compliance-landscape/
MultiState, “How AI-Generated Content Laws Are Changing Across the Country,” February 2026. https://www.multistate.us/insider/2026/2/12/how-ai-generated-content-laws-are-changing-across-the-country
Congress.gov, “S.1396 – Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025.” https://www.congress.gov/bill/119th-congress/senate-bill/1396/text
Otter.ai, “Otter.ai Caps Transformational 2025 with $100M ARR Milestone,” 2025. https://otter.ai/blog/otter-ai-caps-transformational-2025-with-100m-arr-milestone-industry-first-ai-meeting-agents-and-global-enterprise-expansion
Sensay, CEO Dan Thomson profile and company information. https://danthomson.ai/
Dagama World, “Sensay CEO Dan Thomson on Digital Identity and Nomadic Leadership.” https://www.dagama.world/blog/sensay-ceo-dan-thomson-on-digital-identity-and-nomadic-leadership
Luciano Floridi, “The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities,” Oxford University Press, 2023. https://global.oup.com/academic/product/the-ethics-of-artificial-intelligence-9780198883098
Luciano Floridi, “Artificial Intelligence, Deepfakes and a Future of Ectypes,” SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3834958
ULPA, “The Rise of AI in Japan: A Complete Guide for 2025.” https://www.ulpa.jp/post/the-rise-of-ai-in-japan-a-complete-guide-for-2025
World Economic Forum, “What Japan's path to responsible AI can teach us,” January 2026. https://www.weforum.org/stories/2026/01/japan-path-to-responsible-ai-and-what-it-can-teach-us/
Edelman, “The AI Trust Imperative: Navigating the Future with Confidence,” 2025 Trust Barometer. https://www.edelman.com/trust/2025/trust-barometer/report-tech-sector
Happily.ai, “The 2026 State of Workplace Trust: How Recognition Frequency Predicts Retention,” 2026. https://happily.ai/blog/state-of-workplace-trust-2026/
ArentFox Schiff, “The Business of AI Avatars: Key Legal Risks and Best Practices.” https://www.afslaw.com/perspectives/alerts/the-business-ai-avatars-key-legal-risks-and-best-practices
Traverse Legal, “AI Twins and Avatars: Legal Risks for Companies Using Synthetic Voice and Likeness Technology.” https://www.traverselegal.com/blog/ai-avatar-legal-risks/
GMO Research and AI, “Japan's Generative AI Market Penetration and Business Adoption Trends 2025.” https://gmo-research.ai/en/resources/studies/2025-study-gen-AI-jp
PwC, “Global Workforce Hopes and Fears Survey 2025.” https://www.pwc.com/gx/en/issues/workforce/hopes-and-fears.html
Microsoft 365 Blog, “Microsoft Ignite 2025: Copilot and agents built to power the Frontier Firm,” November 2025. https://www.microsoft.com/en-us/microsoft-365/blog/2025/11/18/microsoft-ignite-2025-copilot-and-agents-built-to-power-the-frontier-firm/
Otter.ai, “Having Generated $1 Billion+ Annual ROI for Customers, Otter.ai Aims for Complete Meeting Transformation.” https://otter.ai/blog/having-generated-1-billion-annual-roi-for-customers-otter-ai-aims-for-complete-meeting-transformation-by-launching-next-gen-enterprise-suite

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk