Virtual Stars, Real Losses: What AI Influencers Mean for Human Creators

She is 25 years old, lives in Barcelona, has pink hair, and earns up to ten thousand euros a month from brand partnerships with companies including Amazon and Razer. She has never eaten a meal, never taken a breath, and never existed outside the rendering pipelines of a creative agency called The Clueless. Her name is Aitana Lopez, and she represents something that should unsettle anyone who makes a living by being themselves on the internet.
Aitana is not an anomaly. She is a data point on an exponential curve. The global virtual influencer market, valued at approximately 6.06 billion US dollars in 2024 according to Grand View Research, is projected to reach 45.88 billion dollars by 2030, growing at a compound annual growth rate of 40.8 per cent. Chief marketing officers are expected to allocate 30 per cent of their influencer marketing budgets to virtual influencers by 2026. The synthetic faces are arriving, and they are arriving fast.
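For readers who want to sanity-check projections like these, a compound annual growth rate is just reverse compounding. A minimal Python sketch, using the Grand View Research figures quoted above (the helper names are mine, not the report's):

```python
def cagr(start_value, end_value, years):
    """Implied compound annual growth rate between two market sizes."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value, rate, years):
    """Forward projection of a market size at a fixed compound rate."""
    return start_value * (1 + rate) ** years

# Market sizes in billions of US dollars, 2024 to 2030 (six years)
implied_rate = cagr(6.06, 45.88, 2030 - 2024)  # roughly 0.40
```

The implied rate comes out a little under the reported 40.8 per cent; research firms often compound from a different base year, so small discrepancies of this kind are normal and do not change the picture of very rapid growth.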
But here is the question that the marketing projections do not answer, the one that sits at the uncomfortable intersection of technology, psychology, and economics: if AI can create perfect, tireless digital influencers that never make mistakes or demand higher pay, what happens to human creators and the authentic human connection that audiences seek from content creators?
The answer, it turns out, is considerably more complicated than the headlines suggest.
Synthetic Stars and the Agencies Behind Them
The modern virtual influencer industry traces its most visible origins to 2016, when a Los Angeles startup called Brud introduced Lil Miquela to Instagram. Presented as a 19-year-old Brazilian-American model and musician, Miquela accumulated millions of followers and secured brand partnerships with Prada, Calvin Klein, Samsung, and dozens of other companies. TIME magazine named her one of the 25 most influential people on the internet in 2018, alongside BTS and Rihanna. Her creators at Brud attracted approximately 30 million dollars in investment from firms including Sequoia Capital and Spark Capital, earning a valuation of approximately 125 million dollars before being acquired by Dapper Labs in 2021. By some estimates, Lil Miquela has generated over 10 million dollars in revenue, charging around 10,000 dollars per sponsored Instagram post.
Then came Noonoouri, the animated avatar created by Munich-based graphic designer Joerg Zuber in 2018. With her exaggerated doll-like features and impossibly large eyes, Noonoouri made no pretence of being human. Yet Dior trusted her enough to grant an Instagram takeover for their Cruise Makeup collection. She went on to partner with Versace, Marc Jacobs, Burberry, Balenciaga, and Kim Kardashian's beauty and fashion lines. The modelling agency IMG, which has represented Gigi Hadid and Bella Hadid, signed her to their roster. In 2023, Warner Music gave Noonoouri a record deal, making her the first purely digital pop star signed to a major label.
The economics that drive this expansion are bluntly logical. Ruben Cruz, founder of The Clueless and creator of Aitana Lopez, explained his rationale to Euronews in 2024. “We started analysing how we were working and realised that many projects were being put on hold or cancelled due to problems beyond our control. Often it was the fault of the influencer or model and not due to design issues.” The solution, in his view, was to build an influencer who would never be late, never cause a scandal, and never renegotiate her fee.
This calculus appeals to brands for reasons that extend beyond mere cost savings. Virtual influencers offer what marketing professionals call “brand safety” in its purest form. They cannot be photographed at a competitor's event. They will not post an ill-considered political opinion at three in the morning. They will not age out of their target demographic, gain weight, or develop substance abuse problems. They are, in the language of corporate risk management, perfectly controllable assets.
And the data suggests that audiences are, at least superficially, responding. According to HypeAuditor, virtual influencers generate higher average engagement rates than their human counterparts: 2.84 per cent compared with 1.72 per cent, roughly a 1.7-fold advantage. Virtual influencer campaigns in 2023 achieved an average engagement rate of 5.9 per cent, triple the 1.9 per cent recorded for campaigns featuring real people.
Yet this headline figure conceals a crucial nuance. When it comes to sponsored content specifically, human influencers achieve 2.7 times more engagement than their AI counterparts. Lil Miquela's BMW campaign, for instance, generated an average engagement rate of just 0.6 per cent, compared to the 3.6 per cent delivered by human creators working with the same brand. The implication is that whilst virtual influencers may attract curiosity, human creators still hold a measurable advantage when the goal is to convert attention into commercial action.
The Economics of Being Replaced
For the estimated 50 million people worldwide who consider themselves professional content creators, these numbers are not merely interesting. They are existential.
The creator economy in 2025, valued at approximately 191 billion dollars and projected to grow to 528 billion dollars by 2030, is simultaneously booming and fracturing. The total market is expanding, but the share available to individual human creators is under unprecedented pressure. A 93 per cent year-over-year increase in the number of people creating user-generated content has intensified competition at every level. Among full-time creators, 52 per cent report a noticeable decline in consumer spending on affiliate-linked products. Among part-time creators, 40 per cent cite falling brand commissions and fewer sponsorship opportunities.
The threat is not limited to influencers in the traditional sense. A landmark study published in December 2024 by CISAC, the International Confederation of Societies of Authors and Composers representing over five million creators globally, provided the first comprehensive economic modelling of generative AI's impact on creative professions. Conducted by PMP Strategy, the study projected that music creators will see 24 per cent of their revenues at risk of loss by 2028, whilst audiovisual creators face a 21 per cent revenue risk over the same period. The cumulative financial impact amounts to an estimated 22 billion euros over five years: 10 billion euros in music and 12 billion euros in audiovisual production.
The study found that the market for AI-generated music and audiovisual content is expected to grow from approximately 3 billion euros to 64 billion euros by 2028. Generative AI music alone is projected to account for roughly 20 per cent of traditional music streaming platforms' revenues and around 60 per cent of music libraries' revenues by that date. Translators and adaptors for dubbing and subtitling face the most severe displacement, with 56 per cent of their revenue at risk, whilst screenwriters and directors could see their income cannibalised by 15 to 20 per cent.
Perhaps most strikingly, the CISAC study noted that not a single AI developer had signed a licensing agreement with any of the 225 collective management organisations that represent creators worldwide. The value transfer, in other words, is flowing in one direction: from human creators to the technology companies building the systems that will compete with them.
For content creators operating on platforms like Instagram, TikTok, and YouTube, the competitive dynamics are subtler but no less consequential. AI-generated content can be produced at a fraction of the cost and at a pace that no human can match. A virtual influencer does not need sleep, does not require a production crew, and can generate dozens of posts per day across multiple platforms simultaneously. The marginal cost of producing one more piece of content approaches zero. For a human creator who must plan, shoot, edit, and publish content whilst also managing brand relationships, responding to comments, and maintaining some semblance of a personal life, the asymmetry is stark.
What Audiences Actually Want (and What They Say They Want)
The conventional wisdom holds that audiences will always prefer “authentic” human connection over synthetic perfection. The research suggests this is true, but with caveats that should concern anyone who relies on conventional wisdom for comfort.
A 2025 survey from the Influencer Marketing Factory found that only 15 per cent of consumers express high trust in AI influencers, whilst nearly half say they are less likely to trust content from a virtual influencer compared to a human one. In a study conducted by Baringa, 77 per cent of respondents said they would want to know if content had been created by AI, either wholly or partially. Only 12 per cent said they would not care.
The trust penalty for AI-generated content is measurable and consistent across studies. Research from the Nuremberg Institute for Market Decisions published in 2025 found that simply labelling an advertisement as AI-generated made consumers perceive it as less natural and less useful, which lowered both their attitudes towards the advertisement and their willingness to research or purchase the product. Approximately 62 per cent of consumers reported being less likely to engage with or trust social media content they knew was generated by AI.
And yet, the picture is not quite so simple. The same research ecosystem reveals contradictions that complicate the “authenticity always wins” narrative. A study of TikTok users in the Middle East published in Discover Sustainability found that AI influencers can establish meaningful emotional bonds and credibility, sometimes outperforming human influencers in generating community cohesion and network expansion. Research published in Psychology and Marketing found that followers respond to virtual influencers in ways that mirror their responses to human creators, with engagement rates and measures of trust and source credibility that rival those of their flesh-and-blood counterparts.
The generational divide is particularly telling. Virtual influencers appeal more to Gen Z consumers, who have grown up immersed in AI-enabled technologies and may not share older generations' preoccupation with the notion of authenticity as it has traditionally been understood. When the distinction between “real” and “constructed” has been blurred since childhood (by filters, by avatars, by curated social media personas that bear only passing resemblance to the people behind them), the arrival of an explicitly artificial influencer may feel less like a violation and more like an honest acknowledgement of what social media has always been.
There is a deeper irony here. The “authenticity” that human influencers claim as their competitive advantage has always been, to a significant degree, performative. The casual photograph that required forty takes. The “unfiltered” video that was carefully scripted. The “honest review” that was contractually obligated to include three positive talking points about the sponsoring brand. If authenticity is already a construction, does it matter whether the constructor is carbon-based or silicon-based?
The answer, according to the psychology of parasocial relationships, is more nuanced than a simple yes or no.
The Strange Intimacy of Parasocial Bonds
Parasocial relationships, the one-sided emotional connections that audiences form with media figures, have been studied by psychologists since Donald Horton and Richard Wohl first described the phenomenon in 1956. Originally applied to television presenters and film stars, the concept has found renewed relevance in the age of social media, where the perceived intimacy between creator and audience is amplified by direct messaging, live streaming, and the illusion of personal access.
The question of whether parasocial relationships can form with virtual influencers has been the subject of intense academic investigation. A preregistered experiment published in New Media and Society by Stein, Breves, and Anders in 2024 found that viewers' parasocial interactions did not differ significantly between a human influencer and a virtual one. However, the researchers identified what they called “opposing effects”: whilst a direct effect suggested stronger parasocial interactions with the virtual influencer, participants simultaneously perceived this persona as less mentally humanlike and less similar to themselves. These two forces partially cancelled each other out.
Research published in the Journal of Business Research by Liu and Wang in 2025 added another layer of complexity through the lens of the uncanny valley. Studying 826 Instagram users, they found that as virtual influencers become more humanlike, they often trigger psychological unease and eeriness. This discomfort intensifies when consumers deeply engage with virtual influencer content whilst remaining aware of its artificial nature, potentially diminishing the strength of parasocial relationships at precisely the moment when the technology becomes most convincing.
The implications are paradoxical. Virtual influencers that look obviously artificial (like Noonoouri, with her cartoonish proportions) may actually generate stronger parasocial bonds than those designed for photorealism, because they do not trigger the uncanny valley response. The more a virtual influencer tries to pass as human, the more its artificiality may repel the audience it seeks to attract.
But there is a counter-trend that complicates this analysis. Newer AI systems are becoming sophisticated enough to generate personalised responses, to adapt their communication styles based on audience feedback, and to create the impression of genuine emotional reciprocity. Research on AI influencer marketing has explored the potential of what scholars call “Dynamic Emotional Resonance” and “AI-Driven Attachment Styles,” whereby artificial systems learn to mirror the emotional patterns that foster deep parasocial bonds. If an AI influencer can respond to your comment in a way that feels personally meaningful, if it can remember your previous interactions and reference them naturally, if it can adapt its tone and content to your individual preferences, the distinction between “real” and “artificial” connection becomes increasingly difficult to maintain.
This is where the question stops being about marketing and starts being about something more fundamental. If the feeling of connection is indistinguishable from genuine connection, does the distinction matter? The answer depends entirely on what you believe connection is for.
The Algorithm's Thumb on the Scale
The competitive landscape between human and AI creators is not shaped solely by audience preferences. It is shaped, to a significant degree, by the platforms themselves and the algorithms that determine what content gets seen.
Research from Cornell University by Brooke Erin Duffy and Colten Meisner, based on interviews with 30 creators across TikTok, Instagram, Twitch, YouTube, and Twitter, found that creators invest significant labour in understanding the algorithms that govern their visibility. Because many creators operate across multiple platforms, they must learn the hidden rules for each one and adapt their entire approach to content production accordingly. The algorithms are not neutral arbiters; they are designed to maximise engagement, and they reward content that keeps users on the platform regardless of whether that content was produced by a human being or a rendering engine.
TikTok's algorithm, in particular, is designed for what engineers call “cold start” optimisation: it tests new content with small groups of users and, if those users engage, pushes it to progressively larger audiences. This design theoretically levels the playing field between established creators and newcomers. But it also means that content which is optimised for algorithmic engagement (consistent posting frequency, precise timing, trending audio, specific visual patterns) has an inherent advantage over content that prioritises the messy, unpredictable qualities that make human creators distinctive.
AI-generated content, by its nature, can be optimised for algorithmic preference with a precision that human creators cannot match. It can be produced at the exact frequency, length, and format that the algorithm rewards. It can incorporate trending elements within minutes of their emergence. It can A/B test variations of the same content simultaneously, learning in real time which version generates the most engagement. The algorithm does not care whether the content was made by a person or a process. It cares about watch time, completion rates, shares, and comments.
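The continuous testing described above can be made concrete with a toy sketch. What follows is an illustrative epsilon-greedy bandit, not any platform's actual ranking system: it allocates impressions across content variants, tracks observed engagement, and shifts the remaining impressions towards whichever variant is performing best. The function name, the engagement probabilities, and the parameter values are all invented for illustration.

```python
import random

def simulate_ab_test(variant_rates, trials=5000, epsilon=0.1, seed=42):
    """Toy epsilon-greedy allocation of impressions across content variants.

    variant_rates: hypothetical true engagement probabilities per variant
                   (illustrative numbers, not platform data).
    Returns (pulls, wins): impressions served and engagements observed.
    """
    rng = random.Random(seed)
    pulls = [0] * len(variant_rates)
    wins = [0] * len(variant_rates)
    for _ in range(trials):
        if rng.random() < epsilon:
            # Explore: occasionally serve a random variant
            arm = rng.randrange(len(variant_rates))
        else:
            # Exploit: serve the variant with the best observed rate so far
            # (untried variants are treated optimistically)
            arm = max(range(len(variant_rates)),
                      key=lambda i: wins[i] / pulls[i] if pulls[i] else 1.0)
        pulls[arm] += 1
        wins[arm] += rng.random() < variant_rates[arm]
    return pulls, wins
```

Real recommendation systems use far more sophisticated models than this, but the structural point survives the simplification: a process that can generate variants at negligible cost and test them continuously will converge on whatever the engagement metric rewards, at a speed no human production cycle can match.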
This creates a structural disadvantage for human creators that exists independently of audience preferences. Even if audiences prefer human-created content when given a clear choice, they may never be given that choice if the algorithm surfaces AI-generated content more frequently because it performs better on the metrics that platforms optimise for.
The research on algorithmic bias adds another dimension. Marc Faddoul, an AI researcher at UC Berkeley's School of Information, demonstrated that TikTok's recommendation algorithm would suggest accounts with profile pictures matching the same race, age, and facial characteristics as accounts a user already followed. The algorithm creates feedback loops in which certain types of content (and certain types of faces) are amplified whilst others are suppressed. If AI-generated influencers are designed to embody the physical characteristics that algorithms have historically amplified (conventionally attractive, often white, always polished), they may receive a structural boost that compounds their other advantages.
Regulation in the Age of Synthetic Persuasion
Regulators are beginning to grapple with the implications, though the pace of regulatory development lags considerably behind the pace of technological deployment.
In the United States, the Federal Trade Commission updated its Endorsement Guides in June 2023 to explicitly cover virtual influencers and AI-generated content. The FTC's position is that AI-generated personas must follow the same disclosure rules as human endorsers. If a virtual influencer promotes a product, both the sponsorship and the involvement of AI must be disclosed. In June 2025, the FTC proposed rules recasting any marketing mention (including promotional codes, affiliate links, and brand tags) as a paid endorsement requiring “clear and conspicuous” disclosure. The FTC put 670 companies on notice in 2023 alone, and enforcement actions in 2024 resulted in 337.3 million dollars being returned to consumers.
The European Union has gone further. Article 50 of the EU AI Act, now in phased enforcement, establishes what are arguably the world's most explicit transparency obligations for synthetic media. Providers and users of AI systems that generate or substantially manipulate images, audio, or video must ensure that such content is clearly identifiable as artificial. Violations can attract penalties of up to 15 million euros or 3 per cent of global annual turnover, whichever is higher. Full compliance is expected by August 2026.
In the United Kingdom, the Advertising Standards Authority issued guidance in May 2025 clarifying that AI use in advertising must be disclosed when it could mislead consumers about authenticity or performance. At the state level in the United States, California passed AB 2655 in 2024, requiring large online platforms to label or remove deceptive AI content, and AB 1836, which mandates disclosure and consent when AI recreates a person's image or voice for commercial use.
These regulatory frameworks address transparency but do not address the underlying competitive dynamics. Requiring disclosure that an influencer is AI-generated does not prevent that influencer from capturing market share from human creators. In fact, some research suggests that disclosure might have limited impact on purchasing behaviour. A study from the Influencer Marketing Factory found that 76 per cent of consumers trust AI influencers for product recommendations even when they know the influencer is not human, a striking contrast with the far lower levels of generalised trust recorded in the same firm's broader polling. The trust penalty, whilst real, may not be large enough to offset the cost and consistency advantages that AI influencers offer to brands.
There is also a growing concern about what happens when regulation cannot keep pace. The distinction between “AI-generated” and “AI-assisted” content is already blurring. If a human creator uses AI tools to write scripts, generate images, edit video, and optimise posting schedules, is the resulting content “human” or “AI”? Where is the line? And who draws it?
The Authenticity Premium and Its Limits
Despite the competitive pressures, there are reasons to believe that human creators will not be rendered obsolete. The research consistently identifies what might be called an “authenticity premium,” a measurable preference for human-created content that persists even as AI capabilities improve.
Getty Images reported in 2025 that nearly 90 per cent of consumers want transparency about whether images are AI-generated, and 98 per cent agree that authentic images and videos are pivotal in establishing trust. AI content that includes human strategic oversight performs 4.1 times better than fully automated output, according to industry benchmarking data. And 73 per cent of marketers who use AI employ a hybrid approach in which human editors polish AI-generated drafts rather than publishing them unmodified.
The share of consumers who view generative AI as a negative disruptor in the creator economy has nearly doubled since November 2023, jumping from 18 per cent to 32 per cent according to a July 2025 survey from Billion Dollar Boy and Censuswide. Half of surveyed consumers can now correctly identify AI-generated content, and when they do, approximately 52 per cent report reduced engagement. There is a further complication: despite growing confidence in their ability to spot AI content, consumers are remarkably poor at actually doing so. Research found that participants correctly identified AI-generated images only 31 per cent of the time in 2025, a figure worse than a coin toss, even as 43 per cent rated themselves as “very” or “fairly” confident in their detection abilities.
These numbers suggest that the market for genuine human connection is not disappearing. It may, however, be restructuring. The most likely outcome is not a binary replacement of human creators by AI counterparts, but rather a stratification. Premium human creators with distinctive voices, genuine expertise, and documented lived experience will command an authenticity premium that AI cannot replicate. Meanwhile, the vast middle tier of content creators producing generic lifestyle, beauty, fitness, and product review content will face the most severe competitive pressure from AI alternatives that can produce similar material at lower cost and higher volume.
This stratification carries uncomfortable implications for equity and access. The creator economy has been, for all its flaws, a pathway to economic independence for people who lacked access to traditional media gatekeepers. If AI competition pushes out the middle tier whilst preserving the top, it reinforces existing hierarchies rather than disrupting them. The creators most vulnerable to AI displacement are likely to be those who are already marginalised: creators in developing markets, creators from underrepresented communities, creators who lack the resources to invest in the production quality and personal branding required to compete at the premium tier.
Synthetic Perfection and What We Lose
Beyond economics, the rise of AI influencers raises cultural questions that resist quantification. What does it mean for a generation to form their deepest parasocial bonds with entities that have no inner life? What happens to our collective understanding of beauty when the most visible “people” on our screens are designed to embody algorithmically optimised physical ideals? What is lost when imperfection, the quality that makes human connection meaningful, is engineered out of the media landscape?
The criticism levelled at Aitana Lopez is instructive. Critics have noted that her hyper-polished body reinforces unrealistic beauty standards, particularly for young audiences. Others have pointed out that AI influencers risk displacing real creators, especially women who rely on appearance-based income. These concerns echo decades of feminist critique about media representation, but with a new dimension: the standards are now set not by retouched photographs of real people but by entirely fabricated beings who were never imperfect to begin with.
There is something philosophically disquieting about a media ecosystem in which the most influential voices belong to entities that have never experienced the conditions they discuss. An AI fitness influencer that has never felt the burn of a difficult workout. An AI travel influencer that has never been lost in a foreign city. An AI wellness influencer that has never struggled with mental health. The content may be technically competent, even engaging. But it is hollow in a way that matters, because the authority of a creator has always derived, at least in part, from the credibility of lived experience.
Checkr's 2025 consumer trust report captured a sentiment that may define the coming era. When asked what scares them most about AI-generated content, 39 per cent of Americans said their primary fear is simply not knowing what is real anymore, whether in news, photographs, or video. This concern outranked fears about financial scams, identity theft, and political manipulation. The erosion of shared reality is, for a significant portion of the population, the most troubling consequence of synthetic media's ascendance.
A Landscape in Tension
The future of the creator economy will not be determined by technology alone. It will be shaped by the choices that platforms, regulators, brands, and audiences make in the next several years.
Platforms could choose to label AI-generated content prominently and adjust their algorithms to ensure that human creators are not structurally disadvantaged. They could create separate categories for virtual and human influencers, giving audiences the information they need to make informed choices about the content they consume. Whether they will make these choices, given that AI content tends to drive higher engagement metrics, is another matter entirely.
Regulators could move beyond transparency requirements to establish substantive protections for human creators. The CISAC study's recommendation that policymakers act urgently to safeguard human creators, ensure they can exercise their legal rights, and demand transparency from AI services represents one possible direction. But regulation that restricts AI deployment risks being characterised as anti-innovation, and in the current political climate of many major markets, that is not a label that legislators are eager to attract.
Brands, for their part, will follow the data. If AI influencers deliver comparable or superior return on investment at lower cost, the economic logic of shifting budgets towards synthetic creators is compelling. The 52.8 per cent of marketers who believe virtual influencers will significantly shape the future of marketing are not making a prediction about technology. They are making a prediction about their own spending decisions.
And audiences, the supposed arbiters of authenticity, will continue to send mixed signals. They will say they prefer human creators whilst engaging enthusiastically with AI-generated content. They will demand transparency about AI involvement whilst following virtual influencers in growing numbers. They will express concern about the erosion of authenticity whilst applying filters to their own photographs and curating their own online personas with meticulous care.
The tension between human imperfection and synthetic perfection is not new. It is the latest iteration of a conflict that has accompanied every major media technology, from the airbrush to Photoshop to Instagram filters. What is new is the scale, the speed, and the degree to which the technology threatens not just to supplement human creators but to make certain categories of human creation economically unviable.
The creators who will thrive in this landscape are those who offer something that cannot be replicated by an algorithm: genuine vulnerability, hard-won expertise, the willingness to be wrong in public, the capacity to change their minds, the evidence of a life actually lived. These qualities have always been the foundation of the most compelling creative work. The arrival of AI influencers does not diminish their value. If anything, it clarifies it.
The question is whether the economic structures of the creator economy will continue to reward those qualities, or whether the relentless logic of cost optimisation, algorithmic preference, and synthetic perfection will squeeze them to the margins.
We are about to find out.
References and Sources
Grand View Research. “Virtual Influencer Market Size and Share, Industry Report, 2030.” Available at: https://www.grandviewresearch.com/industry-analysis/virtual-influencer-market-report
Straits Research. “Virtual Influencer Market Size, Growth and Demand Forecast by 2033.” Available at: https://straitsresearch.com/report/virtual-influencer-market
Euronews. “Meet the Spanish AI model earning up to 10,000 euros a month.” December 2024. Available at: https://www.euronews.com/next/2024/12/27/meet-the-first-spanish-ai-model-earning-up-to-10000-per-month
Supercar Blondie. “AI influencer Lil Miquela charges 10,000 dollars per Instagram post.” Available at: https://supercarblondie.com/ai-influencer-lil-miquela/
Virtual Humans. “Who is Miquela Sousa?” Available at: https://www.virtualhumans.org/human/miquela-sousa
Hypebeast. “Warner Music Signs Record Deal With AI-Generated Popstar, Noonoouri.” September 2023. Available at: https://hypebeast.com/2023/9/warner-music-signs-record-deal-ai-generated-popstar-noonoouri-artificial-intelligence
Virtual Humans. “Noonoouri: Fashion Icon Turned Pop Star.” Available at: https://www.virtualhumans.org/article/noonoouri-fashion-icon-turned-pop-star
CISAC. “Global economic study shows human creators' future at risk from generative AI.” December 2024. Available at: https://www.cisac.org/Newsroom/news-releases/global-economic-study-shows-human-creators-future-risk-generative-ai
Music Business Worldwide. “Market for Gen AI outputs to be worth over 16 billion euros annually by 2028.” December 2024. Available at: https://www.musicbusinessworldwide.com/market-for-gen-ai-outputs-to-be-worth-over-16bn-annually-by-2028-but-it-could-cannibalize-24-of-music-creators-revenues-cisac-predicts/
WebProNews. “2025 Creator Economy Booms to 191 Billion Amid AI Threats and Ethical Challenges.” Available at: https://www.webpronews.com/2025-creator-economy-booms-to-191b-amid-ai-threats-and-ethical-challenges/
Stein, J-P., Breves, P. L., and Anders, N. “Parasocial interactions with real and virtual influencers: The role of perceived similarity and human-likeness.” New Media and Society, 2024. Available at: https://journals.sagepub.com/doi/10.1177/14614448221102900
Liu and Wang. “Fostering Parasocial Relationships with Virtual Influencers in the Uncanny Valley: Anthropomorphism, Autonomy, and a Multigroup Comparison.” Journal of Business Research, 2025. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0148296324005289
Nuremberg Institute for Market Decisions. “Consumer attitudes toward AI-generated marketing content.” 2025. Available at: https://www.nim.org/en/publications/detail/transparency-without-trust
Baringa. “Trust: transparency earns trust.” 2025. Available at: https://www.baringa.com/en/insights/balancing-human-tech-ai/trust/
Influencer Marketing Factory. “Virtual Influencers Survey.” Available at: https://theinfluencermarketingfactory.com/virtual-influencers-survey-infographic/
The European Commission. “Code of Practice on marking and labelling of AI-generated content.” Available at: https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content
inBeat Agency. “FTC Guidelines for Influencers.” Updated 2025. Available at: https://inbeat.agency/blog/ftc-guidelines-for-influencers
Checkr. “America's Consumer Trust Crisis in the AI Era.” 2025. Available at: https://checkr.com/resources/articles/the-great-untrust-consumer-report-2025
Getty Images. “Nearly 90 per cent of Consumers Want Transparency on AI Images.” 2025. Available at: https://newsroom.gettyimages.com/en/getty-images/nearly-90-of-consumers-want-transparency-on-ai-images-finds-getty-images-report
SmythOS. “The AI Content Trust Gap: Why 73 per cent of Consumers Can Spot and Reject AI-Generated Marketing.” Available at: https://smythos.com/thought-leadership/the-ai-content-trust-gap-why-73-of-consumers-can-spot-and-reject-ai-generated-marketing/
Hello Partner. “76 per cent of Consumers Trust AI Influencers for Products.” November 2025. Available at: https://hellopartner.com/2025/11/14/76-of-consumers-trust-ai-influencers-for-products-should-creators-be-worried/
Faddoul, M. UC Berkeley School of Information. Research on TikTok algorithmic bias. Available at: https://www.ischool.berkeley.edu/news/2020/alumnus-marc-faddoul-discovers-racial-biases-tiktoks-algorithm
Duffy, B. E. and Meisner, C. Cornell University. Research on creator experiences with platform algorithms. Referenced in MIT Technology Review, 2022. Available at: https://www.technologyreview.com/2022/07/14/1055906/tiktok-influencers-moderation-bias/
Emarketer. “Consumer skepticism of AI in the creator economy is surging.” 2025. Available at: https://www.emarketer.com/content/consumer-skepticism-of-ai-creator-economy-surging
Euronews. “A quarter of musician revenue to be lost to AI by 2028, new study finds.” December 2024. Available at: https://www.euronews.com/culture/2024/12/05/a-quarter-of-musician-revenue-to-be-lost-to-ai-by-2028-new-study-finds

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk