No Consent, No Credit, No Pay: The AI Art Governance Failure

When Sarah Andersen, Kelly McKernan, and Karla Ortiz filed their copyright infringement lawsuit against Stability AI and Midjourney in January 2023, they raised a question that now defines one of the most contentious debates in technology: can AI image generation's creative potential be reconciled with artists' rights and market sustainability? More than two years later, that question remains largely unanswered, but the outlines of potential solutions are beginning to emerge through experimental licensing frameworks, technical standards, and a rapidly shifting platform landscape.
The scale of what's at stake is difficult to overstate. Stability AI's models were trained on LAION-5B, a dataset containing 5.85 billion images scraped from the internet. Most of those images were created by human artists who never consented to their work being used as training data, never received attribution, and certainly never saw compensation. At a U.S. Senate hearing, Karla Ortiz testified with stark clarity: “I have never been asked. I have never been credited. I have never been compensated one penny, and that's for the use of almost the entirety of my work, both personal and commercial, senator.”
This isn't merely a legal question about copyright infringement. It's a governance crisis that demands we design new institutional frameworks capable of balancing competing interests: the technological potential of generative AI, the economic livelihoods of millions of creative workers, and the sustainability of markets that depend on human creativity. Three distinct threads have emerged in response. First, experimental licensing and compensation models that attempt to establish consent-based frameworks for AI training. Second, technical standards for attribution and provenance that make the origins of digital content visible. Third, a dramatic migration of creator communities away from platforms that embraced AI without meaningful consent mechanisms.
Experiments in Consent and Compensation
The most direct approach to reconciling AI development with artists' rights is to establish licensing frameworks that require consent and provide compensation for the use of copyrighted works in training datasets.
Getty Images' partnership with Nvidia represents the most comprehensive attempt to build such a model. Rather than training on publicly scraped data, Getty developed its generative AI tool exclusively on its licensed creative library of approximately 200 million images. Contributors are compensated through a revenue-sharing model that pays them “for the life of the product”, not as a one-time fee, but as a percentage of revenue “into eternity”. On an annual recurring basis, the company shares revenues generated from the tool with contributors whose content was used to train the AI generator.
This Spotify-style compensation model addresses several concerns simultaneously. It establishes consent by only using content from photographers who have already agreed to license their work to Getty. It provides ongoing compensation that scales with the commercial success of the AI tool. And it offers legal protection, with Getty providing up to £50,000 in legal coverage per image and uncapped indemnification as part of enterprise solutions.
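The arithmetic of a recurring, pro-rata payout is easy to sketch. The snippet below is a minimal illustration, assuming a fixed percentage of annual tool revenue is pooled and divided by each contributor's share of the training images; Getty has not published its actual formula, so the pool size, weighting, and contributor names here are assumptions.

```python
# Hypothetical sketch of a recurring, pro-rata revenue share. The 25% pool and
# image-count weighting are illustrative assumptions, not Getty's actual formula.

def annual_contributor_payouts(tool_revenue: float,
                               contributor_share: float,
                               images_per_contributor: dict[str, int]) -> dict[str, float]:
    """Split a fixed slice of annual AI-tool revenue across contributors,
    weighted by how many of their licensed images were used in training."""
    pool = tool_revenue * contributor_share
    total_images = sum(images_per_contributor.values())
    return {name: pool * count / total_images
            for name, count in images_per_contributor.items()}

# Example: £10m of annual tool revenue, 25% shared with contributors.
payouts = annual_contributor_payouts(
    tool_revenue=10_000_000,
    contributor_share=0.25,
    images_per_contributor={"alice": 12_000, "bala": 3_000, "carmen": 500},
)
print(payouts)  # recomputed every year the product earns revenue, not a one-off fee
```

Because the payout is recalculated annually from current revenue, contributors' income rises and falls with the tool's commercial performance, which is what distinguishes this model from a one-time licensing fee.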
The limitations are equally clear. It only works within a closed ecosystem where Getty controls both the training data and the commercial distribution. Most artists don't license their work through Getty, and the model provides no mechanism for compensating creators whose work appears in open datasets like LAION-5B.
A different approach has emerged in the music industry. In Sweden, STIM (the Swedish music rights society) launched what it describes as the world's first collective AI licence for music. The framework allows AI companies to train their systems on copyrighted music lawfully, with royalties flowing back to the original songwriters both through model training and through downstream consumption of AI outputs.
STIM's Acting CEO Lina Heyman described this as “establishing a scalable, democratic model for the industry”, one that “embraces disruption without undermining human creativity”. GEMA, a German performing rights collection society, has proposed a similar model that explicitly rejects one-off lump sum payments for training data, arguing that “such one-off payments may not sufficiently compensate authors given the potential revenues from AI-generated content”.
These collective licensing approaches draw on decades of experience from the music industry, where performance rights organisations have successfully managed complex licensing across millions of works. The advantage is scalability: rather than requiring individual negotiations between AI companies and millions of artists, a collective licensing organisation can offer blanket permissions covering large repertoires.
Yet collective licensing faces obstacles. Unlike music, where performance rights organisations have legal standing and well-established royalty collection mechanisms, visual arts have no equivalent infrastructure. And critically, these systems only work if AI companies choose to participate. Without legal requirements forcing licensing, companies can simply continue training on publicly scraped data.
The consent problem runs deeper than licensing alone. In 2017, Monica Boța-Moisin coined the phrase “the 3 Cs” in the context of protecting Indigenous Peoples' cultural property: consent, credit, and compensation. This framework has more recently emerged as a rallying cry for creative workers responding to generative AI. But as researchers have noted, the 3 Cs “are not yet a concrete framework in the sense of an objectively implementable technical standard”. They represent aspirational principles rather than functioning governance mechanisms.
Regional Governance Divergence
The lack of global consensus has produced three distinct regional approaches to AI training data governance, each reflecting different assumptions about the balance between innovation and rights protection.
The United States has taken what researchers describe as a “market-driven” approach, in which private companies set de facto standards through their practices and internal frameworks. No specific law regulates the use of copyrighted material for training AI models. Instead, the issue is being litigated in lawsuits that pit content creators against the creators of generative AI tools.
In August 2024, U.S. District Judge William Orrick of California issued a significant ruling in the Andersen v. Stability AI case. He found that the artists had plausibly alleged that the companies violated their rights by illegally storing their work, and that Stable Diffusion may have been built “to a significant extent on copyrighted works” and was “created to facilitate that infringement by design”. The judge denied Stability AI and Midjourney's motions to dismiss the artists' copyright infringement claims, allowing the case to move towards discovery.
This ruling suggests that American courts may not accept blanket fair use claims for AI training, but the legal landscape remains unsettled. Yet without legislation, the governance framework will emerge piecemeal through court decisions, creating uncertainty for both AI companies and artists.
The European Union has taken a “rights-focused” approach, creating opt-out mechanisms that allow copyright owners to reserve their works from text and data mining. The EU AI Act explicitly declares text and data mining exceptions to be applicable to general-purpose AI models, but with critical limitations. If rights have been explicitly reserved through an appropriate opt-out mechanism (by machine-readable means for online content), developers of AI models must obtain authorisation from rights holders.
Under Article 53(1)(c) of the AI Act, providers must establish a copyright policy including state-of-the-art technologies to identify and comply with possible opt-out reservations. Additionally, providers must “draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model”.
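What counts as a machine-readable reservation is itself unsettled. The sketch below shows how a crawler might check a few of the signals currently in circulation: an X-Robots-Tag header, DeviantArt-style “noai”/“noimageai” robots meta tags, and the header proposed by the TDM Reservation Protocol. These particular checks are assumptions chosen for illustration, not a standard the AI Act mandates.

```python
# Minimal sketch of a crawler honouring machine-readable opt-out signals.
# Which signals legally satisfy the AI Act's opt-out requirement is unsettled;
# the three checks below are common proposals, not a mandated standard.
import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives: set[str] = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            content = (a.get("content") or "").lower()
            self.directives |= {d.strip() for d in content.split(",")}

def training_reserved(url: str) -> bool:
    """Return True if the page signals that it is reserved from TDM / AI training."""
    with urllib.request.urlopen(url) as resp:
        headers = {k.lower(): v for k, v in resp.headers.items()}
        body = resp.read(65536).decode("utf-8", errors="replace")

    if headers.get("tdm-reservation") == "1":                  # proposed TDM Reservation Protocol
        return True
    if "noai" in headers.get("x-robots-tag", "").lower():      # header-level directive
        return True
    parser = RobotsMetaParser()
    parser.feed(body)
    return bool({"noai", "noimageai"} & parser.directives)     # DeviantArt-style meta tags
```

A crawler that runs a check like this before ingesting an image is doing roughly what the Act appears to ask of it; the open question is whether rights holders can rely on every crawler doing so.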
However, the practical implementation has proven problematic. As legal scholars note, “you have to have some way to know that your image was or will be actually used in training”. The secretary general of the European Composer and Songwriter Alliance (ECSA) told Euronews that “the work of our members should not be used without transparency, consent, and remuneration, and we see that the implementation of the AI Act does not give us” these protections.
Japan has pursued perhaps the most permissive approach. Article 30-4 of Japan's revised Copyright Act, which came into effect on 1 January 2019, grants broad rights to ingest and use copyrighted works for any type of information analysis, including training AI models, even for commercial use. Collection of copyrighted works as AI training data is permitted without permission of the copyright holder, provided the use doesn't cause unreasonable harm.
The rationale reflects national priorities: AI is seen as a potential solution to a swiftly ageing population, and with no major local Japanese AI providers, the government implemented a flexible AI approach to quickly develop capabilities. However, this has generated increasing pushback from Japan-based content creators, particularly developers of manga and anime.
The United Kingdom is currently navigating between these approaches. On 17 December 2024, the UK Government announced its public consultation on “Copyright and Artificial Intelligence”, proposing an EU-style broad text and data mining exception for any purpose, including commercial, but only where the party has “lawful access” and the rightholder hasn't opted out. A petition signed by more than 37,500 people, including actors and celebrities, condemned the proposals as a “major and unfair threat” to creators' livelihoods.
What emerges from this regional divergence is not a unified governance framework but a fragmented landscape where “the world is splintering”, as one legal analysis put it. AI companies operating globally must navigate different rules in different jurisdictions, and artists have vastly different levels of protection depending on where they and the AI companies are located.
The C2PA and Content Credentials
Whilst licensing frameworks and legal regulations attempt to govern the input side of AI image generation (what goes into training datasets), technical standards are emerging to address the output side: making the origins and history of digital content visible and verifiable.
The Coalition for Content Provenance and Authenticity (C2PA) is a formal coalition dedicated to addressing the prevalence of misleading information online through the development of technical standards for certifying the source and history of media content. It was formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic; collaborators now include the Associated Press, BBC, The New York Times, Reuters, Leica, Nikon, Canon, and Qualcomm.
Content Credentials provide cryptographically secure metadata that captures content provenance from the moment it is created through all subsequent modifications. They function as “a nutrition label for digital content”, containing information about who produced a piece of content, when they produced it, and which tools and editing processes they used. When an action was performed by an AI or machine learning system, it is clearly identified as such.
OpenAI now includes C2PA metadata in images generated with ChatGPT and DALL-E 3. Google collaborated on version 2.1 of the technical standard, which is more secure against tampering attacks. Microsoft Azure OpenAI includes Content Credentials in all AI-generated images.
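To give a sense of what that identification looks like, the sketch below models a Content Credential as a simplified Python dictionary and inspects its recorded actions for an AI digital source type. The field names follow the conventions of C2PA manifest reports and the IPTC digital source vocabulary, but the structure and values are illustrative; a real credential is a cryptographically signed binary manifest, not a plain dictionary.

```python
# Simplified illustration of the information a Content Credential carries.
# A real manifest is a signed binary structure; this dict mirrors a few of its
# reported fields (claim generator, signer, actions) purely for readability.

manifest = {
    "claim_generator": "Example Image Generator 1.0",          # tool that produced or edited the asset
    "signature_info": {"issuer": "Example Signing Authority"},  # who cryptographically signed the claim
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType":
                            "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def is_ai_generated(manifest: dict) -> bool:
    """Check whether any recorded action declares an AI/ML digital source type."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") in AI_SOURCE_TYPES:
                return True
    return False

print(is_ai_generated(manifest))  # True for this illustrative manifest
```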
The security model is robust: faking Content Credentials would require breaking current cryptographic standards, an infeasible task with today's technology. However, metadata can be easily removed either accidentally or intentionally. To address this, C2PA supports durable credentials via soft bindings such as invisible watermarking that can help rediscover the associated Content Credential even if it's removed from the file.
Critically, the core C2PA specification does not support attribution of content to individuals or organisations, so that it can remain maximally privacy-preserving. However, creators can choose to attach attribution information directly to their assets.
For artists concerned about AI training, C2PA offers partial solutions. It can make AI-generated images identifiable, potentially reducing confusion about whether a work was created by a human artist or an AI system. It cannot, however, prevent AI companies from training on human-created images, nor does it provide any mechanism for consent or compensation. It's a transparency tool, not a rights management tool.
Glaze, Nightshade, and the Resistance
Frustrated by the lack of effective governance frameworks, some artists have turned to defensive technologies that attempt to protect their work at the technical level.
Glaze and Nightshade, developed by researchers at the University of Chicago, represent two complementary approaches. Glaze is a defensive tool that individual artists can use to protect themselves against style mimicry attacks. It works by making subtle changes to images invisible to the human eye but which cause AI models to misinterpret the artistic style.
Nightshade takes a more aggressive approach: it's a data poisoning tool that artists can use as a group to disrupt models that scrape their images without consent. By introducing carefully crafted perturbations into images, Nightshade causes AI models trained on those images to learn incorrect associations.
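A heavily simplified sketch of the shared idea appears below: optimise a small, bounded perturbation so that a feature extractor reads the image as having a different style. This is not the published Glaze or Nightshade algorithm, which relies on perceptual constraints and careful target selection that this toy loop omits; it is only meant to show how an imperceptible pixel change can shift what a model learns from an image.

```python
# Toy sketch of style cloaking: push an image's features toward a target style
# while keeping the pixel-level change small. Illustrative only; not the actual
# Glaze/Nightshade optimisation.
import torch

def cloak(image: torch.Tensor,
          target_features: torch.Tensor,
          feature_extractor: torch.nn.Module,
          budget: float = 0.03,
          steps: int = 200,
          lr: float = 0.01) -> torch.Tensor:
    """Return image + delta, with delta bounded by an L-infinity budget."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimiser = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimiser.zero_grad()
        feats = feature_extractor((image + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feats, target_features)  # pull toward target style
        loss.backward()
        optimiser.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)   # keep the change imperceptible to humans
    return (image + delta).clamp(0, 1).detach()
```

The `budget` parameter captures the central trade-off: a larger budget disrupts models more reliably but risks visible artefacts in the protected image.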
The adoption statistics are striking. Glaze has been downloaded more than 8.5 million times since its launch in March 2023. Nightshade has been downloaded more than 2.5 million times since January 2024. Glaze has been integrated into Cara, a popular art platform, allowing artists to embed protection in their work when they upload images.
Shawn Shan, the lead developer, was named MIT Technology Review Innovator of the Year for 2024, reflecting the significance the artistic community places on tools that offer some degree of protection in the absence of effective legal frameworks.
Yet defensive technologies face inherent limitations. They require artists to proactively protect their work before posting it online, placing the burden of protection on individual creators rather than on AI companies. They're engaged in an arms race: as defensive techniques evolve, AI companies can develop countermeasures. And they do nothing to address the billions of images already scraped and incorporated into existing training datasets. Glaze and Nightshade are symptoms of a governance failure, tactical responses to a strategic problem that requires institutional solutions.
Spawning and Have I Been Trained
Between defensive technologies and legal frameworks sits another approach: opt-out infrastructure that attempts to create a consent layer for AI training.
Spawning AI created Have I Been Trained, a website that allows creators to opt out of the training dataset for art-generating AI models like Stable Diffusion. The website searches the LAION-5B training dataset, a library of 5.85 billion images used to feed Stable Diffusion and Google's Imagen.
Since launching opt-outs in December 2022, Spawning has helped thousands of individual artists and organisations remove 78 million artworks from AI training. By late April, that figure had exceeded 1 billion. Spawning partnered with ArtStation to ensure opt-out requests made on their site are honoured, and partnered with Shutterstock to opt out all images posted to their platforms by default.
Critically, Stability AI promised to respect opt-outs in Spawning's Do Not Train Registry for training of Stable Diffusion 3. This represents a voluntary commitment rather than a legal requirement, but it demonstrates that opt-out infrastructure can work when AI companies choose to participate.
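On the AI company's side the mechanics are straightforward, which is partly why voluntary compliance is plausible. The sketch below shows a hypothetical pipeline step that filters a crawl list against a do-not-train registry before anything is downloaded; the registry client is a stand-in, and Spawning's actual API endpoints and response formats may differ.

```python
# Hypothetical sketch of filtering a crawl list against a do-not-train registry.
# The registry client is a stand-in; Spawning's real API may differ.
from typing import Iterable

class DoNotTrainRegistry:
    def __init__(self, opted_out_urls: set[str]):
        # In practice this set would be fetched or queried from the registry service.
        self._opted_out = opted_out_urls

    def is_opted_out(self, url: str) -> bool:
        return url in self._opted_out

def filter_training_urls(candidates: Iterable[str],
                         registry: DoNotTrainRegistry) -> list[str]:
    """Keep only URLs whose creators have not registered an opt-out."""
    return [url for url in candidates if not registry.is_opted_out(url)]

registry = DoNotTrainRegistry({"https://example.org/portfolio/piece-01.jpg"})
print(filter_training_urls(
    ["https://example.org/portfolio/piece-01.jpg",
     "https://example.org/blog/header.png"],
    registry,
))  # only the non-opted-out URL survives
```

The simplicity cuts both ways: the filter is cheap to run, but nothing happens unless the company building the dataset chooses to run it.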
However, the opt-out model faces fundamental problems: it places the burden on artists to discover their work is being used and to actively request removal. It works retrospectively rather than prospectively. And it only functions if AI companies voluntarily respect opt-out requests.
The infrastructure challenge is enormous. An artist must somehow discover that their work appears in a training dataset, navigate to the opt-out system, verify their ownership, submit the request, and hope that AI companies honour it. For the millions of artists whose work appears in LAION-5B, this represents an impossible administrative burden. The default should arguably be opt-in rather than opt-out: work should only be included in training datasets with explicit artist permission.
The Platform Migration Crisis
Whilst lawyers debate frameworks and technologists build tools, a more immediate crisis has been unfolding: artist communities are fracturing across platform boundaries in response to AI policies.
The most dramatic migration occurred in early June 2024, when Meta announced that starting 26 June 2024, photos, art, posts, and even post captions on Facebook and Instagram would be used to train Meta's AI chatbots. The company offered no opt-out mechanism for users in the United States. The reaction was immediate and severe.
Cara, an explicitly anti-AI art platform founded by Singaporean photographer Jingna Zhang, became the primary destination for the exodus. In around seven days, Cara went from having 40,000 users to 700,000, eventually reaching close to 800,000 users at its peak. In the first days of June 2024, the Cara app recorded approximately 314,000 downloads across the Apple App Store and Google Play Store, compared to 49,410 downloads in May 2024. The surge landed Cara in the Top 5 of Apple's US App Store.
Cara explicitly bans AI-generated images and uses detection technology from AI company Hive to identify and remove rule-breakers. Each uploaded image is tagged with a “NoAI” label to discourage scraping. The platform integrates Glaze, allowing artists to automatically protect their work when uploading. This combination of policy (banning AI art), technical protection (Glaze integration), and community values (explicitly supporting human artists) created a platform aligned with artist concerns in ways Instagram was not.
The infrastructure challenges were severe. Server costs jumped from £2,000 to £13,500 in a week. The platform is run entirely by volunteers who pay for the platform to keep running out of their own pockets. This highlights a critical tension in platform migration: the platforms most aligned with artist values often lack the resources and infrastructure of the corporate platforms artists are fleeing.
DeviantArt faced a similar exodus following its launch of DreamUp, an artificial intelligence image-generation tool based on Stable Diffusion, in November 2022. The release led to DeviantArt's inclusion in the copyright infringement lawsuit alongside Stability AI and Midjourney. Artist frustrations include “AI art everywhere, low activity unless you're amongst the lucky few with thousands of followers, and paid memberships required just to properly protect your work”.
ArtStation, owned by Epic Games, took a different approach. The platform allows users to tag their projects with “NoAI” if they would like their content to be prohibited from use in datasets utilised by generative AI programs. This tag is not applied by default; users must actively designate their projects. This opt-out approach has been more acceptable to many artists than platforms that offer no protection mechanisms at all, though it still places the burden on individual creators.
Traffic data from November 2024 shows DeviantArt.com had more total visits than ArtStation.com, with DeviantArt holding a global rank of #258 whilst ArtStation ranks #2,902. Most professional artists maintain accounts on multiple platforms, with the general recommendation being to focus on ArtStation for professional work whilst staying on DeviantArt for discussions and relationships.
This platform fragmentation reveals how AI policies are fundamentally reshaping the geography of creative communities. Rather than a unified ecosystem, artists now navigate a fractured landscape where different platforms offer different levels of protection, serve different community norms, and align with different values around AI. The migration isn't simply about features or user experience; it's about alignment on fundamental questions of consent, compensation, and the role of human creativity in an age of generative AI.
The broader creator economy shows similar tensions. In December 2024, more than 500 people in the entertainment industry signed a letter launching the Creators Coalition on AI, an organisation addressing AI concerns across creative fields. Signatories included Natalie Portman, Cate Blanchett, Ben Affleck, Guillermo del Toro, Aaron Sorkin, Ava DuVernay, and Taika Waititi, along with members of the Directors Guild of America, SAG-AFTRA, the Writers Guild of America, the Producers Guild of America, and IATSE. The coalition's work is guided by four core pillars: transparency, consent and compensation for content and data; job protection and transition plans; guardrails against misuse and deep fakes; and safeguarding humanity in the creative process.
This coalition represents an attempt to organise creator power across platforms and industries, recognising that individual artists have limited leverage whilst platform-level organisation can shift policy. The Make It Fair campaign, launched by the UK's creative industries on 25 February 2025, similarly calls on the UK government to support artists and enforce copyright laws through a responsible AI approach.
Can Creative Economies Survive?
The platform migration crisis connects directly to the broader question of market sustainability. If AI-generated images can be produced at near-zero marginal cost, what happens to the market for human-created art?
CISAC projections suggest that by 2028, generative AI outputs in music could approach £17 billion annually, a sizeable share of a global music market Goldman Sachs valued at £105 billion in 2024. With up to 24 per cent of music creators' revenues at risk of being diluted due to AI developments by 2028, the music industry faces a pivotal moment. Visual arts markets face similar pressures.
Creative workers around the world have spoken up about the harms of generative AI on their work, mentioning issues such as damage to their professional reputation, economic losses, plagiarism, copyright issues, and an overall decrease in creative jobs. The economic argument from AI proponents is that generative AI will expand the total market for visual content, creating opportunities even as it disrupts existing business models. The counter-argument from artists is that AI fundamentally devalues human creativity by flooding markets with low-cost alternatives, making it impossible for human artists to compete on price.
Getty Images says it has compensated hundreds of thousands of contributors, with “anticipated payments to millions more for the role their content IP has played in training generative technology”. This suggests one path towards market sustainability: embedding artist compensation directly into AI business models. But this only works if AI companies choose to adopt such models or are legally required to do so.
Market sustainability also depends on maintaining the quality and diversity of human-created art. If the most talented artists abandon creative careers because they can't compete economically with AI, the cultural ecosystem degrades. This creates a potential feedback loop: AI models trained predominantly on AI-generated content rather than human-created works may produce increasingly homogenised outputs, reducing the diversity and innovation that makes creative markets valuable.
Some suggest this concern is overblown, pointing to the continued market for artisanal goods in an age of mass manufacturing, or the survival of live music in an age of recorded sound. Human-created art, this argument goes, will retain value precisely because of its human origin, becoming a premium product in a market flooded with AI-generated content. But this presumes consumers can distinguish human from AI art (which C2PA aims to enable) and that they value that distinction enough to pay premium prices.
What Would Functional Governance Look Like?
More than two years into the generative AI crisis, no comprehensive governance framework has emerged that successfully reconciles AI's creative potential with artists' rights and market sustainability. What exists instead is a patchwork of partial solutions, experimental models, and fragmented regional approaches. But the outlines of what functional governance might look like are becoming clearer.
First, consent mechanisms must shift from opt-out to opt-in as the default. The burden should be on AI companies to obtain permission to use works in training data, not on artists to discover and prevent such use. This reverses the current presumption where anything accessible online is treated as fair game for AI training.
Second, compensation frameworks need to move beyond one-time payments towards revenue-sharing models that scale with the commercial success of AI tools. Getty Images' model demonstrates this is possible within a closed ecosystem. STIM's collective licensing framework shows how it might scale across an industry. But extending these models to cover the full scope of AI training requires either voluntary industry adoption or regulatory mandates that make licensing compulsory.
Third, transparency about training data must become a baseline requirement, not a voluntary disclosure. The EU AI Act's requirement that providers “draw up and make publicly available a sufficiently detailed summary about the content used for training” points in this direction. Artists cannot exercise rights they don't know they have, and markets cannot function when the inputs to AI systems are opaque.
Fourth, attribution and provenance standards like C2PA need widespread adoption to maintain the distinction between human-created and AI-generated content. This serves both consumer protection goals (knowing what you're looking at) and market sustainability goals (allowing human creators to differentiate their work). But adoption must extend beyond a few tech companies to become an industry-wide standard, ideally enforced through regulation.
Fifth, collective rights management infrastructure needs to be built for visual arts, analogous to performance rights organisations in music. Individual artists cannot negotiate effectively with AI companies, and the transaction costs of millions of individual licensing agreements are prohibitive. Collective licensing scales, but it requires institutional infrastructure that currently doesn't exist for most visual arts.
Sixth, platform governance needs to evolve beyond individual platform policies towards industry-wide standards. The current fragmentation, where artists must navigate different policies on different platforms, imposes enormous costs and drives community fracturing. Industry standards or regulatory frameworks that establish baseline protections across platforms would reduce this friction.
Finally, enforcement mechanisms are critical. Voluntary frameworks only work if AI companies choose to participate. The history of internet governance suggests that without enforcement, economic incentives will drive companies towards the least restrictive jurisdictions and practices. This argues for regulatory approaches with meaningful penalties for violations, combined with technical enforcement tools like C2PA that make violations detectable.
None of these elements alone is sufficient. Consent without compensation leaves artists with rights but no income. Compensation without transparency makes verification impossible. Transparency without collective management creates unmanageable transaction costs. But together, they sketch a governance framework that could reconcile competing interests: enabling AI development whilst protecting artist rights and maintaining market sustainability.
The evidence so far suggests that market forces alone will not produce adequate protections. AI companies have strong incentives to train on the largest possible datasets with minimal restrictions, whilst individual artists have limited leverage to enforce their rights. Platform migration shows that artists will vote with their feet when platforms ignore their concerns, but migration to smaller platforms with limited resources isn't a sustainable solution.
The regional divergence between the U.S., EU, and Japan reflects different political economies and different assumptions about the appropriate balance between innovation and rights protection. In a globalised technology market, this divergence creates regulatory arbitrage opportunities that undermine any single jurisdiction's governance attempts.
The litigation underway in the U.S., particularly the Andersen v. Stability AI case, may force legal clarity that voluntary frameworks have failed to provide. If courts find that training AI models on copyrighted works without permission constitutes infringement, licensing becomes legally necessary rather than optional. This could catalyse the development of collective licensing infrastructure and compensation frameworks. But if courts find that such use constitutes fair use, the legal foundation for artist rights collapses, leaving only voluntary industry commitments and platform-level policies.
The governance question posed at the beginning remains open: can AI image generation's creative potential be reconciled with artists' rights and market sustainability? The answer emerging from two years of crisis is provisional: yes, but only if we build institutional frameworks that don't currently exist, establish legal clarity that courts have not yet provided, and demonstrate political will that governments have been reluctant to show. The experimental models, technical standards, and platform migrations documented here are early moves in a governance game whose rules are still being written. What they reveal is that reconciliation is possible, but far from inevitable. The question is whether we'll build the frameworks necessary to achieve it before the damage to creative communities and markets becomes irreversible.
References & Sources
- CIO Playbook: Adobe Generative AI (Firefly) Licensing Models
- Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond
- Artists' Rights in the Age of Generative AI
- Protecting artists' rights: what responsible AI means for the creative industries
- Meta's New AI Policy Sparks Exodus of Artists
- Creators Coalition on AI
- Hollywood Insiders Unite to Fight for Future of Industry With Launch of Creators Coalition on AI
- C2PA in ChatGPT Images
- How Google and the C2PA are increasing transparency for gen AI content
- The AI lab waging a guerrilla war over exploitative AI
- Nightshade: Protecting Copyright
- Glaze – Protecting Artists from Generative AI
- Spawning lays out plans for letting creators opt out of generative AI training
- Have I Been Trained?
- Spawning opts out 78 million artworks from AI training
- Moving Pictures: NVIDIA, Getty Images Collaborate on Generative AI
- Getty Images promises its new AI doesn't contain copyrighted art
- STIM launches world's first collective AI music licence
- Sweden's STIM has launched the 'world's first AI licence for music'
- GEMA proposes licensing model for AI-generated music
- EU AI Act's Opt-Out Trend May Limit Data Use for Training AI Models
- How to 'opt-out' your images from use in AI training
- Training AI models: UK Government proposes EU style “opt out” copyright exception
- Japan's New Draft Guidelines on AI and Copyright
- Artists Land a Win in Class Action Lawsuit Against A.I. Companies
- What is Cara, the Instagram alternative that gained 600k users in a week?
- Why Artists are Fleeing Instagram for AI-Skeptical App Cara

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk