The Artist Rebellion: Glaze, Lawsuits, and the Fight for Creative Control

It started not with lawyers or legislators, but with a simple question: has my work been trained on? In late 2022, when artists began discovering that their distinctive styles could be replicated with a few text prompts, the realisation hit like a freight train. Years of painstaking craft, condensed into algorithmic shortcuts. Livelihoods threatened by systems trained on their own creative output, without permission, without compensation, without even a courtesy notification.

What followed wasn't resignation. It was mobilisation.

Today, visual artists are mounting one of the most significant challenges to the AI industry's data practices, deploying an arsenal of technical tools, legal strategies, and market mechanisms that are reshaping how we think about creative ownership in the age of generative models. From data poisoning techniques that corrupt training datasets to blockchain provenance registries that track artwork usage, from class-action lawsuits against billion-dollar AI companies to voluntary licensing marketplaces, the fight is being waged on multiple fronts simultaneously.

The stakes couldn't be higher. AI image generators trained on datasets containing billions of scraped images have fundamentally disrupted visual art markets. Systems like Stable Diffusion, Midjourney, and DALL-E can produce convincing artwork in seconds, often explicitly mimicking the styles of living artists. Christie's controversial “Augmented Intelligence” auction in February 2025, the first major AI art sale at a prestigious auction house, drew over 6,500 signatures on a petition demanding its cancellation. Meanwhile, more than 400 Hollywood insiders published an open letter pushing back against Google and OpenAI's recommendations for copyright exceptions that would facilitate AI training on creative works.

At the heart of the conflict lies a simple injustice: AI models are typically trained on vast datasets scraped from the internet, pulling in copyrighted material without the consent of original creators. The LAION-5B dataset, which contains 5.85 billion image-text pairs and served as the foundation for Stable Diffusion, became a flashpoint. Artists discovered their life's work embedded in these training sets, essentially teaching machines to replicate their distinctive styles and compete with them in the marketplace.

But unlike previous technological disruptions, this time artists aren't simply protesting. They're building defences.

The Technical Arsenal

When Ben Zhao, a professor of computer science at the University of Chicago, watched artists struggling against AI companies using their work without permission, he decided to fight fire with fire. His team's response was Glaze, a defensive tool that adds imperceptible perturbations to images, essentially cloaking them from AI training algorithms.

The concept is deceptively simple yet technically sophisticated. Glaze makes subtle pixel-level changes that are barely noticeable to human eyes but dramatically confuse machine learning models. Where a human viewer sees an artwork essentially unchanged, an AI model might perceive something entirely different. The example Zhao's team uses is striking: whilst human eyes see a shaded image of a cow in a green field as largely unchanged, an AI model trained on that image might instead perceive a large leather purse lying in the grass.
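The underlying idea can be sketched in a few lines of code. The toy example below is emphatically not the Glaze algorithm: it stands in a random linear map for the real image encoders Glaze optimises against, and simply nudges an image so that its features drift towards a different target whilst every pixel stays within a small budget of the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative model's feature extractor. Real cloaking
# tools optimise perturbations against actual learned image encoders.
W = rng.normal(size=(64, 3 * 32 * 32))

def features(img):
    return W @ img.ravel()

def cloak(img, target_img, budget=0.03, steps=200, lr=1e-4):
    """Find a small perturbation pulling img's features towards target_img's,
    whilst keeping every pixel within `budget` of the original so that a
    human viewer sees essentially the same picture."""
    delta = np.zeros_like(img)
    target_feat = features(target_img)
    for _ in range(steps):
        # Gradient of ||features(img + delta) - target_feat||^2 w.r.t. delta.
        diff = features(img + delta) - target_feat
        grad = (W.T @ diff).reshape(img.shape)
        delta = np.clip(delta - lr * grad, -budget, budget)
    return np.clip(img + delta, 0.0, 1.0)

artwork = rng.random((3, 32, 32))   # the artist's image (toy data)
decoy = rng.random((3, 32, 32))     # an unrelated "style" target
cloaked = cloak(artwork, decoy)

print("max pixel change:", np.abs(cloaked - artwork).max())   # stays within the budget
print("feature shift:", np.linalg.norm(features(cloaked) - features(artwork)))
```

The real tools run a comparable optimisation against the feature extractors of actual generative models, which is what makes the perturbations carry over into training pipelines.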

Since launching in March 2023, Glaze has been downloaded more than 7.5 million times, according to 2025 reports. The tool earned recognition as a TIME Best Invention of 2023, won the Chicago Innovation Award, and received the 2023 USENIX Internet Defense Prize. For artists, it represented something rare in the AI age: agency.

But Zhao's team didn't stop at defence. They also built Nightshade, an offensive weapon in the data wars. Whilst Glaze protects individual artists from style mimicry, Nightshade allows artists to collectively disrupt models that scrape their work without consent. By adding specially crafted “poisoned” data to training sets, artists can corrupt AI models, causing them to produce incorrect or nonsensical outputs. Since its release, Nightshade has been downloaded more than 1.6 million times. Shawn Shan, a computer science PhD student who worked on both tools, was named MIT Technology Review Innovator of the Year for 2024.
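Data poisoning operates at the level of the training set rather than the individual artwork. The sketch below is a simplified illustration of that idea, not Nightshade's actual method: it assembles image-caption pairs in which the caption stays truthful to the artist's concept whilst the pixels are blended slightly towards an unrelated decoy, so that a model trained on enough such pairs begins to associate the caption with the wrong visual concept.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_towards(img, target_img, budget=0.05):
    """Toy stand-in for the poisoning optimisation: blend the image slightly
    towards a decoy concept, capped so the change stays hard to notice.
    (The real tool optimises against a model's feature space instead.)"""
    delta = np.clip(target_img - img, -budget, budget)
    return np.clip(img + delta, 0.0, 1.0)

def make_poison_pairs(originals, caption, decoy_img, n_poison):
    """Build (image, caption) pairs whose captions keep the original concept
    but whose pixels have drifted towards an unrelated one."""
    return [(perturb_towards(img, decoy_img), caption) for img in originals[:n_poison]]

# Toy data: images captioned as castles, poisoned towards a handbag decoy.
castles = [rng.random((3, 32, 32)) for _ in range(10)]
handbag = rng.random((3, 32, 32))

poison_set = make_poison_pairs(castles, caption="a painting of a castle",
                               decoy_img=handbag, n_poison=5)
print(len(poison_set), "poisoned pairs ready to be scattered among scraped data")
```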

Yet the arms race continues. By 2025, researchers from the University of Texas at San Antonio, University of Cambridge, and Technical University of Darmstadt had developed LightShed, a method capable of bypassing these protections. In experimental evaluations, LightShed detected Nightshade-protected images with 99.98 per cent accuracy and effectively removed the embedded protections.

The developers of Glaze and Nightshade acknowledged this reality from the beginning. As they stated, “it is always possible for techniques we use today to be overcome by a future algorithm, possibly rendering previously protected art vulnerable.” Like any security measure, these tools engage in an ongoing evolutionary battle rather than offering permanent solutions. Still, Glaze 2.1, released in 2025, includes bug fixes and changes to resist newer attacks.

The broader watermarking landscape has similarly exploded with activity. The first Watermarking Workshop at the International Conference on Learning Representations in 2025 received 61 submissions, of which 51 papers were accepted, a dramatic increase from the fewer than 10 watermarking papers submitted just two years earlier.

Major technology companies have also entered the fray. Google developed SynthID through DeepMind, embedding watermarks directly during image generation. OpenAI supports the Coalition for Content Provenance and Authenticity standard, better known as C2PA, which attaches cryptographically signed metadata to generated images to enable interoperable provenance verification across platforms.
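At its simplest, an invisible watermark is a faint, structured signal mixed into the pixels that a detector holding the right key can later correlate against. The toy spread-spectrum sketch below illustrates that idea; it is not SynthID, whose production design is far more robust to cropping, compression, and editing.

```python
import numpy as np

def watermark_pattern(key, shape):
    """Pseudo-random ±1 pattern derived from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(image, key, strength=0.02):
    """Mix a faint key-derived pattern into the pixels."""
    return np.clip(image + strength * watermark_pattern(key, image.shape), 0.0, 1.0)

def detect(image, key, threshold=0.01):
    """Correlate the image against the key's pattern; watermarked images
    score well above zero, clean images hover near it."""
    pattern = watermark_pattern(key, image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold

original = np.random.default_rng(42).random((3, 128, 128))
marked = embed(original, key=1234)

print(detect(marked, key=1234))    # True: the right key finds the signal
print(detect(original, key=1234))  # False: no correlation in the clean image
```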

However, watermarking faces significant limitations. Competition results demonstrated that top teams could remove up to 96 per cent of watermarks, highlighting serious vulnerabilities. Moreover, as researchers noted, “watermarking could eventually be used by artists to opt out of having their work train AI models, but the technique is currently limited by the amount of data required to work properly. An individual artist's work generally lacks the necessary number of data points.”

The European Parliament's analysis concluded that “watermarking implemented in isolation will not be sufficient. It will have to be accompanied by other measures, such as mandatory processes of documentation and transparency for foundation models, pre-release testing, third-party auditing, and human rights impact assessments.”

The Legal Front

Whilst technologists built digital defences, lawyers prepared for battle. On 13 January 2023, visual artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a landmark class-action lawsuit against Stability AI, Midjourney, and DeviantArt in federal court. The plaintiffs alleged that these companies scraped billions of images from the internet, including their copyrighted works, to train AI platforms without permission or compensation.

Additional artists soon joined, including Hawke Southworth, Grzegorz Rutkowski, Gregory Manchess, Gerald Brom, Jingna Zhang, Julia Kaye, and Adam Ellis. The plaintiffs later amended their complaint to add Runway AI as a defendant.

Then came August 2024, and a watershed moment for artist rights.

US District Judge William Orrick of California ruled that the visual artists could pursue claims that the defendants' image generation systems infringed upon their copyrights. Crucially, Judge Orrick denied Stability AI and Midjourney's motions to dismiss, allowing the case to advance towards discovery, where the inner workings of these AI systems would face unprecedented scrutiny.

In his decision, Judge Orrick found both direct and induced copyright infringement claims plausible. The induced infringement claim against Stability AI proved particularly significant. The plaintiffs argued that by distributing their Stable Diffusion model to other AI providers, Stability AI facilitated the copying of copyrighted material. Judge Orrick noted a damning statement by Stability's CEO, who claimed the company had compressed 100,000 gigabytes of images into a two-gigabyte file that could “recreate” any of those images.

The court also allowed a Lanham Act claim for false endorsement against Midjourney to proceed. Plaintiffs alleged that Midjourney had published their names on a list of artists whose styles its AI product could reproduce and included user-created images incorporating plaintiffs' names on Midjourney's showcase site.

By 2024, the proliferation of generative AI models had spawned well over thirty copyright infringement lawsuits by copyright owners against AI developers. In June 2025, Disney and NBCUniversal escalated the legal warfare, filing a copyright infringement lawsuit against Midjourney, alleging the company used trademarked characters including Elsa, Minions, Darth Vader, and Homer Simpson to train its image model. The involvement of such powerful corporate plaintiffs signalled that artist concerns had gained heavyweight institutional allies.

The legal landscape extended beyond courtroom battles. The Generative AI Copyright Disclosure Act of 2024, introduced in the US Congress on 9 April 2024, proposed requiring companies developing generative AI models to disclose the datasets used to train their systems.

Across the Atlantic, the European Union took a different regulatory approach. The AI Act, which entered into force on 1 August 2024, included specific provisions addressing general purpose AI models. These mandated transparency obligations, particularly regarding technical documentation and content used for training, along with policies to respect EU copyright laws.

Under the AI Act, providers of AI models must comply with the European Union's Copyright Directive (Directive (EU) 2019/790). The Act requires AI service providers to publish summaries of material used for model training. Critically, the AI Act's obligation to respect EU copyright law extends to any operator introducing an AI system into the EU, regardless of which jurisdiction the system was trained in.

However, creative industry groups have expressed concerns that the AI Act doesn't go far enough. In August 2025, fifteen cultural organisations wrote to the European Commission stating: “We firmly believe that authors, performers, and creative workers must have the right to decide whether their works can be used by generative AI, and if they consent, they must be fairly remunerated.” European artists launched a campaign called “Stay True To The Act,” calling on the Commission to ensure AI companies are held accountable.

Market Mechanisms

Whilst lawsuits proceeded through courts and protective tools spread through artist communities, a third front opened: the marketplace itself. If AI companies insisted on training models with creative works, perhaps artists could at least be compensated.

The global market for dataset licensing for AI training reached USD 2.1 billion in 2024, with a projected compound annual growth rate of 22.4 per cent. The market for AI datasets and licensing in academic research and publishing specifically was estimated at USD 381.8 million in 2024 and is projected to reach USD 1.59 billion by 2030, growing at 26.8 per cent annually.

North America leads this market, accounting for approximately USD 900 million in 2024, driven by the region's concentration of leading technology companies. Europe represents the second-largest regional market at USD 650 million in 2024.

New platforms have risen to facilitate these transactions. Companies like Pip Labs and Vermillio have launched AI content-licensing marketplaces that enable creators to monetise their work through paid AI training access. Some major publishers have struck individual deals. HarperCollins forged an agreement with Microsoft to license non-fiction backlist titles for training AI models, offering authors USD 2,500 per book in exchange for a three-year licensing agreement, though many authors criticised the relatively modest compensation.

Perplexity AI's Publishing Programme, launched in July 2024, takes a different approach, offering revenue share based on the number of a publisher's web pages cited in AI-generated responses to user queries.

Yet fundamental questions persist about whether licensing actually serves artists' interests. The power imbalance between individual artists and trillion-dollar technology companies raises doubts about whether genuinely fair negotiations can occur in these marketplaces.

One organisation attempting to shift these dynamics is Fairly Trained, a non-profit that certifies generative AI companies for training data practices that respect creators' rights. Launched on 17 January 2024 by Ed Newton-Rex, a former vice president of audio at Stability AI who resigned over content scraping concerns, Fairly Trained awards its Licensed Model certification to AI operations that have secured licenses for third-party data used to train their models.

The certification is awarded to generative AI models that do not use any copyrighted work without a license. It is withheld from models that rely on a “fair use” copyright exception, since reliance on fair use indicates that rights-holders have not given consent.

Fairly Trained launched with nine generative AI companies already certified: Beatoven.AI, Boomy, BRIA AI, Endel, LifeScore, Rightsify, Somms.ai, Soundful, and Tuney. By 2025, Fairly Trained had expanded its certification to include large language models and voice AI. Industry support came from the Association of American Publishers, Association of Independent Music Publishers, Concord, Pro Sound Effects, Universal Music Group, and the Authors Guild.

Newton-Rex explained the philosophy: “Fairly Trained AI certification is focused on consent from training data providers because we believe related improvements for rights-holders flow from consent: fair compensation, credit for inclusion in datasets, and more.”

The Artists Rights Society proposed a complementary approach: voluntary collective licensing wherein copyright owners affirmatively consent to the use of their copyrighted work. This model, similar to how performing rights organisations like ASCAP and BMI handle music licensing, could provide a streamlined mechanism for AI companies to obtain necessary permissions whilst ensuring artists receive compensation.

Provenance Registries and Blockchain

Beyond immediate protections and licensing, artists have embraced technologies that establish permanent, verifiable records of ownership and creation history. Blockchain-based provenance registries represent an attempt to create immutable documentation that survives across platforms.

Since the first NFT was minted in 2014, digital artists and collectors have praised blockchain technology for its usefulness in tracking provenance. The blockchain serves as an immutable digital ledger that records transactions without the aid of galleries or other centralised institutions.

“Minting” a piece of digital art on blockchain documents the date an artwork is made, stores on-chain metadata descriptions, and links to the crypto wallets of both artist and buyer, thus tracking sales history across future transactions. Christie's partnered with Artory, a blockchain-powered fine art registry, which managed registration processes for artworks. Platforms like The Fine Art Ledger use blockchain and NFTs to securely store ownership and authenticity records whilst producing digital certificates of authenticity.

For artists concerned about AI training, blockchain registries offer several advantages. First, they establish definitive proof of creation date and original authorship, critical evidence in potential copyright disputes. Second, they create verifiable records of usage permissions. Third, smart contracts can encode automatic royalty payments, ensuring artists receive compensation whenever their work changes hands or is licensed.

Because the rule can be written into the code of a smart contract, artists can secure a resale right of 10 per cent that is paid automatically every time the work changes hands. This programmable aspect gives artists an ongoing economic interest in their work's circulation, a dramatic shift from traditional art markets, where artists typically profit only from the initial sale.
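The logic of such a clause is simple to express. The sketch below simulates it in ordinary Python rather than an on-chain contract language, showing the rule a royalty-bearing smart contract would encode: on every resale, a fixed share of the price is routed to the original artist before the seller is paid.

```python
from dataclasses import dataclass, field

ROYALTY_RATE = 0.10   # the 10 per cent resale right described above

@dataclass
class ArtworkToken:
    artist: str
    owner: str
    sale_history: list = field(default_factory=list)

    def transfer(self, buyer: str, price: float) -> dict:
        """Settle a resale: the artist's royalty comes off the top, the
        current owner receives the remainder, and ownership moves on."""
        royalty = round(price * ROYALTY_RATE, 2)
        payout = {
            "artist": (self.artist, royalty),
            "seller": (self.owner, round(price - royalty, 2)),
        }
        self.sale_history.append((self.owner, buyer, price))
        self.owner = buyer
        return payout

token = ArtworkToken(artist="A. Painter", owner="A. Painter")
print(token.transfer("First Collector", 1_000.00))    # initial sale
print(token.transfer("Second Collector", 5_000.00))   # resale still pays the artist
```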

However, blockchain provenance systems face significant challenges. The ownership of an NFT as defined by the blockchain has no inherent legal meaning and does not necessarily grant copyright, intellectual property rights, or other legal rights over its associated digital file.

Legal frameworks are slowly catching up. The March 2024 joint report by the US Copyright Office and Patent and Trademark Office on NFTs and intellectual property took a comprehensive look at how copyright, trademark, and patent laws intersect with NFTs. The report did not recommend new legislation, finding that existing IP law is generally capable of handling NFT disputes.

Illegal minting has become a major issue, with works tokenised without their creators' consent. Piracy losses in the NFT industry are estimated at between USD 1 billion and USD 2 billion per year. As of 2025, no NFT-specific federal legislation exists in the US, though general laws can be invoked.

Beyond blockchain, more centralised provenance systems have emerged. Adobe's Content Credentials, based on the C2PA standard, provides cryptographically signed metadata that travels with images across platforms. The system allows creators to attach information about authorship, creation tools, editing history, and critically, their preferences regarding AI training.

Adobe Content Authenticity, released as a public beta in Q1 2025, enables creators to include generative AI training and usage preferences in their Content Credentials. This preference lets creators request that supporting generative AI models not train on or use their work. Content Credentials are available in Adobe Photoshop, Lightroom, Stock, and Premiere Pro.

The “Do Not Train” preference is currently supported by Adobe Firefly and Spawning, though whether other developers will respect these credentials remains uncertain. However, the preference setting makes it explicit that the creator did not want their work used to train AI models, information that could prove valuable in future lawsuits or regulatory enforcement actions.
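To make the preference concrete, the sketch below writes a minimal, unsigned sidecar manifest recording authorship and an AI-training preference. The field names are illustrative approximations of the Content Credentials idea rather than the actual C2PA schema, and a real workflow would embed and cryptographically sign this information with C2PA tooling instead of a loose JSON file.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_training_preference(image_path, creator, allow_ai_training=False):
    """Write a sidecar manifest recording authorship and an AI-training
    preference. Field names approximate the C2PA concept; production
    workflows would use signed Content Credentials instead."""
    data = Path(image_path).read_bytes()
    manifest = {
        "asset": image_path,
        "sha256": hashlib.sha256(data).hexdigest(),   # ties the claim to these exact pixels
        "creator": creator,
        "created": datetime.now(timezone.utc).isoformat(),
        "ai_training_preference": "allowed" if allow_ai_training else "not_allowed",
    }
    sidecar = Path(f"{image_path}.provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

# Example: mark a finished illustration as off-limits for model training.
# write_training_preference("final_cover_art.png", creator="A. Painter")
```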

What's Actually Working

With technical tools, legal strategies, licensing marketplaces, and provenance systems all in play, a critical question emerges: what's actually effective?

The answer is frustratingly complex. No single mechanism has proven sufficient, but combinations show promise, and the mere existence of multiple defensive options has shifted AI companies' behaviour.

On the technical front, Glaze and Nightshade have achieved the most widespread adoption among protection tools, with combined downloads exceeding nine million. Whilst researchers demonstrated vulnerabilities, the tools have forced AI companies to acknowledge artist concerns and, in some cases, adjust practices. The computational cost of bypassing these protections at scale creates friction that matters.

Watermarking faces steeper challenges. The ability of adversarial attacks to remove 96 per cent of watermarks in competition settings demonstrates fundamental weaknesses. Industry observers increasingly view watermarking as one component of multi-layered approaches rather than a standalone solution.

Legally, the August 2024 Andersen ruling represents the most significant victory to date. Allowing copyright infringement claims to proceed towards discovery forces AI companies to disclose training practices, creating transparency that didn't previously exist. The involvement of major corporate plaintiffs like Disney and NBCUniversal in subsequent cases amplifies pressure on AI companies.

Regulatory developments, particularly the EU AI Act, create baseline transparency requirements that didn't exist before. The obligation to disclose training data summaries and respect copyright reservations establishes minimum standards, though enforcement mechanisms remain to be tested.

Licensing marketplaces present mixed results. Established publishers have extracted meaningful payments from AI companies, but individual artists often receive modest compensation. The HarperCollins deal's USD 2,500-per-book payment exemplifies this imbalance.

Fairly Trained certification offers a market-based alternative that shows early promise. By creating reputational incentives for ethical data practices, the certification enables consumers and businesses to support AI systems that respect creator rights. The expanding roster of certified companies demonstrates market demand for ethically trained models.

Provenance systems like blockchain registries and Content Credentials establish valuable documentation but depend on voluntary respect by AI developers. Their greatest value may prove evidentiary, providing clear records of ownership and permissions that strengthen legal cases rather than preventing unauthorised use directly.

The most effective approach emerging from early battles combines multiple mechanisms simultaneously: technical protections like Glaze to raise the cost of unauthorised use, legal pressure through class actions to force transparency, market alternatives through licensing platforms to enable consent-based uses, and provenance systems to document ownership and preferences. This defence-in-depth strategy mirrors cybersecurity principles, where layered defences significantly raise attacker costs and reduce success rates.

Why Independent Artists Struggle to Adopt Protections

Despite the availability of protection mechanisms, independent artists face substantial barriers to adoption.

The most obvious barrier is cost. Whilst some tools like Glaze and Nightshade are free, they require significant computational resources to process images. Artists with large portfolios face substantial electricity costs and processing time. More sophisticated protection services, licensing platforms, and legal consultations carry fees that many independent artists cannot afford.

Technical complexity presents another hurdle. Tools like Glaze require some understanding of how machine learning works. Blockchain platforms demand familiarity with cryptocurrency wallets, gas fees, and smart contracts. Content Credentials require knowledge of metadata standards and platform support. Many artists simply want to create and share their work, not become technologists.

Time investment compounds these challenges. An artist with thousands of existing images across multiple platforms faces an overwhelming task to retroactively protect their catalogue. Processing times for tools like Glaze can be substantial, turning protection into a full-time job when applied to extensive portfolios.

Platform fragmentation creates additional friction. An artist might post work to Instagram, DeviantArt, ArtStation, personal websites, and client platforms. Each has different capabilities for preserving protective measures. Metadata might be stripped during upload. Blockchain certificates might not display properly. Technical protections might degrade through platform compression.

Uncertainty about effectiveness further dampens adoption. Artists read about researchers bypassing Glaze, competitions removing watermarks, and AI companies scraping despite “Do Not Train” flags. When protections can be circumvented, the effort of applying them starts to look questionable.

Legal uncertainty compounds technical doubts. Even with protections applied, artists lack clarity about their legal rights. Will courts uphold copyright claims against AI training? Does fair use protect AI companies? These unanswered questions make it difficult to assess whether protective measures truly reduce risk.

The collective action problem presents perhaps the most fundamental barrier. Individual artists protecting their work provides minimal benefit if millions of other works remain available for scraping. Like herd immunity in epidemiology, effective resistance to unauthorised AI training requires widespread adoption. But individual artists lack incentives to be first movers, especially given the costs and uncertainties involved.

Social and economic precarity intensifies these challenges. Many visual artists work in financially unstable conditions, juggling multiple income streams whilst trying to maintain creative practices. Adding complex technological and legal tasks to already overwhelming workloads proves impractical for many. The artists most vulnerable to AI displacement often have the least capacity to deploy sophisticated protections.

Information asymmetry creates an additional obstacle. AI companies possess vast technical expertise, legal teams, and resources to navigate complex technological and regulatory landscapes. Individual artists typically lack this knowledge base, creating substantial disadvantages.

These barriers fundamentally determine which artists can effectively resist unauthorised AI training and which remain vulnerable. The protection mechanisms available today primarily serve artists with sufficient technical knowledge, financial resources, time availability, and social capital to navigate complex systems.

Incentivising Provenance-Aware Practices

If the barriers to adoption are substantial, how might platforms and collectors incentivise provenance-aware practices that benefit artists?

Platforms hold enormous power to shift norms and practices. They could implement default protections, applying tools like Glaze automatically to uploaded artwork unless artists opt out, inverting the current burden. They could preserve metadata and Content Credentials rather than stripping them during upload processing. They could create prominent badging systems that highlight provenance-verified works, giving them greater visibility in recommendation algorithms.

Economic incentives could flow through platform choices. Verified provenance could unlock premium features, higher placement in search results, or access to exclusive opportunities. Platforms could create marketplace advantages for artists who adopt protective measures, making verification economically rational.

Legal commitments by platforms would strengthen protections substantially. Platforms could contractually commit not to license user-uploaded content for AI training without explicit opt-in consent. They could implement robust takedown procedures for AI-generated works that infringe verified provenance records.

Technical infrastructure investments by platforms could dramatically reduce artist burdens. Computing costs for applying protections could be subsidised or absorbed entirely. Bulk processing tools could protect entire portfolios with single clicks. Cross-platform synchronisation could ensure protections apply consistently.

Educational initiatives could address knowledge gaps. Platforms could provide clear, accessible tutorials on using protective tools, understanding legal rights, and navigating licensing options.

Collectors and galleries can likewise incentivise provenance practices. Premium pricing for provenance-verified works signals market value for documented authenticity and ethical practices. Collectors building reputations around ethically sourced collections create demand-side pull for proper documentation. Galleries could require provenance verification as a condition of representation.

Resale royalty enforcement through smart contracts gives artists ongoing economic interests in their work's circulation. Collectors who voluntarily honour these arrangements, even when not legally required, demonstrate commitment to sustainable creative economies.

Provenance-focused exhibitions and collections create cultural cachet around verified works. When major museums and galleries highlight blockchain-verified provenance or Content Credentials in their materials, they signal that professional legitimacy increasingly requires robust documentation.

Philanthropic and institutional support could subsidise protection costs for artists who cannot afford them. Foundations could fund free access to premium protective services. Arts organisations could provide technical assistance. Grant programmes could explicitly reward provenance-aware practices.

Industry standards and collective action amplify individual efforts. Professional associations could establish best practices that members commit to upholding. Cross-platform alliances could create unified approaches to metadata preservation and “Do Not Train” flags, reducing fragmentation. Collective licensing organisations could streamline permissions whilst ensuring compensation.

Government regulation could mandate certain practices. Requirements that platforms preserve metadata and Content Credentials would eliminate current stripping practices. Opt-in requirements for AI training, as emerging in EU regulation, shift default assumptions about consent. Disclosure requirements for training datasets enable artists to discover unauthorised use.

The most promising approaches combine multiple incentive types simultaneously. A platform that implements default protections, preserves metadata, provides economic advantages for verified works, subsidises computational costs, offers accessible education, and commits contractually to respecting artist preferences creates a comprehensively supportive environment.

Similarly, an art market ecosystem where collectors pay premiums for verified provenance, galleries require documentation for representation, museums highlight ethical sourcing, foundations subsidise protection costs, professional associations establish standards, and regulations mandate baseline practices would make provenance-aware approaches the norm rather than the exception.

An Unsettled Future

The battle over AI training on visual art remains fundamentally unresolved. Legal cases continue through courts without final judgments. Technical tools evolve in ongoing arms races with circumvention methods. Regulatory frameworks take shape but face implementation challenges. Market mechanisms develop but struggle with power imbalances.

What has changed is the end of the initial free-for-all period when AI companies could scrape with impunity, face no organised resistance, and operate without transparency requirements. Artists mobilised, built tools, filed lawsuits, demanded regulations, and created alternative economic models. The costs of unauthorised use, both legal and reputational, increased substantially.

The effectiveness of current mechanisms remains limited when deployed individually, but combinations show promise. The mere existence of resistance shifted some AI company behaviour, with certain developers now seeking licenses, supporting provenance standards, or training only on permissioned datasets. Fairly Trained's growing roster demonstrates market demand for ethically sourced AI.

Yet fundamental challenges persist. Power asymmetries between artists and technology companies remain vast. Technical protections face circumvention. Legal frameworks develop slowly whilst technology advances rapidly. Economic models struggle to provide fair compensation at scale. Independent artists face barriers that exclude many from available protections.

The path forward likely involves continued evolution across all fronts. Technical tools will improve whilst facing new attacks. Legal precedents will gradually clarify applicable standards. Regulations will impose transparency and consent requirements. Markets will develop more sophisticated licensing and compensation mechanisms. Provenance systems will become more widely adopted as cultural norms shift.

But none of this is inevitable. It requires sustained pressure from artists, support from platforms and collectors, sympathetic legal interpretations, effective regulation, and continued technical innovation. The mobilisation that began in 2022 must persist and adapt.

What's certain is that visual artists are no longer passive victims of technological change. They're fighting back with ingenuity, determination, and an expanding toolkit. Whether that proves sufficient to protect creative livelihoods and ensure fair compensation remains to be seen. But the battle lines are drawn, the mechanisms are deployed, and the outcome will shape not just visual art, but how we conceive of creative ownership in the algorithmic age.

The question posed at the beginning was simple: has my work been trained on? The response from artists is now equally clear: not without a fight.


References and Sources

Artists Rights Society. (2024-2025). AI Updates. https://arsny.com/ai-updates/

Artnet News. (2024). 4 Ways A.I. Impacted the Art Industry in 2024. https://news.artnet.com/art-world/a-i-art-industry-2024-2591678

Arts Law Centre of Australia. (2024). Glaze and Nightshade: How artists are taking arms against AI scraping. https://www.artslaw.com.au/glaze-and-nightshade-how-artists-are-taking-arms-against-ai-scraping/

Authors Guild. (2024). Authors Guild Supports New Fairly Trained Licensing Model to Ensure Consent in Generative AI Training. https://authorsguild.org/news/ag-supports-fairly-trained-ai-licensing-model/

Brookings Institution. (2024). AI and the visual arts: The case for copyright protection. https://www.brookings.edu/articles/ai-and-the-visual-arts-the-case-for-copyright-protection/

Bruegel. (2025). The European Union is still caught in an AI copyright bind. https://www.bruegel.org/analysis/european-union-still-caught-ai-copyright-bind

Center for Art Law. (2024). AI and Artists' IP: Exploring Copyright Infringement Allegations in Andersen v. Stability AI Ltd. https://itsartlaw.org/art-law/artificial-intelligence-and-artists-intellectual-property-unpacking-copyright-infringement-allegations-in-andersen-v-stability-ai-ltd/

Copyright Alliance. (2024). AI Lawsuit Developments in 2024: A Year in Review. https://copyrightalliance.org/ai-lawsuit-developments-2024-review/

Digital Content Next. (2025). AI content licensing lessons from Factiva and TIME. https://digitalcontentnext.org/blog/2025/03/06/ai-content-licensing-lessons-from-factiva-and-time/

Euronews. (2025). EU AI Act doesn't do enough to protect artists' copyright, groups say. https://www.euronews.com/next/2025/08/02/eus-ai-act-doesnt-do-enough-to-protect-artists-copyright-creative-groups-say

European Copyright Society. (2025). Copyright and Generative AI: Opinion of the European Copyright Society. https://europeancopyrightsociety.org/wp-content/uploads/2025/02/ecs_opinion_genai_january2025.pdf

European Commission. (2024). AI Act | Shaping Europe's digital future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Fairly Trained. (2024). Fairly Trained launches certification for generative AI models that respect creators' rights. https://www.fairlytrained.org/blog/fairly-trained-launches-certification-for-generative-ai-models-that-respect-creators-rights

Gemini. (2024). NFT Art on the Blockchain: Art Provenance. https://www.gemini.com/cryptopedia/fine-art-on-the-blockchain-nft-crypto

Glaze. (2023-2024). Glaze: Protecting Artists from Generative AI. https://glaze.cs.uchicago.edu/

Hollywood Reporter. (2024). AI Companies Take Hit as Judge Says Artists Have “Public Interest” In Pursuing Lawsuits. https://www.hollywoodreporter.com/business/business-news/artist-lawsuit-ai-midjourney-art-1235821096/

Hugging Face. (2025). Highlights from the First ICLR 2025 Watermarking Workshop. https://huggingface.co/blog/hadyelsahar/watermarking-iclr2025

IEEE Spectrum. (2024). With AI Watermarking, Creators Strike Back. https://spectrum.ieee.org/watermark-ai

IFPI. (2025). European artists unite in powerful campaign urging policymakers to 'Stay True To the [AI] Act'. https://www.ifpi.org/european-artists-unite-in-powerful-campaign-urging-policymakers-to-stay-true-to-the-ai-act/

JIPEL. (2024). Andersen v. Stability AI: The Landmark Case Unpacking the Copyright Risks of AI Image Generators. https://jipel.law.nyu.edu/andersen-v-stability-ai-the-landmark-case-unpacking-the-copyright-risks-of-ai-image-generators/

MIT Technology Review. (2023). This new data poisoning tool lets artists fight back against generative AI. https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/

MIT Technology Review. (2024). The AI lab waging a guerrilla war over exploitative AI. https://www.technologyreview.com/2024/11/13/1106837/ai-data-posioning-nightshade-glaze-art-university-of-chicago-exploitation/

Monda. (2024). Ultimate List of Data Licensing Deals for AI. https://www.monda.ai/blog/ultimate-list-of-data-licensing-deals-for-ai

Nightshade. (2023-2024). Nightshade: Protecting Copyright. https://nightshade.cs.uchicago.edu/whatis.html

Tech Policy Press. (2024). AI Training, the Licensing Mirage, and Effective Alternatives to Support Creative Workers. https://www.techpolicy.press/ai-training-the-licensing-mirage-and-effective-alternatives-to-support-creative-workers/

The Fine Art Ledger. (2024). Mastering Art Provenance: How Blockchain and Digital Registries Can Future-Proof Your Fine Art Collection. https://www.thefineartledger.com/post/mastering-art-provenance-how-blockchain-and-digital-registries

The Register. (2024). Non-profit certifies AI models that license scraped data. https://www.theregister.com/2024/01/19/fairly_trained_ai_certification_scheme/

University of Chicago Maroon. (2024). Guardians of Creativity: Glaze and Nightshade Forge New Frontiers in AI Defence for Artists. https://chicagomaroon.com/42054/news/guardians-of-creativity-glaze-and-nightshade-forge-new-frontiers-in-ai-defense-for-artists/

University of Southern California IP & Technology Law Society. (2025). AI, Copyright, and the Law: The Ongoing Battle Over Intellectual Property Rights. https://sites.usc.edu/iptls/2025/02/04/ai-copyright-and-the-law-the-ongoing-battle-over-intellectual-property-rights/

UTSA Today. (2025). Researchers show AI art protection tools still leave creators at risk. https://www.utsa.edu/today/2025/06/story/AI-art-protection-tools-still-leave-creators-at-risk.html

Adobe. (2024-2025). Learn about Content Credentials in Photoshop. https://helpx.adobe.com/photoshop/using/content-credentials.html

Adobe. (2024). Media Alert: Adobe Introduces Adobe Content Authenticity Web App to Champion Creator Protection and Attribution. https://news.adobe.com/news/2024/10/aca-announcement


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
