Human in the Loop

NoToAI

When Jason Allen submitted “Théâtre D'opéra Spatial” to the Colorado State Fair's digital art competition in August 2022, he wasn't anticipating a cultural reckoning. The piece, a sprawling, operatic vision of robed figures in a cosmic cathedral, won first prize in the “Digital Arts / Digitally-Manipulated Photography” category. Allen collected his $300 prize and blue ribbon, satisfied that he'd made his point.

Then the internet found out he'd created it using Midjourney, an artificial intelligence text-to-image generator.

“We're watching the death of artistry unfold right before our eyes,” one person wrote on Twitter. Another declared it “so gross.” Within days, Allen's win had sparked a furious debate that continues to reverberate through creative communities worldwide. The controversy wasn't simply about whether AI-generated images constitute “real art”: it was about what happens when algorithmic tools trained on billions of scraped images enter the communal spaces where human creativity has traditionally flourished.

“I won, and I didn't break any rules,” Allen told The New York Times in September 2022, defending his submission. But the backlash suggested that something more profound than rule-breaking was at stake. What Allen had inadvertently revealed was a deepening fracture in how we understand creative labour, artistic ownership, and the future of collaborative cultural production.

More than three years later, that fracture has widened into a chasm. Generative AI tools (systems like Stable Diffusion, Midjourney, DALL-E 2, and their proliferating descendants) have moved from experimental novelty to ubiquitous presence. They've infiltrated makerspaces, artist collectives, community art programmes, and local cultural institutions. And in doing so, they've forced an urgent reckoning with fundamental questions: Who owns creativity when machines can generate it? What happens to communal artistic practice when anyone with a text prompt can produce gallery-worthy images in seconds? And can local cultural production survive when the tools transforming it are trained on the uncompensated labour of millions of artists?

The Technical Reality

To understand generative AI's impact on community creativity, one must first grasp how these systems actually work, and why that mechanism matters immensely to working artists.

Text-to-image AI generators like Stable Diffusion and Midjourney are built using a technique called “diffusion,” in which neural networks are trained on enormous datasets of images paired with text descriptions. Stable Diffusion, released publicly by Stability AI in August 2022, was trained on a subset of LAION-5B: a collection of 5.85 billion image-text pairs scraped from across the internet.

The training process is technically sophisticated but conceptually straightforward: the AI analyses billions of images, learning to recognise patterns, styles, compositional techniques, and visual relationships. When a user types a prompt like “Victorian street scene at dusk, oil painting style,” the system generates an image by reversing a noise-adding process, gradually constructing visual information that matches the learned patterns associated with those descriptive terms.
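
In code, that reversal is an iterative denoising loop. What follows is a minimal, illustrative sketch of standard diffusion sampling, with a toy placeholder standing in for the trained denoising network; production systems like Stable Diffusion run a loop of this shape with a large text-conditioned neural network and far more machinery around it.

```python
import numpy as np

T = 1000                             # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)   # noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal retained at each step

def add_noise(x0, t, rng):
    """Forward process (training): blend a clean image with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps, eps

def predict_noise(xt, t, prompt):
    """Placeholder for the trained denoiser: the real model estimates the
    noise present in xt, conditioned on the text prompt. A zero estimate
    keeps this sketch runnable without a trained network."""
    return np.zeros_like(xt)

def sample(shape, prompt, rng):
    """Reverse process (generation): start from pure noise, denoise stepwise."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = predict_noise(x, t, prompt)
        # Standard DDPM update: strip out the predicted noise component
        x = (x - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                    # re-inject a little noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
image = sample((64, 64, 3), "Victorian street scene at dusk", rng)
```

With a zero “denoiser” the output is just shaped noise; the billions of training pairs exist precisely to teach the network what to subtract at each step.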

Crucially, these models don't store actual copies of training images. Instead, they encode mathematical representations of visual patterns gleaned from those images. This technical distinction lies at the heart of ongoing legal battles over copyright infringement, a distinction that many artists find unconvincing.

“This thing wants our jobs, it's actively anti-artist,” digital artist RJ Palmer wrote in August 2022, articulating what thousands of creative professionals were feeling. The concern wasn't abstract: AI image generators could demonstrably replicate the distinctive styles of specific living artists, sometimes with unsettling accuracy.

When Stability AI announced Stable Diffusion's public release in August 2022, company founder Emad Mostaque described it as trained on “100,000GB of images” gathered from the web. The model's capabilities were immediately stunning and immediately controversial. Artists discovered their work had been incorporated into training datasets without consent, knowledge, or compensation. Some found that typing their own names into these generators produced images mimicking their signature styles, as if decades of artistic development had been compressed into a prompt-accessible aesthetic filter.

The artistic community's response escalated from online outrage to coordinated legal action with remarkable speed. On 13 January 2023, three artists (Sarah Andersen, Kelly McKernan, and Karla Ortiz) filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt, alleging copyright infringement on a massive scale.

The lawsuit, filed by lawyer Matthew Butterick and the Joseph Saveri Law Firm, claims these companies “infringed the rights of millions of artists” by training AI systems on billions of images “without the consent of the original artists.” The complaint characterises AI image generators as sophisticated collage tools that “store compressed copies of training images” and then “recombine” them, a technical characterisation that experts have disputed but which captures the plaintiffs' fundamental grievance.

“This isn't just about three artists,” Butterick wrote in announcing the suit. “It's about whether AI development will honour the rights of creators or steamroll them.”

Getty Images escalated the conflict further, filing suit against Stability AI in London's High Court in January 2023. The stock photo agency alleged that Stability AI “unlawfully copied and processed millions of images protected by copyright... to the detriment of the content creators.” Getty CEO Craig Peters told the BBC the company believed “content owners should have a say in how their work is used,” framing the lawsuit as defending photographers' and illustrators' livelihoods.

These legal battles have forced courts to grapple with applying decades-old copyright law to technologies that didn't exist when those statutes were written. In the United States, the question hinges largely on whether training AI models on copyrighted images constitutes “fair use”: a doctrine that permits limited use of copyrighted material without permission for purposes like criticism, commentary, or research.

“For hundreds of years, human artists learned by copying the art of their predecessors,” noted Patrick Goold, a reader in law at City, University of London, when commenting on the lawsuits to the BBC. “Furthermore, at no point in history has the law sanctioned artists for copying merely an artistic style. The question before the US courts today is whether to abandon these long-held principles in relation to AI-generated images.”

That question remains unresolved as of October 2025, with lawsuits proceeding through courts on both sides of the Atlantic. The outcomes will profoundly shape how generative AI intersects with creative communities, determining whether these tools represent legal innovation or industrial-scale infringement.

The Cultural Institutions Respond

While legal battles unfold, cultural institutions have begun tentatively exploring how generative AI might fit within their missions to support and showcase artistic practice. The results have been mixed, revealing deep tensions within the art world about algorithmic creativity's legitimacy and value.

The Museum of Modern Art in New York has integrated AI-generated works into its programming, though with careful contextualisation. In September 2025, MoMA debuted “Sasha Stiles: A LIVING POEM” in its galleries, a generative language system that combines Stiles' original poetry, fragments from MoMA's text-art collection, p5.js code, and GPT-4 to create evolving poetic works. The installation, which incorporates music by Kris Bone, represents MoMA's measured approach to AI art: highlighting works where human creativity shapes and directs algorithmic processes, rather than simply prompt-based image generation.

Other institutions have been more cautious. Many galleries and museums have declined to exhibit AI-generated works, citing concerns about authenticity, artistic intentionality, and the ethical implications of systems trained on potentially pirated material. The hesitancy reflects broader uncertainty about how to evaluate AI-generated work within traditional curatorial frameworks developed for human-created art.

“We're still working out what questions to ask,” one curator at a major metropolitan museum said privately, speaking on condition of anonymity. “How do we assess aesthetic merit when the 'artist' is partly a system trained on millions of other people's images? What does artistic voice mean in that context? These aren't just technical questions; they're philosophical ones about what art fundamentally is.”

Cultural institutions that support community-based art-making have faced even thornier dilemmas. Organisations receiving public funding from bodies like the National Endowment for the Arts or the Knight Foundation must navigate tensions between supporting artistic innovation and ensuring their grants don't inadvertently undermine the livelihoods of the artists they exist to serve.

The Knight Foundation, which has invested hundreds of millions in arts and culture across American communities since 1950, has largely steered clear of funding AI-focused art projects as of 2025, instead continuing to emphasise support for human artists, cultural spaces, and traditional creative practices. Similarly, the NEA has maintained its focus on supporting individual artists and nonprofit organisations engaged in human-led creative work, though the agency continues researching AI's impacts on creative industries.

Some community arts organisations have attempted to stake out middle ground positions. Creative Capital, a New York-based nonprofit that has supported innovative artists with funding and professional development since 1999, has neither embraced nor rejected AI tools outright. Instead, the organisation continues evaluating proposals based on artistic merit and the artist's creative vision, regardless of whether that vision incorporates algorithmic elements. This pragmatic approach reflects the complexity facing arts funders: how to remain open to genuine innovation whilst not inadvertently accelerating the displacement of human creative labour that such organisations exist to support.

The Grassroots Resistance

While institutions have proceeded cautiously, working artists (particularly those in illustration, concept art, and digital creative fields) have mounted increasingly organised resistance to generative AI's encroachment on their professional territories.

ArtStation, a popular online portfolio platform used by digital artists worldwide, became a flashpoint in late 2022 when it launched “DreamUp,” its own AI image generation tool. The backlash was swift and furious. Artists flooded the platform with images protesting AI-generated art, many featuring variations of “No AI Art” or “#NoToAI” slogans. Some began watermarking their portfolios with anti-AI messages. Others left the platform entirely.

The protests revealed a community in crisis. For many digital artists, ArtStation represented more than just a portfolio hosting service. It was a professional commons, a place where illustrators, concept artists, and digital painters could showcase their work, connect with potential clients, and participate in a community of practice. The platform's decision to introduce an AI generator felt like a betrayal, transforming a space dedicated to celebrating human creativity into one that potentially undermined it.

“We're being put out of work by machines trained on our own labour,” one illustrator posted during the ArtStation protests. “It's not innovation. It's theft with extra steps.”

The protest movement extended beyond online platforms. Artists organised petition drives, wrote open letters to AI companies, and sought media attention to publicise their concerns. Some formed collectives specifically to resist AI encroachment on creative labour, sharing information about which clients were replacing human artists with AI generation and coordinating collective responses to industry developments.

These efforts faced significant challenges. Unlike traditional labour organising, where workers can withhold their labour as leverage, visual artists working in dispersed, freelance arrangements had limited collective power. They couldn't strike against AI companies who had already scraped their work. They couldn't picket internet platforms that hosted training datasets. The infrastructure enabling generative AI operated at scales and through mechanisms that traditional protest tactics struggled to address.

Beyond protest, some artists and technologists attempted to create alternative systems that might address the consent and compensation issues plaguing existing AI tools. In 2022, musicians Holly Herndon and Mat Dryhurst, both pioneers in experimental electronic music and AI-assisted composition, helped launch Spawning AI and its associated tools “Have I Been Trained?” and “Source.Plus.” These platforms aimed to give artists more control over whether their work could be used in AI training datasets.

Herndon and Dryhurst brought unique perspectives to the challenge. Both had experimented extensively with AI in their own creative practices, using machine learning systems to analyse and generate musical compositions. They understood the creative potential of these technologies whilst remaining acutely aware of their implications for artistic labour and autonomy. Their initiatives represented an attempt to chart a middle path: acknowledging AI's capabilities whilst insisting on artists' right to consent and control.

The “Have I Been Trained?” tool allowed artists to search the LAION dataset to see if their work had been included in the training data for Stable Diffusion and other models. For many artists, using the tool was a sobering experience, revealing that hundreds or thousands of their images had been scraped and incorporated into systems they hadn't consented to and from which they received no compensation.
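
Conceptually, that search is a scan over the dataset's image-caption metadata. The sketch below is a hypothetical stand-alone version, assuming a local parquet file of LAION metadata with the dataset's published URL and TEXT columns; the actual “Have I Been Trained?” service wrapped this kind of lookup (plus image-similarity search) in a web interface.

```python
import pandas as pd

artist = "Kelly McKernan"  # one of the named plaintiffs, as an example

# Hypothetical local copy of LAION metadata; the column names (URL, TEXT)
# follow the dataset's published layout, but the file path is illustrative.
meta = pd.read_parquet("laion_subset_metadata.parquet")

hits = meta[meta["TEXT"].str.contains(artist, case=False, na=False)]
print(f"{len(hits)} captions mention '{artist}'")
print(hits["URL"].head())
```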

However, these opt-out tools faced inherent limitations. By the time they launched, most major AI models had already been trained: the datasets compiled, the patterns extracted, the knowledge embedded in billions of neural network parameters. Allowing artists to remove future works from future datasets couldn't undo the training that had already occurred. It was, critics noted, rather like offering to lock the stable door after the algorithmic horses had bolted.

Moreover, the opt-out approach placed the burden on individual artists to police the use of their work across the vast, distributed systems of the internet. For working artists already stretched thin by professional demands, adding dataset monitoring to their responsibilities was often impractical. The asymmetry was stark: AI companies could scrape and process billions of images with automated systems, whilst artists had to manually search databases and submit individual opt-out requests.

As of October 2025, the Spawning AI platforms remain under maintenance, their websites displaying messages about “hacking the mainframe”, a perhaps unintentionally apt metaphor for the difficulty of imposing human control on systems already unleashed into the wild. The challenges Herndon and Dryhurst encountered illustrate a broader problem: technological solutions to consent and compensation require cooperation from the AI companies whose business models depend on unrestricted access to training data. Without regulatory requirements or legal obligations, such cooperation remains voluntary and therefore uncertain.

The Transformation of Collaborative Practice

Here's what's getting lost in the noise about copyright and compensation: generative AI isn't just changing how individual artists work. It's rewiring the fundamental dynamics of how communities create art together.

Traditional community art-making runs on shared human labour, skill exchange, and collective decision-making. You bring the painting skills, I'll handle sculpture, someone else offers design ideas. The creative process itself becomes the community builder. Diego Rivera's collaborative murals. The community arts movement of the 1960s and 70s. In every case, the value wasn't just the finished artwork. It was the process. Working together. Creating something that embodied shared values.

Now watch what generative AI does to that equation.

Anyone with a text prompt can generate intricate illustrations. A community group planning a mural no longer needs to recruit a painter; it can generate design options algorithmically and select a preferred direction without a single brushstroke.

Yes, this democratises visual expression. Disability activists have noted that AI generation tools enable creative participation for people whose physical abilities might limit traditional art-making. New forms of access.

But here's the problem: this “democratisation” potentially undermines the collaborative necessity that has historically brought diverse community members together around shared creative projects. If each person can generate their own complete visions independently, what incentive exists to engage in the messy, time-consuming work of collaborative creation? What happens when the artistic process becomes solitary prompt-crafting rather than collective creation?

Consider a typical community mural project before generative AI. Professional artists, local residents, young people, elders, all brought together. Early stages involved conversations. What should the mural represent? What stories should it tell? What aesthetic traditions should it draw upon? These conversations themselves built understanding across differences. Participants shared perspectives. Negotiated competing visions.

The actual painting process provided further opportunities for collaboration and skill-sharing. Experienced artists mentoring newcomers. Residents learning techniques. Everyone contributing labour to the project's realisation.

When algorithmic tools enter this space, they risk transforming genuine collaboration into consultation exercises. Community members provide input (in the form of prompts or aesthetic preferences) that professionals then render into finished works using AI generators. The distinction might seem subtle. But it fundamentally alters the social dynamics and community-building functions of collaborative art-making. Instead of hands-on collaborative creation, participants review AI-generated options and vote on preferences. That's closer to market research than creative collaboration.

This shift carries particular weight for how community art projects create local ownership and investment. When residents physically paint a community mural, their labour is literally embedded in the work. They've spent hours or days creating something tangible that represents their community. Deep personal and collective investment in the finished piece. An AI-generated mural, regardless of how carefully community input shaped the prompts, lacks this dimension of embodied labour and direct creative participation.

Some organisations are attempting to integrate AI tools whilst preserving collaborative human creativity. One strategy: using AI generation during early conceptual phases whilst maintaining human creative labour for final execution. Generate dozens of AI images to explore compositional approaches. Use these outputs as springboards for discussion. But ultimately create the final mural through traditional collaborative painting.

Herndon and Dryhurst have explored similar territory in music. Their Holly+ project, launched in 2021, created a digital instrument trained on Herndon's voice that other artists could use with permission. The approach deliberately centred collaboration and consent, demonstrating how AI tools might augment rather than replace human creative partnership.

These examples suggest possible paths forward. But they face constant economic pressure. As AI-generated content becomes cheaper and faster, institutions operating under tight budgets face strong incentives to rely more heavily on algorithmic generation. The risk? A gradual hollowing out of community creative practice. Social and relationship-building dimensions sacrificed for efficiency and cost savings.

The Environmental and Ethical Shadows

Beyond questions of copyright, consent, and creative labour lie deeper concerns about generative AI's environmental costs and ethical implications: issues with particular resonance for communities thinking about sustainable cultural production.

Training large AI models requires enormous computational resources, consuming vast amounts of electricity and generating substantial carbon emissions. While precise figures for specific models remain difficult to verify, researchers have documented that training a single large language model can emit as much carbon as several cars over their entire lifetimes. Image generation models require similar computational intensity.
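
Back-of-envelope arithmetic makes the scale concrete. Every figure in this sketch is an assumption chosen for illustration, not a measurement of any particular model's training run.

```python
# Rough energy/carbon estimate for a hypothetical large training run.
num_gpus = 256           # accelerators used (assumed)
hours = 24 * 30          # one month of continuous training (assumed)
watts_per_gpu = 400      # average draw per accelerator (assumed)
pue = 1.2                # data-centre overhead multiplier (assumed)
kg_co2_per_kwh = 0.4     # grid carbon intensity (assumed)

energy_kwh = num_gpus * hours * watts_per_gpu * pue / 1000
emissions_t = energy_kwh * kg_co2_per_kwh / 1000
print(f"{energy_kwh:,.0f} kWh, roughly {emissions_t:.0f} tonnes CO2e")
# With these assumptions: ~88,474 kWh and ~35 tonnes CO2e. Frontier-scale
# runs use far more hardware for far longer, pushing figures much higher.
```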

For communities and institutions committed to environmental sustainability (a growing priority in arts and culture sectors), the carbon footprint of AI-generated art raises uncomfortable questions. Does creating images through energy-intensive computational processes align with values of environmental responsibility? How do we weigh the creative possibilities of AI against its environmental impacts?

These concerns intersect with broader ethical questions about how AI systems encode and reproduce social biases. Models trained on internet-scraped data inevitably absorb and can amplify the biases, stereotypes, and problematic representations present in their training material. Early versions of AI image generators notoriously struggled with accurately and respectfully representing diverse human faces, body types, and cultural contexts, producing results that ranged from awkwardly homogenised to explicitly offensive.

While newer models have improved in this regard through better training data and targeted interventions, the fundamental challenge remains: AI systems trained predominantly on Western, English-language internet content tend to encode Western aesthetic norms and cultural perspectives as default. For communities using these tools to create culturally specific artwork or represent local identity and history, this bias presents serious limitations.

Moreover, the concentration of AI development in a handful of well-resourced technology companies raises questions about cultural autonomy and self-determination. When the algorithmic tools shaping visual culture are created by companies in Silicon Valley, what happens to local and regional creative traditions? How do communities preserve distinctive aesthetic practices when powerful, convenient tools push toward algorithmically optimised homogeneity?

The Uncertain Future

As of October 2025, generative AI's impact on community creativity, collaborative art, and local cultural production remains contested and in flux. Different scenarios seem possible, depending on how ongoing legal battles, technological developments, and cultural negotiations unfold.

In one possible future, legal and regulatory frameworks evolve to establish clearer boundaries around AI training data and generated content. Artists gain meaningful control over whether their work can be used in training datasets. AI companies implement transparent, opt-in consent mechanisms and develop compensation systems for creators whose work trains their models. Generative AI becomes one tool among many in creative communities' toolkits: useful for specific applications but not displacing human creativity or collaborative practice.

This optimistic scenario assumes substantial changes in how AI development currently operates: changes that powerful technology companies have strong financial incentives to resist. It also requires legal victories for artists in ongoing copyright cases, outcomes that remain far from certain given the complexities of applying existing law to novel technologies.

A grimmer possibility sees current trajectories continue unchecked. AI-generated content proliferates, further depressing already precarious creative economies. Community art programmes increasingly rely on algorithmic generation to save costs, eroding the collaborative and relationship-building functions of collective creativity. The economic incentives toward efficiency overwhelm cultural commitments to human creative labour, whilst legal frameworks fail to establish meaningful protections or compensation mechanisms.

A third possibility (neither wholly optimistic nor entirely pessimistic) envisions creative communities developing hybrid practices that thoughtfully integrate AI tools while preserving essential human elements. In this scenario, artists and communities establish their own principles for when and how to use generative AI. Some creative contexts explicitly exclude algorithmic generation, maintaining spaces for purely human creativity. Others incorporate AI tools strategically, using them to augment rather than replace human creative labour. Communities develop literacies around algorithmic systems, understanding both their capabilities and limitations.

This hybrid future requires cultural institutions, funding bodies, and communities themselves to actively shape how AI tools integrate into creative practice, rather than passively accepting whatever technology companies offer. It means developing ethical frameworks, establishing community standards, and being willing to reject conveniences that undermine fundamental creative values.

What seems certain is that generative AI will not simply disappear. The technologies exist, the models have been released, and the capabilities they offer are too powerful for some actors to ignore. The question facing creative communities isn't whether AI image generation will be part of the cultural landscape; it already is. The question is whether communities can assert enough agency to ensure these tools serve rather than supplant human creativity, collaboration, and cultural expression.

The Economic Restructuring of Creative Work

Underlying all these tensions is a fundamental economic restructuring of creative labour, one with particular consequences for community arts practice and local cultural production.

Before generative AI, the economics of visual art creation established certain boundaries and relationships. Creating images required time, skill, and effort. This created economic value that could sustain professional artists, whilst also creating spaces where collaborative creation made economic sense.

Commissioning custom artwork cost money, incentivising businesses and institutions to carefully consider what they truly needed and to value the results. The economic friction of creative production shaped not just industries but cultural practices and community relationships.

Generative AI collapses much of this economic structure. The marginal cost of producing an additional AI-generated image approaches zero: just the computational expense of a few seconds of processing time. This economic transformation ripples through creative communities in complex ways.
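
A quick worked comparison shows how completely the cost floor drops out; the figures are assumptions for the sake of arithmetic, not market data.

```python
# Marginal cost of one AI-generated image vs a commissioned illustration.
gpu_dollars_per_hour = 2.00    # rented accelerator (assumed)
seconds_per_image = 5          # generation time (assumed)
commission_fee = 400.00        # modest freelance illustration fee (assumed)

ai_cost = gpu_dollars_per_hour * seconds_per_image / 3600
print(f"AI image:   ${ai_cost:.4f}")                        # ~$0.003
print(f"Commission: ${commission_fee:.2f}")
print(f"Ratio:      {commission_fee / ai_cost:,.0f} to 1")  # ~144,000 to 1
```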

For commercial creative work, the effects have been swift and severe. Businesses that once hired illustrators for marketing materials, product visualisations, or editorial content increasingly generate images in-house using AI tools. The work still happens, but it shifts from paid creative labour to unpaid tasks added to existing employees' responsibilities. A marketing manager who once commissioned illustrations now spends an hour crafting prompts and selecting outputs. The images get made, but the economic value that previously flowed to artists vanishes.

This matters immensely for community creative capacity. Many professional artists have historically supplemented income from commercial work with community arts practice: teaching classes, facilitating workshops, leading public art projects. As commercial income shrinks, artists must choose between reducing community engagement to pursue other income sources or accepting reduced overall earnings. Either way, communities lose experienced creative practitioners who once formed the backbone of local arts infrastructure.

The economics also reshape what kinds of creative projects seem viable. When image creation is essentially free, the calculus around community art initiatives changes. A community organisation planning a fundraising campaign might once have allocated budget for commissioned artwork, hiring a local artist and building economic relationships within the community. Now they can generate imagery for free, keeping those funds for other purposes. Individually rational economic decisions accumulate into a systematic withdrawal of resources from community creative labour.

Yet the economic transformation isn't entirely one-directional. Some artists have repositioned themselves as creative directors rather than purely executors, offering vision, curation, and aesthetic judgement that AI tools cannot replicate. Whether this adaptation can sustain viable creative careers at scale, or merely benefits a fortunate few whilst the majority face displacement, remains an open question.

Reclaiming the Commons

At its core, the generative AI disruption of community creativity is a story about power, labour, and cultural commons. It's about who controls the tools and data shaping visual culture. It's about whether creative labour will be valued and compensated or strip-mined to train systems that then undercut the artists who provided that labour. It's about whether local communities can maintain distinctive cultural practices or whether algorithmic optimisation pushes everything toward a bland, homogenised aesthetic centre.

These aren't new questions. Every significant technological shift in creative production (from photography to digital editing software) has provoked similar anxieties about artistic authenticity, labour displacement, and cultural change. In each previous case, creative communities eventually adapted, finding ways to incorporate new tools whilst preserving what they valued in established practices.

Photography didn't destroy painting, though 19th-century painters feared it would. Digital tools didn't eliminate hand-drawn illustration, though they transformed how illustration was practised and distributed. In each case, creative communities negotiated terms with the new technology, establishing norms and developing hybrid practices.

But generative AI represents a transformation of different character and scale. Previous creative technologies augmented human capabilities or changed how human creativity was captured and distributed. A camera didn't paint portraits; it captured reality through a lens that required human judgement about composition, lighting, timing, and subject. Photoshop didn't draw illustrations; it provided tools for human artists to manipulate digital imagery with greater flexibility and power.

Generative AI, by contrast, claims to replace significant aspects of human creative labour entirely, producing outputs that are often indistinguishable from human-made work, trained on that work without consent or compensation. The technology doesn't merely augment human creativity; it aspires to automate it, substituting algorithmic pattern-matching for human creative vision and labour.

This distinction matters because it shapes what adaptation looks like. Creative communities can't simply treat generative AI as another tool in the toolkit, because the technology's fundamental operation (replacing human creative labour with computational processing) cuts against core values of creative practice and community arts development. The challenge isn't just learning to use new tools; it's determining whether and how those tools can coexist with sustainable creative communities and valued cultural practices.

Paths forward are emerging. Some artists and communities are establishing “AI-free” zones and practices, explicitly rejecting algorithmic generation in favour of purely human creativity. These spaces might be seen as resistance or preservation efforts, maintaining alternatives to algorithmically-dominated creative production. Whether they can sustain themselves economically whilst competing with free or cheap AI-generated alternatives remains uncertain.

Other communities are attempting to develop ethical frameworks for AI use: principles that govern when algorithmic generation is acceptable and when it isn't. These frameworks typically distinguish between using AI as a tool within human-directed creative processes versus allowing it to replace human creative labour entirely. Implementation challenges abound, particularly around enforcement and the slippery slope from limited to extensive AI reliance.

This isn't mere technological evolution. It's a fundamental challenge to creative labour's value and creative communities' autonomy. Whether artists, communities, and cultural institutions can meet that challenge (can reassert control over how algorithmic tools enter creative spaces and what values govern their use) will determine whether the future of community creativity is one of genuine flourishing or gradual hollowing out.

The stakes extend beyond creative communities themselves. Arts and culture function as crucial elements of civic life, building social connection, facilitating expression, processing collective experiences, and creating shared meaning. If generative AI undermines the sustainable practice of community creativity, the losses will extend far beyond artists' livelihoods, affecting the social fabric and cultural health of communities themselves.

The algorithmic genie is out of the bottle. The question is whether it will serve the commons or consume it. That answer depends not on technology alone but on choices communities, institutions, and societies make about what they value, what they're willing to fight for, and what kind of creative future they want to build.


Sources and References

Allen, Jason M. (2022). Multiple posts in Midjourney Discord server regarding Colorado State Fair win. Discord. August-September 2022. https://discord.com/channels/662267976984297473/993481462068301905/1012597813357592628

Andersen, Sarah, Kelly McKernan, and Karla Ortiz v. Stability AI, Midjourney, and DeviantArt. (2023). Class Action Complaint. United States District Court, Northern District of California. Case filed 13 January 2023. https://stablediffusionlitigation.com/

BBC News. (2023). “AI image creator faces UK and US legal challenges.” BBC Technology. 18 January 2023. https://www.bbc.com/news/technology-64285227

Butterick, Matthew. (2023). “Stable Diffusion litigation.” Announcement blog post. 16 January 2023. https://stablediffusionlitigation.com/

Colorado State Fair. (2022). “2022 Fine Arts Competition Results: Digital Arts / Digitally-Manipulated Photography.” https://coloradostatefair.com/wp-content/uploads/2022/08/2022-Fine-Arts-First-Second-Third.pdf

Goold, Patrick. (2023). Quoted in BBC News. “AI image creator faces UK and US legal challenges.” 18 January 2023.

LAION (Large-scale Artificial Intelligence Open Network). (2022). “LAION-5B: A new era of open large-scale multi-modal datasets.” Dataset documentation. https://laion.ai/

MoMA (Museum of Modern Art). (2025). “Sasha Stiles: A LIVING POEM.” Exhibition information. September 2025-Spring 2026. https://www.moma.org/calendar/exhibitions/5839

Mostaque, Emad. (2022). Quoted in multiple sources regarding Stable Diffusion training data size.

Palmer, RJ. (2022). Twitter post regarding AI art tools and artist livelihoods. August 2022.

Peters, Craig. (2023). Quoted in BBC News. “AI image creator faces UK and US legal challenges.” 18 January 2023.

Robak, Olga. (2022). Quoted in The Pueblo Chieftain and The New York Times regarding Colorado State Fair competition rules and judging.

Roose, Kevin. (2022). “An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy.” The New York Times. 2 September 2022. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html

Stability AI. (2022). “Stable Diffusion Public Release.” Company announcement. 22 August 2022. https://stability.ai/news/stable-diffusion-public-release

Vincent, James. (2022). “An AI-generated artwork's state fair victory fuels arguments over 'what art is'.” The Verge. 1 September 2022. https://www.theverge.com/2022/9/1/23332684/ai-generated-art-blob-opera-dall-e-midjourney

Vincent, James. (2023). “AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit.” The Verge. 16 January 2023. https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart

***

Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
