AI Trains on Your Art: Three Models That Could Pay You Back

When Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed their class action lawsuit against Anthropic in 2024, they joined a growing chorus of creators demanding answers to an uncomfortable question: if artificial intelligence companies are building billion-dollar businesses by training on creative works, shouldn't the artists who made those works receive something in return? In June 2025, they received an answer from U.S. District Judge William Alsup that left many in the creative community stunned: “The training use was a fair use,” he wrote, ruling that Anthropic's use of their books to train Claude was “exceedingly transformative.”
The decision underscored a stark reality facing millions of artists, writers, photographers, and musicians worldwide. Whilst courts continue debating whether AI training constitutes copyright infringement, technology companies are already scraping, indexing, and ingesting vast swathes of creative work at a scale unprecedented in human history. The LAION-5B dataset alone contains links to 5.85 billion image-text pairs scraped from the web, many without the knowledge or consent of their creators.
But amidst the lawsuits and the polarised debates about fair use, a more practical conversation is emerging: regardless of what courts ultimately decide, what practical models could fairly compensate artists whose work informs AI training sets? And more importantly, what legal and technical barriers must be addressed to implement these models at scale? Several promising frameworks are beginning to take shape, from collective licensing organisations modelled on the music industry to blockchain-based micropayment systems and opt-in contribution platforms. Understanding these models and their challenges is essential for anyone seeking to build a more equitable future for AI and creativity.
The Collective Licensing Model
When radio emerged in the 1920s, it created an impossible administrative problem: how could thousands of broadcasters possibly negotiate individual licences with every songwriter whose music they played? The solution came through collective licensing organisations like ASCAP and BMI, which pooled rights from millions of creators and negotiated blanket licences on their behalf. Today, these organisations handle approximately 38 million musical works, collecting fees from everyone from Spotify to shopping centres and distributing royalties to composers without requiring individual contracts for every use.
This model has inspired the most significant recent development in AI training compensation: the Really Simple Licensing (RSL) Standard, announced in September 2025 by a coalition including Reddit, Yahoo, Medium, and dozens of other major publishers. The RSL protocol represents the first unified framework for extracting payment from AI companies, allowing publishers to embed licensing terms directly into robots.txt files. Rather than simply blocking crawlers or allowing unrestricted access, sites can now demand subscription fees, per-crawl charges, or compensation each time an AI model references their work.
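The mechanics are straightforward to sketch. The Python snippet below shows how a compliant crawler might look for licensing terms declared alongside ordinary robots.txt rules; the "License:" directive name and the location of the machine-readable terms file are illustrative assumptions here, not the published RSL specification.

```python
from io import StringIO

# A hypothetical robots.txt carrying a machine-readable licensing pointer.
# The "License:" directive and the terms-file location are illustrative
# assumptions, not the published RSL specification.
ROBOTS_TXT = """\
User-agent: *
Disallow: /drafts/

License: https://example.com/.well-known/ai-licence-terms.xml
"""

def extract_licence_urls(robots_txt: str) -> list[str]:
    """Collect the targets of any licensing directives found in a robots.txt file."""
    urls = []
    for line in StringIO(robots_txt):
        line = line.strip()
        if line.lower().startswith("license:"):
            urls.append(line.split(":", 1)[1].strip())
    return urls

if __name__ == "__main__":
    print(extract_licence_urls(ROBOTS_TXT))
    # ['https://example.com/.well-known/ai-licence-terms.xml']
```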
The RSL Collective operates as a non-profit clearinghouse, similar to how ASCAP and BMI pool musicians' rights. Publishers join at no cost, and the collective handles negotiations and royalty distribution across potentially millions of sites. The promise is compelling: instead of individual creators negotiating with dozens of AI companies, a single organisation wields collective bargaining power.
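How such a clearinghouse might actually divide the money is easier to grasp with a toy example. The sketch below splits a blanket licence fee pro rata by recorded usage; real collectives apply far more elaborate, and contested, distribution rules, so the weighting here is purely an assumption.

```python
def distribute_blanket_fee(fee: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Split a blanket licence fee across rights holders in proportion to recorded uses.

    The pro-rata weighting is an illustrative assumption; real collectives use
    far more elaborate (and contested) distribution formulas.
    """
    total = sum(usage_counts.values())
    if total == 0:
        return {holder: 0.0 for holder in usage_counts}
    return {holder: fee * count / total for holder, count in usage_counts.items()}

# Example: a $1,000,000 blanket fee split across three rights holders.
print(distribute_blanket_fee(1_000_000, {"label_a": 600, "indie_b": 300, "self_pub_c": 100}))
```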
Yet the model faces significant hurdles. Most critically, no major AI company has agreed to honour the RSL standard. OpenAI, Anthropic, Google, and Meta continue to train models using data scraped from the web, relying on fair use arguments rather than licensing agreements. Without enforcement mechanisms, collective licensing remains optional, and AI companies have strong financial incentives to avoid it. Training GPT-4 reportedly cost over $100 million; adding licensing fees could significantly increase those costs.
The U.S. Copyright Office's May 2025 report on AI training acknowledged these challenges whilst endorsing the voluntary licensing approach. The report noted that whilst collective licensing through Collective Management Organisations (CMOs) could “reduce the logistical burden of negotiating with numerous copyright owners,” small rights holders often view their collective licensing compensation as insufficient, whilst “the entire spectrum of rights holders often regard government-established rates of compulsory licenses as too low.”
The international dimension adds further complexity. Collective licensing organisations operate under national legal frameworks with varying powers and mandates. Coordinating licensing across jurisdictions would require unprecedented cooperation between organisations with different governance structures, legal obligations, and technical infrastructures. When an AI model trains on content from dozens of countries, each with its own copyright regime, determining who owes what to whom becomes extraordinarily complex.
Moreover, the collective licensing model developed for music faces challenges when applied to other creative works. Music licensing benefits from clear units of measurement (plays, performances) and relatively standardised usage patterns. AI training is fundamentally different: works are ingested once during training, then influence model outputs in ways that may be impossible to trace to specific sources. How do you count uses when a model has absorbed millions of images but produces outputs that don't directly reproduce any single one?
Opt-In Contribution Systems
Whilst collective licensing attempts to retrofit existing rights management frameworks onto AI training, opt-in contribution systems propose a more fundamental inversion: instead of assuming AI companies can use everything unless creators opt out, start from the premise that nothing is available for training unless creators explicitly opt in.
The distinction matters enormously. Tech companies have promoted opt-out approaches as a workable compromise. Stability AI, for instance, partnered with Spawning.ai to create “Have I Been Trained,” allowing artists to search for their works in datasets and request exclusion. Over 80 million artworks have been opted out through this tool. But that represents a tiny fraction of the 2.3 billion images in Stable Diffusion's training data, and the opt-out only applies to future versions. Once an algorithm trains on certain data, that data cannot be removed retroactively.
The problems with opt-out systems are both practical and philosophical. A U.S. study on data privacy preferences found that 88% of companies failed to respect user opt-out preferences. Moreover, an artist may successfully opt out from their own website, but their works may still appear in datasets if posted on Instagram or other platforms that haven't opted out. And it's unreasonable to expect individual creators to notify hundreds or thousands of AI service providers about opt-out preferences.
Opt-in systems flip this default. Under this framework, artists would choose whether to include their work in training sets under structured agreements, similar to how musicians opt into platforms like Spotify. If an AI-driven product becomes successful, contributing artists could receive substantial compensation through various payment models: one-time fees for dataset inclusion, revenue-sharing percentages tied to model performance, or tiered compensation based on how frequently specific works influence outputs.
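None of these payment structures is standardised, but the arithmetic behind them is simple to make concrete. The sketch below illustrates all three approaches, with every rate, threshold, and weighting invented purely for illustration.

```python
def one_time_fee(n_works: int, fee_per_work: float) -> float:
    """Flat payment for dataset inclusion, independent of later model revenue."""
    return n_works * fee_per_work

def revenue_share(model_revenue: float, pool_share: float, contributor_weight: float) -> float:
    """A contributor's slice of a revenue pool, weighted by their share of the corpus."""
    return model_revenue * pool_share * contributor_weight

def influence_tiered(influence_score: float, tiers: list[tuple[float, float]]) -> float:
    """Payout keyed to how often a work measurably influences outputs.

    `tiers` is a list of (minimum influence score, payout) pairs, highest first.
    """
    for threshold, payout in tiers:
        if influence_score >= threshold:
            return payout
    return 0.0

# Illustrative numbers only.
print(one_time_fee(n_works=250, fee_per_work=2.0))                                           # 500.0
print(revenue_share(model_revenue=10_000_000, pool_share=0.05, contributor_weight=0.0001))   # 50.0
print(influence_tiered(0.7, tiers=[(0.9, 100.0), (0.5, 25.0), (0.1, 5.0)]))                  # 25.0
```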
Stability AI's CEO Prem Akkaraju signalled a shift in this direction in 2025, telling the Financial Times that a marketplace where artists opt in and upload their art for licensed training would happen, with contributing artists receiving compensation. Shutterstock pioneered one version of this model in 2021, establishing a Contributor Fund that compensates artists whose work appears in licensed datasets used to train AI models. The company's partnership with OpenAI provides training data drawn from Shutterstock's library, with earnings distributed to hundreds of thousands of contributors. Significantly, only about 1% of contributors have chosen to opt out of data deals.
Yet this model faces challenges. Individual payouts remain minuscule for most contributors because image generation models train on hundreds of millions of images. Unless a particular artist's work demonstrably influences model outputs in measurable ways, determining fair compensation becomes arbitrary. Getty Images took a different approach, using content from its own platform to build proprietary generative AI models, with revenue distributed equally between its AI partner Bria and the data owners and creators.
The fundamental challenge for opt-in systems is achieving sufficient scale. Generative models require enormous, diverse datasets to function effectively. If only a fraction of available creative work is opted in, will the resulting models match the quality of those trained on scraped web data? And if opt-in datasets command premium prices whilst scraped data remains free (or legally defensible under fair use), market forces may drive AI companies toward the latter.
Micropayment Mechanisms
Both collective licensing and opt-in systems face a common problem: they require upfront agreements about compensation before training begins. Micropayment mechanisms propose a different model: pay creators each time their work is accessed, whether during initial training, model fine-tuning, or ongoing crawling for updated data.
Cloudflare demonstrated one implementation in 2025 with its Pay Per Crawl system, which allows AI companies to pay per crawl or be blocked. The mechanism uses the HTTP 402 status code (“Payment Required”) to implement automated payments: when a crawler requests access, it either pays the set price upfront or receives a payment-required response. This creates a marketplace where publishers define rates and AI firms decide whether the data justifies the cost.
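Conceptually, the crawler's side of that negotiation fits in a few lines. The sketch below assumes a hypothetical header for declaring a maximum price per request; Cloudflare's actual Pay Per Crawl field names and payment flow may differ.

```python
import requests

# Hypothetical header name; Cloudflare's real Pay Per Crawl fields may differ.
MAX_PRICE_HEADER = "x-crawler-max-price-usd"

def fetch_or_negotiate(url: str, max_price_usd: float) -> bytes | None:
    """Fetch a page, declaring the most the crawler is willing to pay per request.

    A 200 means the publisher accepted the offer (or charges nothing); a 402
    ("Payment Required") means the asking price exceeded our budget and the
    crawl should be skipped or escalated for negotiation.
    """
    response = requests.get(url, headers={MAX_PRICE_HEADER: f"{max_price_usd:.4f}"}, timeout=10)
    if response.status_code == 402:
        print(f"Publisher requires payment above ${max_price_usd:.4f} for {url}; skipping.")
        return None
    response.raise_for_status()
    return response.content
```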
The appeal of micropayments lies in their granularity. Instead of guessing the value of content in advance, publishers can set prices reflecting actual demand. For creators, this theoretically enables ongoing passive income as AI companies continually crawl the web for updated training data. Canva established a $200 million fund implementing a variant of this model, compensating creators who contribute to the platform's stock programme and allow their content for AI training.
Blockchain-based implementations promise to take micropayments further. Using cryptocurrencies like Bitcoin SV, creators could monetise data streams with continuous, automated compensation. Blockchain facilitates seamless token transfer from creators to developers whilst supporting fractional ownership. NFT smart contracts offer another mechanism for automated royalties: when artists mint NFTs, they can program a “creator share” into the contract, typically 5-10% of future resale values, which executes automatically on-chain.
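On a real chain that logic lives inside the smart contract itself; the sketch below only reproduces the underlying arithmetic, with the 7.5% creator share chosen arbitrarily from the 5-10% range mentioned above.

```python
def resale_royalty(sale_price: float, creator_share: float = 0.075) -> tuple[float, float]:
    """Split a secondary sale between the seller and the original creator.

    The 7.5% default sits in the 5-10% range typically programmed into NFT
    royalty contracts; the figure is illustrative, not a standard.
    """
    royalty = sale_price * creator_share
    return sale_price - royalty, royalty

seller_proceeds, creator_payment = resale_royalty(2_000.0)
print(seller_proceeds, creator_payment)  # 1850.0 150.0
```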
Yet micropayment systems face substantial technical and economic barriers. Transaction costs remain critical: if processing a payment costs more than the payment itself, the system collapses. Traditional financial infrastructure charges fees that make sub-cent transactions economically unviable. Whilst blockchain advocates argue that cryptocurrencies solve this through minimal transaction fees, widespread blockchain adoption faces regulatory uncertainty, environmental concerns about energy consumption, and user experience friction.
Attribution represents an even thornier problem. Micropayments require precisely tracking which works contribute to which model behaviours. But generative models don't work through direct copying; they learn statistical patterns across millions of examples. When DALL-E generates an image, which of the billions of training images “contributed” to that output? The computational challenge of maintaining such provenance at scale is formidable.
Furthermore, micropayment systems create perverse incentives. If AI companies must pay each time they access content, they're incentivised to scrape everything once, store it permanently, and never access the original source again. Without robust legal frameworks mandating micropayments and technical mechanisms preventing circumvention, voluntary adoption seems unlikely.
Copyright Complexity in a Fragmented World
Even the most elegant compensation models founder without legal frameworks that support or mandate them. Yet copyright law, designed for different technologies and business models, struggles to accommodate AI training. The challenges operate at multiple levels: ambiguous statutory language, inconsistent judicial interpretation, and fundamental tensions between exclusive rights and fair use exceptions.
The fair use doctrine epitomises this complexity. Judge Alsup's June 2025 ruling in Bartz v. Anthropic found that using books to train Claude was “exceedingly transformative” because the model learns patterns rather than reproducing text. Yet just months earlier, in Thomson Reuters v. ROSS Intelligence, Judge Bibas rejected fair use for AI training, concluding that using Westlaw headnotes to train a competing legal research product wasn't transformative. The distinction appears to turn on market substitution, but this creates uncertainty.
The U.S. Copyright Office's May 2025 report concluded that “there will not be a single answer regarding whether the unauthorized use of copyright materials to train AI models is fair use.” The report suggested a spectrum: noncommercial research training that doesn't enable reproducing original works in outputs likely qualifies as fair use, whilst copying expressive works from pirated sources to generate unrestricted competing content when licensing is available may not.
This lack of clarity creates enormous practical challenges. If courts eventually rule that AI training constitutes fair use across most contexts, compensation becomes entirely voluntary. Conversely, if courts rule broadly against fair use for AI training, compensation becomes mandatory, but the specific mechanisms remain undefined.
International variations multiply these complexities exponentially. The EU's text and data mining (TDM) exception permits reproduction and extraction of lawfully accessible copyrighted content for research and commercial purposes, provided rightsholders haven't opted out. The EU AI Act requires general-purpose AI model providers to implement policies respecting copyright law and to identify and respect opt-out reservations expressed through machine-readable means.
Significantly, the AI Act applies these obligations extraterritorially. Article 53(1)(c) states that “Any provider placing a general-purpose AI model on the Union market should comply with this obligation, regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place.” This attempts to close a loophole where AI companies train models in permissive jurisdictions, then deploy them in more restrictive markets.
Japan and Singapore have adopted particularly permissive approaches. Japan's Article 30-4 allows exploitation of works “in any way and to the extent considered necessary” for non-expressive purposes, applying to commercial generative AI training and leading Japan to be called a “machine learning paradise.” Singapore's Copyright Act Amendment of 2021 introduced a computational data analysis exception allowing commercial use, provided users have lawful access.
These divergent national approaches create regulatory arbitrage opportunities. AI companies can strategically locate training operations in jurisdictions with broad exceptions, insulating themselves from copyright liability whilst deploying models globally. Without greater international harmonisation, any compensation model implemented at scale faces severe fragmentation.
The Provenance Problem
Legal frameworks establish what compensation models are permitted or required, but technical infrastructure determines whether they're practically implementable. The single greatest technical barrier to fair compensation is provenance: reliably tracking which works contributed to which models and how those contributions influenced outputs.
The problem begins at data collection. Foundation models train on massive datasets assembled through web scraping, often via intermediaries like Common Crawl. LAION, the organisation behind datasets used to train Stable Diffusion, creates indexes by parsing Common Crawl's HTML for image tags and treating alt-text attributes as captions. Crucially, LAION stores only URLs and metadata, not the images themselves. When a model trains on LAION-5B's 5.85 billion image-text pairs, tracking specific contributions requires following URL chains that may break over time.
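The core of that pipeline is easy to illustrate. The sketch below extracts (image URL, alt-text) pairs from HTML in the way described above; LAION's production pipeline adds language detection, CLIP-based filtering, and deduplication, none of which is shown here.

```python
from html.parser import HTMLParser

class ImageCaptionExtractor(HTMLParser):
    """Collect (image URL, alt-text) pairs from an HTML page, LAION-style:
    the alt attribute is treated as a free caption for the linked image."""

    def __init__(self) -> None:
        super().__init__()
        self.pairs: list[tuple[str, str]] = []

    def handle_starttag(self, tag: str, attrs: list[tuple[str, str | None]]) -> None:
        if tag != "img":
            return
        attributes = dict(attrs)
        src, alt = attributes.get("src"), attributes.get("alt")
        if src and alt:
            # Only the URL and caption are kept: never the image bytes themselves.
            self.pairs.append((src, alt))

extractor = ImageCaptionExtractor()
extractor.feed('<p><img src="https://example.com/cat.jpg" alt="A cat on a windowsill"></p>')
print(extractor.pairs)
```

The brittleness is visible even in this toy version: if example.com later removes or moves cat.jpg, the stored pair survives but the image it points to does not.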
MIT's Data Provenance Initiative has conducted large-scale audits revealing systemic documentation failures: datasets are “inconsistently documented and poorly understood,” with creators “widely sourcing and bundling data without tracking or vetting their original sources, creator intentions, copyright and licensing status, or even basic composition and properties.” License misattribution is rampant, with one study finding license omission rates exceeding 68% and error rates around 50% on widely used dataset hosting sites.
Proposed technical solutions include metadata frameworks, cryptographic verification, and blockchain-based tracking. The Content Authenticity Initiative (CAI), founded by Adobe, The New York Times, and Twitter, promotes the Coalition for Content Provenance and Authenticity (C2PA) standard for provenance metadata. By 2025, the initiative reached 5,000 members, with Content Credentials being integrated into cameras from Leica, Nikon, Canon, Sony, and Panasonic, as well as content editors and newsrooms.
Sony announced the PXW-Z300 in July 2025, the world's first camcorder with C2PA standard support for video. This “provenance at capture” approach embeds verifiable metadata from the moment content is created. Yet C2PA faces limitations: it provides information about content origin and editing history, but not necessarily how that content influenced model behaviour.
Zero-knowledge proofs offer another avenue: they allow verifying data provenance without exposing underlying content, enabling rightsholders to confirm their work was used for training whilst preserving model confidentiality. Blockchain-based solutions extend these concepts through immutable ledgers and smart contracts. But blockchain faces significant adoption barriers: regulatory uncertainty around cryptocurrencies, substantial energy consumption, and user experience complexity.
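Genuine zero-knowledge constructions are far beyond a short example, but a salted hash commitment (which is not zero-knowledge) conveys the basic shape of the idea: a dataset builder publishes commitments to what was ingested, and a rights holder who holds the matching secret can later confirm inclusion without anyone exposing the corpus. Every detail below, from the registry to the salt handling, is an illustrative assumption.

```python
import hashlib

def commit(work_bytes: bytes, salt: bytes) -> str:
    """Publish only a salted hash of a training item, not the item itself."""
    return hashlib.sha256(salt + work_bytes).hexdigest()

# The dataset builder publishes commitments; a rights holder who later learns the
# salt for their own work can check inclusion without the corpus being exposed.
published = {commit(b"my-illustration.png bytes", salt=b"\x01" * 16)}
claim = commit(b"my-illustration.png bytes", salt=b"\x01" * 16)
print("work was committed to the training set:", claim in published)
```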
Perhaps most fundamentally, even perfect provenance tracking during training doesn't solve the attribution problem for outputs. Generative models learn statistical patterns from vast datasets, producing novel content that doesn't directly copy any single source. Determining which training images contributed how much to a specific output isn't a simple accounting problem; it's a deep question about model internals that current AI research cannot fully answer.
When Jurisdiction Meets the Jurisdictionless
Even if perfect provenance existed and legal frameworks mandated compensation, enforcement across borders poses perhaps the most intractable challenge. Copyright is territorial: by default, it restricts infringing conduct only within respective national jurisdictions. AI training is inherently global: data scraped from servers in dozens of countries, processed by infrastructure distributed across multiple jurisdictions, used to train models deployed worldwide.
Legal scholars have described the gap bluntly: “There is a loophole in the international copyright system that would permit large-scale copying of training data in one country where this activity is not infringing. Once the training is done and the model is complete, developers could then make the model available to customers in other countries, even if the same training activities would have been infringing if they had occurred there.”
OpenAI demonstrated this dynamic in defending against copyright claims in India's Delhi High Court, arguing it cannot be accused of infringement because it operates in a different jurisdiction and does not store or train data in India, despite its models being trained on materials sourced globally including from India.
The EU attempted to address this through extraterritorial application of copyright compliance obligations to any provider placing general-purpose AI models on the EU market, regardless of where training occurred. This represents an aggressive assertion of regulatory jurisdiction, but its enforceability against companies with no EU presence remains uncertain.
Harmonising enforcement through international agreements faces political and economic obstacles. Countries compete for AI industry investment, creating incentives to maintain permissive regimes. Japan and Singapore's liberal copyright exceptions reflect strategic decisions to position themselves as AI development hubs. The Berne Convention and TRIPS Agreement provide frameworks for dispute resolution, but they weren't designed for AI-specific challenges.
Practically, the most effective enforcement may come through market access restrictions. If major markets like the EU and U.S. condition market access on demonstrating compliance with compensation requirements, companies face strong incentives to comply regardless of where training occurs. Trade agreements offer another enforcement lever: if copyright violations tied to AI training are framed as trade issues, WTO dispute resolution mechanisms could address them.
Building Workable Solutions
Given these legal, technical, and jurisdictional challenges, what practical steps could move toward fairer compensation? Several recommendations emerge from examining current initiatives and barriers.
First, establish interoperable standards for provenance and licensing. The proliferation of incompatible systems (C2PA, blockchain solutions, RSL, proprietary platforms) creates fragmentation. Industry coalitions should prioritise interoperability, ensuring that provenance metadata embedded by cameras and editing software can be read by datasets, respected by AI training pipelines, and verified by compensation platforms.
Second, expand opt-in platforms with transparent, tiered compensation. Shutterstock's Contributor Fund demonstrates that creators will participate when terms are clear and compensation reasonable. Platforms should offer tiered licensing: higher payments for exclusive high-quality content, moderate rates for non-exclusive inclusion, minimum rates for participation in large-scale datasets.
Third, support collective licensing organisations with statutory backing. Voluntary collectives face adoption challenges when AI companies can legally avoid them. Governments should consider statutory licensing schemes for AI training, similar to mechanical licenses in music, where rates are set through administrative processes and companies must participate.
Fourth, mandate provenance and transparency for deployed models. The EU AI Act's requirements for general-purpose AI providers to publish summaries of training content should be adopted globally and strengthened. Mandates should include specific provenance information: which datasets were used, where they originated, what licensing terms applied, and whether rightsholders opted out.
Fifth, fund research on technical solutions for output attribution. Governments, industry consortia, and research institutions should invest in developing methods for tracing model outputs back to specific training inputs. Whilst perfect attribution may be impossible, improving from current baselines would enable more sophisticated compensation models.
Sixth, harmonise international copyright frameworks through new treaties or Berne Convention updates. WIPO should convene negotiations on AI-specific provisions addressing training data, establishing minimum compensation standards, clarifying TDM exception scope, and creating mechanisms for cross-border licensing and enforcement.
Seventh, create market incentives for ethical AI training. Governments could offer tax incentives, research grants, or procurement preferences to AI companies demonstrating proper licensing and compensation. Industry groups could establish certification programmes verifying AI models were trained on ethically sourced data.
Eighth, establish pilot programmes testing different compensation models at scale. Rather than attempting to impose single solutions globally, support diverse experiments: collective licensing in music and news publishing, opt-in platforms for visual arts, micropayment systems for scientific datasets.
Ninth, build bridges between stakeholder communities. AI companies, creator organisations, legal scholars, technologists, and policymakers often operate in silos. Regular convenings bringing together diverse perspectives can identify common ground. The Content Authenticity Summit's model of uniting standards bodies, industry, and creators demonstrates how cross-stakeholder collaboration can drive progress.
Tenth, recognise that perfect systems are unattainable and imperfect ones are necessary. No compensation model will satisfy everyone. The goal should not be finding the single optimal solution but creating an ecosystem of options that together provide better outcomes than the current largely uncompensated status quo.
Building Compensation Infrastructure for an AI-Driven Future
When Judge Alsup ruled that training Claude on copyrighted books constituted fair use, he acknowledged that courts “have never confronted a technology that is both so transformative yet so potentially dilutive of the market for the underlying works.” This encapsulates the central challenge: AI training is simultaneously revolutionary and derivative, creating immense value whilst building on the unconsented work of millions.
Yet the conversation is shifting. The RSL Standard, Shutterstock's Contributor Fund, Stability AI's evolving position, the EU AI Act's transparency requirements, and proliferating provenance standards all signal recognition that the status quo is unsustainable. Creators cannot continue subsidising AI development through unpaid training data, and AI companies cannot build sustainable businesses on legal foundations that may shift beneath them.
The models examined here (collective licensing, opt-in contribution systems, and micropayment mechanisms) each offer partial solutions. Collective licensing provides administrative efficiency and bargaining power but requires statutory backing. Opt-in systems respect creator autonomy but face scaling challenges. Micropayments offer precision but demand technical infrastructure that doesn't yet exist at scale.
The barriers are formidable: copyright law's territorial nature clashes with AI training's global scope, fair use doctrine creates unpredictability, provenance tracking technologies lag behind modern training pipelines, and international harmonisation faces political obstacles. Yet none of these barriers are insurmountable. Standards coalitions are building provenance infrastructure, courts are beginning to delineate fair use boundaries, and legislators are crafting frameworks balancing creator rights and innovation incentives.
What's required is sustained commitment from all stakeholders. AI companies must recognise that sustainable business models require legitimacy that uncompensated training undermines. Creators must engage pragmatically, acknowledging that maximalist positions may prove counterproductive whilst articulating clear minimum standards. Policymakers must navigate between protecting creators and enabling innovation. Technologists must prioritise interoperability, transparency, and attribution.
The stakes extend beyond immediate financial interests. How societies resolve the compensation question will shape AI's trajectory and the creative economy's future. If AI companies can freely appropriate creative works without payment, creative professions may become economically unsustainable, reducing the diversity of new creative production that future AI systems would train on. Conversely, if compensation requirements become so burdensome that only the largest companies can comply, AI development concentrates further.
The fairest outcomes will emerge from recognising AI training as neither pure infringement demanding absolute prohibition nor pure fair use permitting unlimited free use, but rather as a new category requiring new institutional arrangements. Just as radio prompted collective licensing organisations and digital music led to new streaming royalty mechanisms, AI training demands novel compensation structures tailored to its unique characteristics.
Building these structures is both urgent and ongoing. It's urgent because training continues daily on vast scales, with each passing month making retrospective compensation more complicated. It's ongoing because AI technology continues evolving, and compensation models must adapt accordingly. The perfect solution doesn't exist, but workable solutions do. The question is whether stakeholders can muster the collective will, creativity, and compromise necessary to implement them before the window of opportunity closes.
The artists whose work trained today's AI models deserve compensation. The artists whose work will train tomorrow's models deserve clear frameworks ensuring fair treatment from the outset. Whether we build those frameworks will determine not just the economic sustainability of creative professions, but the legitimacy and social acceptance of AI technologies reshaping how humans create, communicate, and imagine.
References & Sources
- AI and copyright: exploring exceptions for text and data mining
- AI Companies Prevail in Path-Breaking Decisions on Fair Use | Crowell & Moring LLP
- AI Training and Copyright Infringement: Solutions from Asia | TechPolicy.Press
- AI's Passport Problem | Davis Wright Tremaine
- All The Photo Companies That Have Struck Licensing Deals With AI Firms | PetaPixel
- Artists can now opt out of generative AI. It's not enough.
- ASCAP & BMI Music Licensing Guide | Soundsuit
- ASCAP, BMI and SOCAN Announce Alignment on AI Registration Policies
- Berne Convention for the Protection of Literary and Artistic Works
- Blockchain Royalties: How NFTs Can Be Programmed to Reward Creators | Algorand
- Bringing transparency to the data used to train artificial intelligence | MIT Sloan
- Can NFTs Crack Royalties And Give More Value To Artists? | Consensys
- Cloudflare's Pay Per Crawl Micropayment System
- Content Authenticity Initiative
- Copyright Office Weighs In on AI Training and Fair Use | Skadden
- Creator Groups Appeal for Action on Violations of the Berne Convention
- Cross-Border Copyright Enforcement in the Age of Streaming and AI | PatentPC
- Data Authenticity, Consent, & Provenance for AI are all broken
- Exploring 12 Million Images Used to Train Stable Diffusion – Waxy.org
- Fair Use and AI Training: Two Recent Decisions | Skadden
- Federal judge rules in AI company Anthropic's favor | NPR
- Is there a way to pay content creators whose work is used to train AI?
- LAION-5B: A NEW ERA OF OPEN LARGE-SCALE MULTI-MODAL DATASETS
- LAION vs Kneschke: Building public datasets is covered by the TDM exception
- Licensing for AI Training: Insights from the U.S. Copyright Office Report | Medium
- New RSL Web Standard and Collective Rights Organization
- Opt-Out Approaches to AI Training: A False Compromise
- Part 3: Generative AI Training – U.S. Copyright Office
- Provenance Tracking in Large-Scale Machine Learning Systems
- Reflecting on the 2025 Content Authenticity Summit at Cornell Tech
- Shutterstock expands deal with OpenAI | TechCrunch
- Shutterstock Expands Partnership with OpenAI
- Shutterstock's New Generative AI Tool Will Pay Artists
- Stability AI Reverses Stance: Calls for Licensing and Compensation
- Training Generative AI Models on Copyrighted Works Is Fair Use
- UK Music Chief Outlines Concerns Over “Opting Out” of AI Training
- Unlocking AI's economic potential through BSV micropayments
- What are NFT royalties? | Coinbase
- What is open data? How Common Crawl and LAION shape open source AI training
- 5,000 members: building momentum for a more trustworthy digital world

Tim Green UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk