The Impossible Promise: Why AI Can't Truly Forget Your Data

In December 2024, the European Data Protection Board gathered in Brussels to wrestle with a question that sounds deceptively simple: Can artificial intelligence forget? The board's Opinion 28/2024, released on 18 December, attempted to provide guidance on when AI models could be considered “anonymous” and how personal data rights apply to these systems. Yet beneath the bureaucratic language lay an uncomfortable truth—the very architecture of modern AI makes the promise of data deletion fundamentally incompatible with how these systems actually work.

The stakes couldn't be higher. Large language models like ChatGPT, Claude, and Gemini have been trained on petabytes of human expression scraped from the internet, often without consent. Every tweet, blog post, forum comment, and academic paper became training data for systems that now shape everything from medical diagnoses to hiring decisions. As Seth Neel, Assistant Professor at Harvard Business School and head of the Trustworthy AI Lab, explains, “Machine unlearning is really about computation more than anything else. It's about efficiently removing the influence of that data from the model without having to retrain it from scratch.”

But here's the catch: unlike a traditional database where you can simply delete a row, AI models don't store information in discrete, removable chunks. They encode patterns across billions of parameters, each one influenced by millions of data points. Asking an AI to forget specific information is like asking a chef to remove the salt from a baked cake—theoretically possible if you start over, practically impossible once it's done.

The California Experiment

In September 2024, California became the first state to confront this paradox head-on. Assembly Bill 1008, signed into law by Governor Gavin Newsom on 28 September, expanded the definition of “personal information” under the California Privacy Rights Act to include what lawmakers called “abstract digital formats”—model weights, tokens, and other outputs derived from personal data. The law, which took effect on 1 January 2025, grants Californians the right to request deletion of their data even after it's been absorbed into an AI model's neural pathways.

The legislation sounds revolutionary on paper. For the first time, a major jurisdiction legally recognised that AI models contain personal information in their very structure, not just in their training datasets. But the technical reality remains stubbornly uncooperative. As Ken Ziyu Liu, a PhD student at Stanford who authored “Machine Unlearning in 2024,” notes in his influential blog post from May 2024, “Evaluating unlearning on LLMs had been more of an art than science. The key issue has been the desperate lack of datasets and benchmarks for unlearning evaluation.”

The California Privacy Protection Agency, which voted to support the bill, acknowledged these challenges but argued that technical difficulty shouldn't exempt companies from privacy obligations. Yet critics point out that requiring companies to retrain massive models after each deletion request could cost millions of pounds and consume enormous computational resources—effectively making compliance economically unfeasible for all but the largest tech giants.

The European Paradox

Across the Atlantic, European regulators have been grappling with similar contradictions. The General Data Protection Regulation's Article 17, the famous “right to be forgotten,” predates the current AI boom by several years. When it was written, erasure meant something straightforward: find the data, delete it, confirm it's gone. But AI has scrambled these assumptions entirely.

The EDPB's December 2024 opinion attempted to thread this needle by suggesting that AI models should be assessed for anonymity on a “case by case basis.” If a model makes it “very unlikely” to identify individuals or extract their personal data through queries, it might be considered anonymous and thus exempt from deletion requirements. But this raises more questions than it answers. How unlikely is “very unlikely”? Who makes that determination? And what happens when adversarial attacks can coax models into revealing training data they supposedly don't “remember”?

Reuben Binns, Associate Professor at Oxford University's Department of Computer Science and former Postdoctoral Research Fellow in AI at the UK's Information Commissioner's Office, has spent years studying these tensions between privacy law and technical reality. His research on contextual integrity and data protection reveals a fundamental mismatch between how regulations conceptualise data and how AI systems actually process information.

Meanwhile, the Hamburg Data Protection Authority has taken a controversial stance, maintaining that large language models don't contain personal data at all and therefore aren't subject to deletion rights. This position directly contradicts California's approach and highlights the growing international fragmentation in AI governance.

The Unlearning Illusion

The scientific community has been working overtime to solve what they call the “machine unlearning” problem. In 2024 alone, researchers published dozens of papers proposing various techniques: gradient-based methods, data attribution algorithms, selective retraining protocols. Google DeepMind's Eleni Triantafillou, a senior research scientist who co-organised the first NeurIPS Machine Unlearning Challenge in 2023, has been at the forefront of these efforts.

Yet even the most promising approaches come with significant caveats. Triantafillou's 2024 paper “Are we making progress in unlearning?” reveals a sobering reality: current unlearning methods often fail to completely remove information, can degrade model performance unpredictably, and may leave traces that sophisticated attacks can still exploit. The paper, co-authored with researchers including Peter Kairouz and Fabian Pedregosa from Google DeepMind, suggests that true unlearning might require fundamental architectural changes to how we build AI systems.

The challenge becomes even more complex when dealing with foundation models—the massive, general-purpose systems that underpin most modern AI applications. These models learn abstract representations that can encode information about individuals in ways that are nearly impossible to trace or remove. A model might not explicitly “remember” that John Smith lives in Manchester, but it might have learned patterns from thousands of social media posts that allow it to make accurate inferences about John Smith when prompted correctly.

The Privacy Theatre

OpenAI's approach to data deletion requests reveals the theatrical nature of current “solutions.” The company allows users to request deletion of their personal data and offers an opt-out from training. According to their data processing addendum, API customer data is retained for a maximum of thirty days before automatic deletion. Chat histories can be deleted, and conversations with chat history disabled are removed after thirty days.

But what does this actually accomplish? The data used to train GPT-4 and other models is already baked in. Deleting your account or opting out today doesn't retroactively remove your influence from models trained yesterday. It's like closing the stable door after the horse has not only bolted but has been cloned a million times and distributed globally.

This performative compliance extends across the industry. Companies implement deletion mechanisms that remove data from active databases while knowing full well that the same information persists in model weights, embeddings, and latent representations. They offer privacy dashboards and control panels that provide an illusion of agency while the underlying reality remains unchanged: once your data has been used to train a model, removing its influence is computationally intractable at scale.

The Copyright Collision

The unlearning debate has collided head-on with copyright law in ways that nobody fully anticipated. When The New York Times filed its landmark lawsuit against OpenAI and Microsoft on 27 December 2023, it didn't just seek compensation—it demanded something far more radical: the complete destruction of all ChatGPT datasets containing the newspaper's copyrighted content. This extraordinary demand, if granted by federal judge Sidney Stein, would effectively require OpenAI to “untrain” its models, forcing the company to rebuild from scratch using only authorised content.

The Times' legal team believes their articles represent one of the largest sources of copyrighted text in ChatGPT's training data, with the latest GPT models trained on trillions of words. In March 2025, Judge Stein rejected OpenAI's motion to dismiss, allowing the copyright infringement claims to proceed to trial. The stakes are astronomical—the newspaper seeks “billions of dollars in statutory and actual damages” for what it calls the “unlawful copying and use” of its journalism.

But the lawsuit has exposed an even deeper conflict about data preservation and privacy. The Times has demanded that OpenAI “retain consumer ChatGPT and API customer data indefinitely”—a requirement that OpenAI argues “fundamentally conflicts with the privacy commitments we have made to our users.” This creates an extraordinary paradox: copyright holders demand permanent data retention for litigation purposes, while privacy advocates and regulations require data deletion. The two demands are mutually exclusive, yet both are being pursued through the courts simultaneously.

OpenAI's defence rests on the doctrine of “fair use,” with company lawyer Joseph Gratz arguing that ChatGPT “isn't a document retrieval system. It is a large language model.” The company maintains that regurgitating entire articles “is not what it is designed to do and not what it does.” Yet the Times has demonstrated instances where ChatGPT can reproduce substantial portions of its articles nearly verbatim—evidence that the model has indeed “memorised” copyrighted content.

This legal conflict has exposed a fundamental tension: copyright holders want their content removed from AI systems, while privacy advocates want personal information deleted. Both demands rest on the assumption that selective forgetting is technically feasible. Ken Liu's research at Stanford highlights this convergence: “The field has evolved from training small convolutional nets on face images to training giant language models on pay-walled, copyrighted, toxic, dangerous, and otherwise harmful content, all of which we may want to 'erase' from the ML models.”

But the technical mechanisms for copyright removal and privacy deletion are essentially the same—and equally problematic. You can't selectively lobotomise an AI any more than you can unbake that cake. The models that power ChatGPT, Claude, and other systems don't have a delete key for specific memories. They have patterns, weights, and associations distributed across billions of parameters, each one shaped by the entirety of their training data.

The implications extend far beyond The New York Times. Publishers worldwide are watching this case closely, as are AI companies that have built their business models on scraping the open web. If the Times succeeds in its demand for dataset destruction, it could trigger an avalanche of similar lawsuits that would fundamentally reshape the AI industry. Conversely, if OpenAI prevails with its fair use defence, it could establish a precedent that essentially exempts AI training from copyright restrictions—an outcome that would devastate creative industries already struggling with digital disruption.

The DAIR Perspective

Timnit Gebru, founder of the Distributed Artificial Intelligence Research Institute (DAIR), offers a different lens through which to view the unlearning problem. Since launching DAIR in December 2021 after her controversial departure from Google, Gebru has argued that the issue isn't just technical but structural. The concentration of AI development in a handful of massive corporations means that decisions about data use, model training, and deletion capabilities are made by entities with little accountability to the communities whose data they consume.

“One of the biggest issues in AI right now is exploitation,” Gebru noted in a 2024 interview. She points to content moderators in Nairobi earning as little as $1.50 per hour to clean training data for tech giants, and the millions of internet users whose creative output has been absorbed without consent or compensation. From this perspective, the inability to untrain models isn't a bug—it's a feature of systems designed to maximise data extraction while minimising accountability.

DAIR's research focuses on alternative approaches to AI development that prioritise community consent and local governance. Rather than building monolithic models trained on everything and owned by no one, Gebru advocates for smaller, purpose-specific systems where data provenance and deletion capabilities are built in from the start. It's a radically different vision from the current paradigm of ever-larger models trained on ever-more data.

The Contextual Integrity Problem

Helen Nissenbaum, the Andrew H. and Ann R. Tisch Professor at Cornell Tech and architect of the influential “contextual integrity” framework for privacy, brings yet another dimension to the unlearning debate. Her theory, which defines privacy not as secrecy but as appropriate information flow within specific contexts, suggests that the problem with AI isn't just that it can't forget—it's that it doesn't understand context in the first place.

“We say appropriate data flows serve the integrity of the context,” Nissenbaum explains. When someone shares information on a professional networking site, they have certain expectations about how that information will be used. When the same data gets scraped to train a general-purpose AI that might be used for anything from generating marketing copy to making employment decisions, those contextual boundaries are shattered.

Speaking at the 6th Annual Symposium on Applications of Contextual Integrity in September 2024, Nissenbaum argued that the massive scale of AI systems makes contextual appropriateness impossible to maintain. “Digital systems have been big for a while, but they've become more massive with AI, and even more so with generative AI. People feel an onslaught, and they may express their concern as, 'My privacy is violated.'”

The contextual integrity framework suggests that even perfect unlearning wouldn't solve the deeper problem: AI systems that treat all information as fungible training data, stripped of its social context and meaning. A medical record, a love letter, a professional résumé, and a casual tweet all become undifferentiated tokens in the training process. No amount of post-hoc deletion can restore the contextual boundaries that were violated in the collection and training phase.

The Hugging Face Approach

Margaret Mitchell, Chief Ethics Scientist at Hugging Face since late 2021, has been working on a different approach to the unlearning problem. Rather than trying to remove data from already-trained models, Mitchell's team focuses on governance and documentation practices that make models' limitations and training data transparent from the start.

Mitchell pioneered the concept of “Model Cards”—standardised documentation that accompanies AI models to describe their training data, intended use cases, and known limitations. This approach doesn't solve the unlearning problem, but it does something arguably more important: it makes visible what data went into a model and what biases or privacy risks might result.

“Open-source AI carries as many benefits, and as few harms, as possible,” Mitchell stated in her 2023 TIME 100 AI recognition. At Hugging Face, this philosophy translates into tools and practices that give users more visibility into and control over AI systems, even if perfect unlearning remains elusive. The platform's emphasis on reproducibility and transparency stands in stark contrast to the black-box approach of proprietary systems.

Mitchell's work on data governance at Hugging Face includes developing methods to track data provenance, identify potentially problematic training examples, and give model users tools to understand what information might be encoded in the systems they're using. While this doesn't enable true unlearning, it does enable informed consent and risk assessment—prerequisites for any meaningful privacy protection in the AI age.

The Technical Reality Check

Let's be brutally specific about why unlearning is so difficult. Modern large language models like GPT-4 contain hundreds of billions of parameters. Each parameter is influenced by millions or billions of training examples. The information about any individual training example isn't stored in any single location—it's diffused across the entire network in subtle statistical correlations.

Consider a simplified example: if a model was trained on text mentioning “Sarah Johnson, a doctor in Leeds,” that information doesn't exist as a discrete fact the model can forget. Instead, it slightly adjusts thousands of parameters governing associations between concepts like “Sarah,” “Johnson,” “doctor,” “Leeds,” and countless related terms. These adjustments influence how the model processes entirely unrelated text. Removing Sarah Johnson's influence would require identifying and reversing all these minute adjustments—without breaking the model's ability to understand that doctors exist in Leeds, that people named Sarah Johnson exist, or any of the other valid patterns learned from other sources.
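
To make that diffusion concrete, here is a minimal sketch in Python (using PyTorch and a deliberately tiny stand-in network, not a real language model): compute the gradient that a single training example produces and count how many parameters it would nudge.

```python
# A toy illustration of why one training example's influence is smeared across
# the whole network rather than stored in a single deletable location.
# The network, data, and sizes below are arbitrary stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny two-layer network standing in for a model's billions of parameters.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

# One "training example", standing in for a sentence such as
# "Sarah Johnson, a doctor in Leeds".
x, y = torch.randn(1, 64), torch.randn(1, 64)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()  # gradients describe how this single example nudges every weight

total = sum(p.numel() for p in model.parameters())
touched = sum(int((p.grad != 0).sum()) for p in model.parameters())
print(f"{touched} of {total} parameters receive a non-zero update from one example")
# Nearly all of them do, which is why "deleting" the example later means
# reversing tiny adjustments scattered across the entire network.
```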

Seth Neel's research at Harvard has produced some of the most rigorous work on this problem. His 2021 paper “Descent-to-Delete: Gradient-Based Methods for Machine Unlearning” demonstrated that even with complete access to a model's architecture and training process, selectively removing information is computationally expensive and often ineffective. His more recent work on “Adaptive Machine Unlearning” shows that the problem becomes exponentially harder as models grow larger and training datasets become more complex.
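
Most practical proposals in this line of work are approximations rather than exact deletion. One common heuristic, sketched below in generic form (this is an illustration of the general idea, not Neel's specific algorithms), pushes the model's loss up on a “forget set” while keeping it low on retained data; it is cheap, but it guarantees neither removal nor preserved accuracy.

```python
# A hedged sketch of approximate "gradient ascent" unlearning: raise the loss on
# examples to be forgotten while lowering it on examples to be kept.
# Function names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def approximate_unlearn(model, forget_batch, retain_batch, lr=1e-4, steps=10):
    loss_fn = nn.functional.cross_entropy
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    x_forget, y_forget = forget_batch   # data whose influence we want removed
    x_retain, y_retain = retain_batch   # data whose influence we want preserved
    for _ in range(steps):
        optimiser.zero_grad()
        # The negative sign pushes the loss *up* on the forget set (gradient ascent),
        # while the retain term tries to hold overall performance steady.
        loss = -loss_fn(model(x_forget), y_forget) + loss_fn(model(x_retain), y_retain)
        loss.backward()
        optimiser.step()
    return model
```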

“The initial research explorations were primarily driven by Article 17 of GDPR since 2014,” notes Ken Liu in his comprehensive review of the field. “A decade later in 2024, user privacy is no longer the only motivation for unlearning.” The field has expanded to encompass copyright concerns, safety issues, and the removal of toxic or harmful content. Yet despite this broadened focus and increased research attention, the fundamental technical barriers remain largely unchanged.

The Computational Cost Crisis

Even if perfect unlearning were technically possible, the computational costs would be staggering. Training GPT-4 reportedly cost over $100 million in computational resources. Retraining the model to remove even a small amount of data would require similar resources. Now imagine doing this for every deletion request from millions of users.

The environmental implications are equally troubling. Training large AI models already consumes enormous amounts of energy, contributing significantly to carbon emissions. If companies were required to retrain models regularly to honour deletion requests, the environmental cost could be catastrophic. We'd be burning fossil fuels to forget information—a dystopian irony that highlights the unsustainability of current approaches.

Some researchers have proposed “sharding” approaches where models are trained on separate data partitions that can be individually retrained. But this introduces its own problems: reduced model quality, increased complexity, and the fundamental issue that information still leaks across shards through shared preprocessing, architectural choices, and validation procedures.
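
In outline, a sharded design of this kind works as sketched below; the function names and the majority-vote aggregation are illustrative assumptions, not any particular company's implementation.

```python
# A minimal sketch of sharded training: each shard gets its own model, so honouring
# a deletion request only requires retraining the one shard that saw the record.
from typing import Callable, List, Sequence

def train_sharded(records: Sequence, n_shards: int, train_fn: Callable):
    shards = [list(records[i::n_shards]) for i in range(n_shards)]
    models = [train_fn(shard) for shard in shards]
    return shards, models

def delete_record(record, shards: List[list], models: List, train_fn: Callable):
    for i, shard in enumerate(shards):
        if record in shard:
            shard.remove(record)
            models[i] = train_fn(shard)  # retrain only the affected shard
    return models

def predict(models: List, x):
    # Aggregate shard predictions by majority vote.
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)
```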

The Regulatory Reckoning

As 2025 unfolds, regulators worldwide are being forced to confront the gap between privacy law's promises and AI's technical realities. The European Data Protection Board's December 2024 opinion attempted to provide clarity but mostly highlighted the contradictions. The board suggested that legitimate interest might serve as a legal basis for AI training in some cases—such as cybersecurity or conversational agents—but only with strict necessity and rights balancing.

Yet the opinion also acknowledged that determining whether an AI model contains personal data requires case-by-case assessment by data protection authorities. Given the thousands of AI models being developed and deployed, this approach seems practically unworkable. It's like asking food safety inspectors to individually assess every grain of rice for contamination.

California's AB 1008 takes a different approach, simply declaring that AI models do contain personal information and must be subject to deletion rights. But the law provides little guidance on how companies should actually implement this requirement. The result is likely to be a wave of litigation as courts try to reconcile legal mandates with technical impossibilities.

The Italian Garante's €15 million fine against OpenAI in December 2024, announced just two days after the EDPB opinion, signals that European regulators are losing patience with technical excuses. The fine was accompanied by corrective measures requiring OpenAI to implement age verification and improve transparency about data processing. But notably absent was any requirement for true unlearning capabilities—perhaps a tacit acknowledgment that such requirements would be unenforceable.

The Adversarial Frontier

The unlearning problem becomes even more complex when we consider adversarial attacks. Research has repeatedly shown that even when models appear to have “forgotten” information, sophisticated prompting techniques can often extract it anyway. This isn't surprising—if the information has influenced the model's parameters, traces remain even after attempted deletion.

In 2024, researchers demonstrated that large language models could be prompted to regenerate verbatim text from their training data, even when companies claimed that data had been “forgotten.” These extraction attacks work because the information isn't truly gone—it's just harder to access through normal means. It's like shredding a document but leaving the shreds in a pile; with enough effort, the original can be reconstructed.
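
A crude version of such a probe is straightforward to write. The sketch below (using the small, publicly available GPT-2 model via the Hugging Face transformers library purely as a stand-in; the prompt and matching rule are illustrative, and real extraction attacks are far more sophisticated) simply asks whether a model completes a known passage verbatim when given its opening words.

```python
# A hedged sketch of a verbatim-extraction probe: prompt the model with the start
# of a passage suspected to be in its training data and check whether the greedy
# continuation reproduces the original text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small public model as a stand-in
model = AutoModelForCausalLM.from_pretrained("gpt2")

def extraction_probe(prefix: str, known_continuation: str, max_new_tokens: int = 40) -> bool:
    inputs = tokenizer(prefix, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    completion = tokenizer.decode(new_tokens, skip_special_tokens=True)
    # A verbatim (or near-verbatim) match is evidence that the passage was memorised,
    # whether or not the provider claims the data has been "forgotten".
    return known_continuation.strip() in completion
```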

This vulnerability has serious implications for privacy and security. If deletion mechanisms can be circumvented through clever prompting, then compliance with privacy laws becomes meaningless. A company might honestly believe it has deleted someone's data, only to have that data extracted by a malicious actor using adversarial techniques.

The Innovation Imperative

Despite these challenges, innovation in unlearning continues at a breakneck pace. The NeurIPS 2023 Machine Unlearning Challenge, co-organised by Eleni Triantafillou and Fabian Pedregosa from Google DeepMind, attracted hundreds of submissions proposing novel approaches. The 2024 follow-up work, “Are we making progress in unlearning?” provides a sobering assessment: while techniques are improving, fundamental barriers remain.

Some of the most promising approaches involve building unlearning capabilities into models from the start, rather than trying to add them retroactively. This might mean architectural changes that isolate different types of information, training procedures that maintain deletion indexes, or hybrid systems that combine parametric models with retrievable databases.

But these solutions require starting over—something the industry seems reluctant to do given the billions already invested in current architectures. It's easier to promise future improvements than to acknowledge that existing systems are fundamentally incompatible with privacy rights.

The Alternative Futures

What if we accepted that true unlearning is impossible and designed systems accordingly? This might mean:

Expiring Models: AI systems that are automatically retrained on fresh data after a set period, with old versions retired. This wouldn't enable targeted deletion but would ensure that old information eventually ages out.

Federated Architectures: Instead of centralised models trained on everyone's data, federated systems where computation happens locally and only aggregated insights are shared. Apple's on-device Siri processing hints at this approach.

Purpose-Limited Systems: Rather than general-purpose models trained on everything, specialised systems trained only on consented, contextually appropriate data. This would mean many more models but much clearer data governance.

Retrieval-Augmented Generation: Systems that separate the knowledge base from the language model, allowing for targeted updates to the retrievable information while keeping the base model static (a brief sketch follows below).

Each approach has trade-offs. Expiring models waste computational resources. Federated systems can be less capable. Purpose-limited systems reduce flexibility. Retrieval augmentation can be manipulated. There's no perfect solution, only different ways of balancing capability against privacy.
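
For the retrieval-augmented option in particular, the privacy-relevant property is that facts live in an external store that supports genuine deletion, while the frozen language model only ever sees what the store still contains. A minimal sketch (the class and function names, and the toy keyword retrieval, are illustrative assumptions):

```python
# A minimal sketch of retrieval-augmented generation with a deletable knowledge base.
# Deleting a document here is a real deletion; the base model's weights never change.
from typing import Callable, Dict, List

class DeletableKnowledgeBase:
    def __init__(self) -> None:
        self._docs: Dict[str, str] = {}

    def add(self, doc_id: str, text: str) -> None:
        self._docs[doc_id] = text

    def delete(self, doc_id: str) -> None:
        self._docs.pop(doc_id, None)  # honouring a deletion request is straightforward

    def retrieve(self, query: str, k: int = 3) -> List[str]:
        # Toy keyword overlap; a production system would use vector search.
        words = query.lower().split()
        ranked = sorted(self._docs.values(),
                        key=lambda doc: sum(w in doc.lower() for w in words),
                        reverse=True)
        return ranked[:k]

def answer(query: str, kb: DeletableKnowledgeBase, generate: Callable[[str], str]) -> str:
    # `generate` is any frozen language model; it only sees retrieved context.
    context = "\n".join(kb.retrieve(query))
    return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```

The obvious limitation is the one this section keeps returning to: deleting documents from the store does nothing about whatever the base model already memorised during pre-training.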

The Trust Deficit

Perhaps the deepest challenge isn't technical but social: the erosion of trust between AI companies and the public. When OpenAI claims to delete user data while knowing that information persists in model weights, when Google promises privacy controls that don't actually control anything, when Meta talks about user choice while training on decades of social media posts—the gap between rhetoric and reality becomes a chasm.

This trust deficit has real consequences. EU regulators are considering increasingly stringent requirements. California's legislation is likely just the beginning of state-level action in the US. China is developing its own AI governance framework with potentially strict data localisation requirements. The result could be a fragmented global AI landscape where models can't be deployed across borders.

Margaret Mitchell at Hugging Face argues that rebuilding trust requires radical transparency: “We need to document not just what data went into models, but what data can't come out. We need to be honest about limitations, clear about capabilities, and upfront about trade-offs.”

The Human Cost

Behind every data point in an AI training set is a human being. Someone wrote that blog post, took that photo, composed that email. When we talk about the impossibility of unlearning, we're really talking about the impossibility of giving people control over their digital selves.

Consider the practical implications. A teenager's embarrassing social media posts from years ago, absorbed into training data, might influence AI systems for decades. A writer whose work was scraped without permission watches as AI systems generate derivative content, with no recourse for removal. A patient's medical forum posts, intended to help others with similar conditions, become part of systems used by insurance companies to assess risk.

Timnit Gebru's DAIR Institute has documented numerous cases where AI training has caused direct harm to individuals and communities. “The model fits all doesn't work,” Gebru argues. “It is a fictional argument that feeds a monoculture on tech and a tech monopoly.” Her research shows that the communities most likely to be harmed by AI systems—marginalised groups, Global South populations, minority language speakers—are also least likely to have any say in how their data is used.

The Global Fragmentation Crisis

The impossibility of AI unlearning is creating a regulatory Tower of Babel. Different jurisdictions are adopting fundamentally incompatible approaches to the same problem, threatening to fragment the global AI landscape into isolated regional silos.

In the United States, California's AB 1008 represents just the beginning. Other states are drafting their own AI privacy laws, each with different definitions of what constitutes personal information in an AI context and different requirements for deletion. Texas is considering legislation that would require AI companies to maintain “deletion capabilities” without defining what that means technically. New York's proposed AI accountability act includes provisions for “algorithmic discrimination audits” that would require examining how models treat different demographic groups—impossible without access to the very demographic data that privacy laws say should be deleted.

The European Union, meanwhile, is developing the AI Act alongside GDPR, creating a dual regulatory framework that companies must navigate. The December 2024 EDPB opinion suggests that models might be considered anonymous if they meet certain criteria, but member states are interpreting these criteria differently. France's CNIL has taken a relatively permissive approach, while Germany's data protection authorities demand stricter compliance. The Hamburg DPA's position that LLMs don't contain personal data at all stands in stark opposition to Ireland's DPA, which requested the EDPB opinion precisely because it believes they do.

China is developing its own approach, focused less on individual privacy rights and more on data sovereignty and national security. The Cyberspace Administration of China has proposed regulations requiring that AI models trained on Chinese citizens' data must store that data within China and provide government access for “security reviews.” This creates yet another incompatible framework that would require completely separate models for the Chinese market.

The result is a nightmare scenario for AI developers: models that are legal in one jurisdiction may be illegal in another, not because of their outputs but because of their fundamental architecture. A model trained to comply with California's deletion requirements might violate China's data localisation rules. A system designed for GDPR compliance might fail to meet emerging requirements in India or Brazil.

The Path Forward

So where does this leave us? The technical reality is clear: true unlearning in large AI models is currently impossible and likely to remain so with existing architectures. The legal landscape is fragmenting as different jurisdictions take incompatible approaches. The trust between companies and users continues to erode.

Yet this isn't cause for despair but for action. Acknowledging the impossibility of unlearning with current technology should spur us to develop new approaches, not to abandon privacy rights. This might mean:

Regulatory Honesty: Laws that acknowledge technical limitations while still holding companies accountable for data practices. This could include requirements for transparency, consent, and purpose limitation even if deletion isn't feasible. Rather than demanding the impossible, regulations could focus on preventing future misuse of data already embedded in models.

Technical Innovation: Continued research into architectures that enable better data governance, even if perfect unlearning remains elusive. The work of researchers like Seth Neel, Eleni Triantafillou, and Ken Liu shows that progress, while slow, is possible. New architectures might include built-in “forgetfulness” through techniques like differential privacy (sketched briefly below) or temporal degradation of weights.

Social Negotiation: Broader conversations about what we want from AI systems and what trade-offs we're willing to accept. Helen Nissenbaum's contextual integrity framework provides a valuable lens for these discussions. We need public forums where technologists, ethicists, policymakers, and citizens can wrestle with these trade-offs together.

Alternative Models: Support for organisations like DAIR that are exploring fundamentally different approaches to AI development, ones that prioritise community governance over scale. This might mean funding for public AI infrastructure, support for cooperative AI development models, or requirements that commercial AI companies contribute to public AI research.

Harm Mitigation: Since we can't remove data from trained models, we should focus on preventing and mitigating harms from that data's presence. This could include robust output filtering, use-case restrictions, audit requirements, and liability frameworks that hold companies accountable for harms caused by their models' outputs rather than their training data.
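
To make the differential privacy idea mentioned under technical innovation concrete: the core mechanism clips each individual example's contribution to a training update and adds calibrated noise, so that no single person's data is decisive. The sketch below shows only that single step (hyperparameters are illustrative; a production system would use a dedicated library such as Opacus and a proper privacy accountant):

```python
# A hedged sketch of one DP-SGD update step: clip per-example gradients, average,
# add Gaussian noise, then apply the update. Values here are illustrative only.
import torch

def dp_sgd_step(model, per_example_grads, lr=1e-3, clip_norm=1.0, noise_multiplier=1.1):
    # per_example_grads: one list of gradient tensors (matching model.parameters())
    # for each example in the batch.
    clipped = []
    for grads in per_example_grads:
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        scale = min(1.0, clip_norm / (norm + 1e-12))   # bound each example's influence
        clipped.append([g * scale for g in grads])
    batch_size = len(clipped)
    with torch.no_grad():
        for i, param in enumerate(model.parameters()):
            avg_grad = torch.stack([grads[i] for grads in clipped]).mean(dim=0)
            noise = torch.randn_like(avg_grad) * noise_multiplier * clip_norm / batch_size
            param -= lr * (avg_grad + noise)
```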

The promise that AI can forget your data is, at present, an impossible one. But impossible promises have a way of driving innovation. The question isn't whether AI will ever truly be able to forget—it's whether we'll develop systems that make forgetting unnecessary by respecting privacy from the start.

As we stand at this crossroads, the choices we make will determine not just the future of privacy but the nature of the relationship between humans and artificial intelligence. Will we accept systems that absorb everything and forget nothing, or will we demand architectures that respect the human need for privacy, context, and control?

The answer won't come from Silicon Valley boardrooms or Brussels regulatory chambers alone. It will emerge from the collective choices of developers, regulators, researchers, and users worldwide. The impossible promise of AI unlearning might just be the catalyst we need to reimagine what artificial intelligence could be—not an omniscient oracle that never forgets, but a tool that respects the very human need to be forgotten.



Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0000-0002-0156-9795 Email: tim@smarterarticles.co.uk
