The Right To Vanish: When Your Digital Self Refuses to Die

The European Union's General Data Protection Regulation enshrines something called the “right to be forgotten”. Codified in Article 17, this legal provision allows individuals to request that companies erase their personal data under specific circumstances. You can demand that Facebook delete your account, that Google delist outdated search results about you, that any number of digital platforms wipe your digital footprint from their servers. The process isn't always seamless, but the right exists, backed by regulatory teeth that can impose fines of up to 4 per cent of a company's global annual turnover for non-compliance.

But what happens when your data isn't just stored in a database somewhere, waiting to be deleted with the press of a button? What happens when it's been dissolved into the mathematical substrate of an artificial intelligence model, transformed into weights and parameters that no longer resemble the original information? Can you delete yourself from an AI's brain?

This question has evolved from theoretical curiosity to urgent policy debate. As AI companies have scraped vast swathes of the internet to train increasingly powerful models, millions of people have discovered their words, images, and creative works embedded in systems they never consented to join. The tension between individual rights and technological capability has never been starker.

The Technical Reality of AI Training

To understand why deleting data from AI systems presents unique challenges, you need to grasp how these systems learn. Modern AI models, particularly large language models and image generators, train on enormous datasets by adjusting billions or even trillions of parameters. During training, the model doesn't simply memorise your data; it extracts statistical patterns and relationships, encoding them into a complex mathematical structure.
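
To make this concrete, here is a minimal, hypothetical sketch in PyTorch of a single training step. The model, data, and dimensions are invented for illustration and stand in for systems billions of times larger; the point is that an individual example nudges shared parameters slightly rather than being filed away as a retrievable record.

```python
# Minimal sketch (not any production system's code) of one training step.
import torch
import torch.nn as nn

model = nn.Linear(100, 10)               # stand-in for billions of parameters
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

example = torch.randn(1, 100)            # one person's data point (hypothetical)
label = torch.tensor([3])

loss = loss_fn(model(example), label)    # how wrong the model is on this example
loss.backward()                          # gradients spread its influence everywhere
optimiser.step()                         # every parameter shifts slightly
optimiser.zero_grad()
# After millions of such steps, no single parameter "contains" the example;
# its influence is diffused across the entire weight matrix.
```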

Each model carries a kind of neural fingerprint: a diffused imprint of the data it has absorbed. Most individual traces dissolve into general patterns, yet fragments can persist, resurfacing through targeted extraction attacks or in rare cases where memorisation outweighs abstraction.

When GPT-4 learned to write, it analysed hundreds of billions of words from books, websites, and articles. When Stable Diffusion learned to generate images, it processed billions of image-text pairs from across the internet. The training process compressed all that information into model weights, creating what amounts to a statistical representation of patterns rather than a database of original content.

This fundamental architecture creates a problem: there's no straightforward way to locate and remove a specific piece of training data after the fact. Unlike a traditional database where you can search for a record and delete it, AI models don't maintain clear mappings between their outputs and their training inputs. The information has been transformed, distributed, and encoded across millions of interconnected parameters.

Some researchers have developed “machine unlearning” techniques that attempt to remove the influence of specific training data without retraining the entire model from scratch. These methods work by fine-tuning the model to “forget” certain information whilst preserving its other capabilities. However, these approaches remain largely experimental, computationally expensive, and imperfect. Verifying that data has truly been forgotten, rather than merely obscured, presents another layer of difficulty.

The UK's Information Commissioner's Office, in its guidance on AI and data protection updated in March 2023, acknowledges these technical complexities whilst maintaining that data protection principles still apply. The ICO emphasises accountability and governance, requiring organisations to consider how they'll handle data subject rights during the design phase of AI systems, not as an afterthought. This forward-looking approach recognises that retrofitting privacy protections into AI systems after deployment is far more difficult than building them in from the start.

The Legal Framework on Paper

Whilst the technical challenges are substantial, the legal framework ostensibly supports data deletion rights. Article 17 of the GDPR establishes that individuals have the right to obtain erasure of personal data “without undue delay” under several conditions. These include when the data is no longer necessary for its original purpose, when consent is withdrawn, when the data has been unlawfully processed, or when the data subject objects to processing without overriding legitimate grounds.

However, the regulation also specifies exceptions that create significant wiggle room. Processing remains permissible for exercising freedom of expression and information, for compliance with legal obligations, for reasons of public interest, for archiving purposes in the public interest, for scientific or historical research purposes, or for the establishment, exercise, or defence of legal claims. These carve-outs, particularly the research exception, have become focal points in debates about AI training.

These exceptions create significant grey areas when applied to AI training. Companies building AI systems frequently argue that their activities fall under scientific research exceptions or that removing individual data points would seriously impair their research objectives. The regulation itself acknowledges, in Article 17(3)(d), that the erasure obligation does not apply to research and archiving purposes “in so far as the right referred to in paragraph 1 is likely to render impossible or seriously impair the achievement of the objectives of that processing.”

The European Data Protection Board has not issued comprehensive guidance specifically addressing the right to erasure in AI training contexts, leaving individual data protection authorities to interpret how existing regulations apply to these novel technological realities. This regulatory ambiguity means that whilst the right to erasure theoretically extends to AI training data, its practical enforcement remains uncertain.

The regulatory picture grows more complicated when you look beyond Europe. In the United States, comprehensive federal data protection legislation doesn't exist, though several states have enacted their own privacy laws. California's Consumer Privacy Act, as amended by the California Privacy Rights Act, grants deletion rights similar in spirit to the GDPR's right to be forgotten, though with different implementation requirements and enforcement mechanisms. These state-level regulations create a patchwork of protections that AI companies must navigate, particularly when operating across jurisdictions.

The Current State of Opt-Out Mechanisms

Given these legal ambiguities and technical challenges, what practical options do individuals actually have? Recognising the growing concern about AI training, some companies have implemented opt-out mechanisms that allow individuals to request exclusion of their data from future model training. These systems vary dramatically in scope, accessibility, and effectiveness.

OpenAI, the company behind ChatGPT and GPT-4, offers a data opt-out form that allows individuals to request that their personal information not be used to train OpenAI's models. However, this mechanism only applies to future training runs, not to models already trained. If your data was used to train GPT-4, it remains encoded in that model's parameters indefinitely. The opt-out prevents your data from being used in GPT-5 or subsequent versions, but it doesn't erase your influence on existing systems.

Stability AI, which developed Stable Diffusion, faced significant backlash from artists whose work was used in training without permission or compensation. The company eventually agreed to honour opt-out requests gathered through Have I Been Trained, a search tool built by the organisation Spawning that allows artists to check whether their work appears in training datasets and to request its exclusion from future training. Again, this represents a forward-looking solution rather than retroactive deletion.

These opt-out mechanisms, whilst better than nothing, highlight a fundamental asymmetry: companies can use your data to train a model, derive commercial value from that model for years, and then honour your deletion request only for future iterations. You've already been incorporated into the system; you're just preventing further incorporation.

Moreover, the Electronic Frontier Foundation has documented numerous challenges with AI opt-out processes in their 2023 reporting on the subject. Many mechanisms require technical knowledge to implement, such as editing a site's robots.txt file or other metadata to block AI crawlers. This creates accessibility barriers that disadvantage less technically sophisticated users. Additionally, some AI companies ignore these technical signals or scrape data from third-party sources that don't respect opt-out preferences.

The fragmentation of opt-out systems creates additional friction. There's no universal registry where you can request removal from all AI training datasets with a single action. Instead, you must identify each company separately, navigate their individual processes, and hope they comply. For someone who's published content across multiple platforms over years or decades, comprehensive opt-out becomes practically impossible.

Consider the challenge facing professional photographers, writers, or artists whose work appears across hundreds of websites, often republished without their direct control. Even if they meticulously opt out from major AI companies, their content might be scraped from aggregator sites, social media platforms, or archived versions they can't access. The distributed nature of internet content means that asserting control over how your data is used for AI training requires constant vigilance and technical sophistication that most people simply don't possess.

The Economic and Competitive Dimensions

Beyond the technical and legal questions lies a thornier issue: money. The question of data deletion from AI training sets intersects uncomfortably with competitive dynamics in the AI industry. Training state-of-the-art AI models requires enormous datasets, substantial computational resources, and significant financial investment. Companies that have accumulated large, high-quality datasets possess a considerable competitive advantage.

If robust deletion rights were enforced retroactively, requiring companies to retrain models after removing individual data points, the costs could be astronomical. Training a large language model can cost millions of dollars in computational resources alone. Frequent retraining to accommodate deletion requests would multiply these costs dramatically, potentially creating insurmountable barriers for smaller companies whilst entrenching the positions of well-resourced incumbents.

This economic reality creates perverse incentives. Companies may oppose strong deletion rights not just to protect their existing investments but to prevent competitors from building alternative models with more ethically sourced data. If established players can maintain their edge through models trained on data obtained before deletion rights became enforceable, whilst new entrants struggle to accumulate comparable datasets under stricter regimes, the market could calcify around incumbents.

However, this argument cuts both ways. Some researchers and advocates contend that forcing companies to account for data rights would incentivise better data practices from the outset. If companies knew they might face expensive retraining obligations, they would have stronger motivations to obtain proper consent, document data provenance, and implement privacy-preserving training techniques from the beginning.

The debate also extends to questions of fair compensation. If AI companies derive substantial value from training data whilst data subjects receive nothing, some argue this constitutes a form of value extraction that deletion rights alone cannot address. This perspective suggests that deletion rights should exist alongside compensation mechanisms, creating economic incentives for companies to negotiate licensing rather than simply scraping data without permission.

Technical Solutions on the Horizon

If current systems can't adequately handle data deletion, what might future ones look like? The technical community hasn't been idle in addressing these challenges. Researchers across industry and academia are developing various approaches to make AI systems more compatible with data subject rights.

Machine unlearning represents the most direct attempt to solve the deletion problem. These techniques aim to remove the influence of specific training examples from a trained model without requiring complete retraining. Early approaches achieved this through careful fine-tuning, essentially teaching the model to produce outputs as if the deleted data had never been part of the training set. More recent research has drawn on influence functions, mathematical tools for estimating, and partially reversing, the impact of individual training examples on a model's parameters.
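
As a rough illustration of the fine-tuning family of unlearning methods, the sketch below raises a model's loss on a hypothetical “forget” set whilst preserving its behaviour on a “retain” set. All tensors and hyperparameters are invented, and published methods add many safeguards that this toy version omits.

```python
# Heavily simplified "unlearning by fine-tuning": push the loss up on data to
# be forgotten while keeping ordinary training pressure on data to be retained.
import torch
import torch.nn as nn

model = nn.Linear(100, 10)                       # stand-in for a trained model
optimiser = torch.optim.SGD(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

forget_x, forget_y = torch.randn(8, 100), torch.randint(0, 10, (8,))
retain_x, retain_y = torch.randn(64, 100), torch.randint(0, 10, (64,))

for _ in range(100):
    optimiser.zero_grad()
    forget_loss = -loss_fn(model(forget_x), forget_y)   # gradient ascent on forget set
    retain_loss = loss_fn(model(retain_x), retain_y)    # ordinary descent on retain set
    (forget_loss + retain_loss).backward()
    optimiser.step()
# Verifying that the forget set's influence is genuinely gone, rather than
# merely masked, remains an open research problem.
```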

Research published in academic journals in 2023 documented progress in making machine unlearning more efficient and verifiable, though researchers acknowledged significant limitations. Complete verification that data has been truly forgotten remains an open problem, and unlearning techniques can degrade model performance if applied too broadly or repeatedly. The computational costs, whilst lower than full retraining, still present barriers to widespread implementation, particularly for frequent deletion requests.

Privacy-preserving machine learning techniques offer a different approach. Rather than trying to remove data after training, these methods aim to train models in ways that provide stronger privacy guarantees from the beginning. Differential privacy, for instance, adds carefully calibrated noise during training to ensure that the model's outputs don't reveal information about specific training examples. Federated learning allows models to train across decentralised data sources without centralising the raw data, potentially enabling AI development whilst respecting data minimisation principles.
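
The core mechanics of differentially private training can be sketched in a few lines: bound each example's influence by clipping its gradient, then add calibrated noise before the update. The fragment below is illustrative only, with invented constants and gradient tensors, and is no substitute for a vetted differential privacy library.

```python
# Illustrative core of differentially private SGD: per-example clipping plus noise.
import torch

def private_update(per_example_grads: torch.Tensor,
                   clip_norm: float = 1.0,
                   noise_multiplier: float = 1.1) -> torch.Tensor:
    """per_example_grads has shape (batch_size, num_params)."""
    # 1. Clip each example's gradient so no single person moves the model too far.
    norms = per_example_grads.norm(dim=1, keepdim=True)
    clipped = per_example_grads * (clip_norm / norms).clamp(max=1.0)
    # 2. Sum the clipped gradients and add Gaussian noise scaled to the clip bound.
    summed = clipped.sum(dim=0)
    noise = torch.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / per_example_grads.shape[0]

grads = torch.randn(32, 1000)      # hypothetical per-example gradients
update = private_update(grads)     # the noisy average an optimiser would apply
```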

However, these techniques come with trade-offs. Differential privacy typically requires larger datasets or accepts reduced model accuracy to achieve its privacy guarantees. Federated learning introduces substantial communication and coordination overhead, making it unsuitable for many applications. Neither approach fully resolves the deletion problem, though they may make it more tractable by limiting how much information about specific individuals becomes embedded in model parameters in the first place.

Watermarking and fingerprinting techniques represent yet another avenue. These methods embed detectable patterns in training data that persist through the training process, allowing verification of whether specific data was used to train a model. Whilst this doesn't enable deletion, it could support enforcement of data rights by making it possible to prove unauthorised use.
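
In the text domain, one toy version of this idea is a “canary”: a unique random marker planted in your own published content, which you can later probe a model for. The sketch below is purely illustrative; query_model is a hypothetical stand-in for however one queries a given model, and serious provenance research relies on far more rigorous statistical testing than a single string match.

```python
# Toy "canary" fingerprinting: plant a unique marker, later test for memorisation.
import secrets

def make_canary() -> str:
    # A random marker extremely unlikely to occur anywhere else on the internet.
    return f"canary-{secrets.token_hex(8)}"

def appears_memorised(query_model, canary: str) -> bool:
    # Ask the model to continue the first half of the canary and check whether it
    # reproduces the second half, which it could plausibly only have learned by
    # ingesting the page where the canary was planted.
    midpoint = len(canary) // 2
    prefix, suffix = canary[:midpoint], canary[midpoint:]
    completion = query_model(f"Continue this string exactly: {prefix}")
    return suffix in completion
```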

The development of these technical solutions reflects a broader recognition within the research community that AI systems need to be architected with data rights in mind from the beginning, not retrofitted later. This principle of “privacy by design” appears throughout data protection regulations, including the GDPR's Article 25, which requires controllers to implement appropriate technical and organisational measures to ensure data protection principles are integrated into processing activities.

However, translating this principle into practice for AI systems remains challenging. The very characteristics that make AI models powerful—their ability to generalise from training data, to identify subtle patterns, to make inferences beyond explicit training examples—are also what makes respecting individual data rights difficult. A model that couldn't extract generalisable patterns would be useless, but a model that does extract such patterns necessarily creates something new from individual data points, complicating questions of ownership and control.

Real-World Controversies and Test Cases

The abstract debate about AI training data rights has manifested in numerous real-world controversies that illustrate the tensions and complexities at stake. These cases provide concrete examples of how theoretical questions about consent, ownership, and control play out when actual people discover their data embedded in commercial AI systems.

Artists have been at the forefront of pushing back against unauthorised use of their work in AI training. Visual artists discovered that image generation models could replicate their distinctive styles, effectively allowing anyone to create “new” works in the manner of specific living artists without compensation or attribution. This wasn't hypothetical—users could prompt models with artist names and receive images that bore unmistakable stylistic similarities to the original artists' portfolios.

The photography community faced similar challenges. Stock photography databases and individual photographers' portfolios were scraped wholesale to train image generation models. Photographers who had spent careers developing technical skills and artistic vision found their work reduced to training data for systems that could generate competing images. The economic implications are substantial: why license a photograph when an AI can generate something similar for free?

Writers and journalists have grappled with comparable issues regarding text generation models. News organisations that invest in investigative journalism, fact-checking, and original reporting saw their articles used to train models that could then generate news-like content without the overhead of actual journalism. The circular logic becomes apparent: AI companies extract value from journalistic work to build systems that could eventually undermine the economic viability of journalism itself.

These controversies have sparked litigation in multiple jurisdictions. Copyright infringement claims argue that training AI models on copyrighted works without permission violates intellectual property rights. Privacy-based claims invoke data protection regulations like the GDPR, arguing that processing personal data for AI training without adequate legal basis violates individual rights. The outcomes of these cases will significantly shape the landscape of AI development and data rights.

The legal questions remain largely unsettled. Courts must grapple with whether AI training constitutes fair use or fair dealing, whether the technical transformation of data into model weights changes its legal status, and how to balance innovation incentives against creator rights. Different jurisdictions may reach different conclusions, creating further fragmentation in global AI governance.

Beyond formal litigation, these controversies have catalysed broader public awareness about AI training practices. Many people who had never considered where AI capabilities came from suddenly realised that their own creative works, social media posts, or published writings might be embedded in commercial AI systems. This awareness has fuelled demand for greater transparency, better consent mechanisms, and meaningful deletion rights.

The Social Media Comparison

Comparing AI training datasets to social media accounts, as the framing question suggests, illuminates both similarities and critical differences. Both involve personal data processed by technology companies for commercial purposes. Both raise questions about consent, control, and corporate power. Both create network effects that make individual opt-out less effective.

However, the comparison also reveals important distinctions. When you delete a social media account, the data typically exists in a relatively structured, identifiable form. Facebook can locate your profile, your posts, your photos, and remove them from active systems (though backup copies and cached versions may persist). The deletion is imperfect but conceptually straightforward.

AI training data, once transformed into model weights, doesn't maintain this kind of discrete identity. Your contribution has become part of a statistical amalgam, blurred and blended with countless other inputs. Deletion would require either destroying the entire model (affecting all users) or developing sophisticated unlearning techniques (which remain imperfect and expensive).

This difference doesn't necessarily mean deletion rights shouldn't apply to AI training data. It does suggest that implementation requires different technical approaches and potentially different policy frameworks than those developed for traditional data processing.

The social media comparison also highlights power imbalances that extend across both contexts. Large technology companies accumulate data at scales that individual users can barely comprehend, then deploy that data to build systems that shape public discourse, economic opportunities, and knowledge access. Whether that data lives in a social media database or an AI model's parameters, the fundamental questions about consent, accountability, and democratic control remain similar.

The Path Forward

So where does all this leave us? Several potential paths forward have emerged from ongoing debates amongst technologists, policymakers, and civil society organisations. Each approach presents distinct advantages and challenges.

One model emphasises enhanced transparency and consent mechanisms at the data collection stage. Under this approach, AI companies would be required to clearly disclose when web scraping or data collection is intended for AI training purposes, allowing data subjects to make informed decisions about participation. This could be implemented through standardised metadata protocols, clear terms of service, and opt-in consent for particularly sensitive data. The UK's ICO has emphasised accountability and governance in its March 2023 guidance update, signalling support for this proactive approach.

However, critics note that consent-based frameworks struggle when data has already been widely published. If you posted photos to a public website in 2015, should AI companies training models in 2025 need to obtain your consent? Retroactive consent is practically difficult and creates uncertainty about the usability of historical data.

A second approach focuses on strengthening and enforcing deletion rights using both regulatory pressure and technical innovation. This model would require AI companies to implement machine unlearning capabilities, invest in privacy-preserving training methods, and maintain documentation sufficient to respond to deletion requests. Regular audits and substantial penalties for non-compliance would provide enforcement mechanisms.

The challenge here lies in balancing individual rights against the practical realities of AI development. If deletion rights are too broad or retroactive, they could stifle beneficial AI research. If they're too narrow or forward-looking only, they fail to address the harms already embedded in existing systems.

A third path emphasises collective rather than individual control. Some advocates argue that individual deletion rights, whilst important, insufficiently address the structural power imbalances of AI development. They propose data trusts, collective bargaining mechanisms, or public data commons that would give communities greater say in how data about them is used for AI training. This approach recognises that AI systems affect not just the individuals whose specific data was used, but entire communities and social groups.

These models could coexist rather than compete. Individual deletion rights might apply to clearly identifiable personal data whilst collective governance structures address broader questions about dataset composition and model deployment. Transparency requirements could operate alongside technical privacy protections. The optimal framework might combine elements from multiple approaches.

International Divergences and Regulatory Experimentation

Different jurisdictions are experimenting with varying regulatory approaches to AI and data rights, creating a global patchwork that AI companies must navigate. The European Union, through the GDPR and the AI Act, has positioned itself as a global standard-setter emphasising fundamental rights and regulatory oversight. The GDPR's right to erasure establishes a baseline that, whilst challenged by AI's technical realities, nonetheless asserts the principle that individuals should maintain control over their personal data.

The United Kingdom, having left the European Union, has maintained GDPR-equivalent protections through the UK GDPR whilst signalling interest in “pro-innovation” regulatory reform. The ICO's March 2023 guidance update on AI and data protection reflects this balance, acknowledging technical challenges whilst insisting on accountability. The UK government has expressed intentions to embed fairness considerations into AI regulation, though comprehensive legislative frameworks remain under development.

The United States presents a more fragmented picture. Without federal privacy legislation, states have individually enacted varying protections. California's laws create deletion rights similar to European models, whilst other states have adopted different balances between individual rights and commercial interests. This patchwork creates compliance challenges for companies operating nationally, potentially driving pressure for federal standardisation.

China has implemented its own data protection frameworks, including the Personal Information Protection Law, which incorporates deletion rights alongside state priorities around data security and local storage requirements. The country's approach emphasises government oversight and aligns data protection with broader goals of technological sovereignty and social control.

These divergent approaches create both challenges and opportunities. Companies must navigate multiple regulatory regimes, potentially leading to lowest-common-denominator compliance or region-specific model versions. However, regulatory experimentation also enables learning from different approaches, potentially illuminating which frameworks best balance innovation, rights protection, and practical enforceability.

The lack of international harmonisation also creates jurisdictional arbitrage opportunities. AI companies might locate their training operations in jurisdictions with weaker data protection requirements, whilst serving users globally. This dynamic mirrors broader challenges in internet governance, where the borderless nature of digital services clashes with territorially bounded legal systems.

Some observers advocate for international treaties or agreements to establish baseline standards for AI development and data rights. The precedent of the GDPR influencing privacy standards globally suggests that coherent frameworks from major economic blocs can create de facto international standards, even without formal treaties. However, achieving consensus on AI governance among countries with vastly different legal traditions, economic priorities, and political systems presents formidable obstacles.

The regulatory landscape continues to evolve rapidly. The European Union's AI Act, whilst not yet fully implemented as of late 2025, represents an attempt to create comprehensive AI-specific regulations that complement existing data protection frameworks. Other jurisdictions are watching these developments closely, potentially adopting similar approaches or deliberately diverging to create competitive advantages. This ongoing regulatory evolution means that the answers to questions about AI training data deletion rights will continue shifting for years to come.

What This Means for You

Policy debates and technical solutions are all well and good, but what can you actually do right now? If you're concerned about your data being used to train AI systems, your practical options currently depend significantly on your jurisdiction, technical sophistication, and the specific companies involved.

For future data, you can take several proactive steps. Many AI companies offer opt-out forms or mechanisms to request that your data not be used in future training. The Electronic Frontier Foundation maintains resources documenting how to block AI crawlers through a site's robots.txt file and similar metadata, though this requires control over the web content you've published; a minimal example follows below. You can also be more selective about what you share publicly, recognising that public data is increasingly viewed as fair game for AI training.
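
By way of illustration, a site owner who controls their own domain can add rules such as these to robots.txt. The user-agent tokens shown are ones some AI crawlers have publicly documented, but compliance is voluntary, the list is not exhaustive, and current names should be checked against each operator's documentation.

```
# robots.txt — asks known AI training crawlers not to fetch this site.
# Honouring these rules is voluntary on the crawler's part.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```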

For data already used in existing AI models, your options are more limited. If you're in the European Union or United Kingdom, you can submit data subject access requests and erasure requests under the GDPR or UK GDPR, though companies may invoke research exceptions or argue that deletion is technically impractical. These requests at least create compliance obligations and potential enforcement triggers if companies fail to respond appropriately.

You can support organisations advocating for stronger data rights and AI accountability. Groups like the Electronic Frontier Foundation, AlgorithmWatch, and various digital rights organisations work to shape policy and hold companies accountable. Collective action creates pressure that individual deletion requests cannot.

You might also consider the broader context of consent and commercial data use. The AI training debate sits within larger questions about how the internet economy functions, who benefits from data-driven technologies, and what rights individuals should have over information about themselves. Engaging with these systemic questions, through political participation, consumer choices, and public discourse, contributes to shaping the long-term trajectory of AI development.

It's worth recognising that perfect control over your data in AI systems may be unattainable, but this doesn't mean the fight for data rights is futile. Every opt-out request, every regulatory complaint, every public discussion about consent and control contributes to shifting norms around acceptable data practices. Companies respond to reputational risks and regulatory pressures, even when individual enforcement is difficult.

The conversation about AI training data also intersects with broader debates about digital literacy and technological citizenship. Understanding how AI systems work, what data they use, and what rights you have becomes an essential part of navigating modern digital life. Educational initiatives, clearer disclosures from AI companies, and more accessible technical tools all play roles in empowering individuals to make informed choices about their data.

For creative professionals—writers, artists, photographers, musicians—whose livelihoods depend on their original works, the stakes feel particularly acute. Professional associations and unions have begun organising collective responses, negotiating with AI companies for licensing agreements or challenging training practices through litigation. These collective approaches may prove more effective than individual opt-outs in securing meaningful protections and compensation.

The Deeper Question

Beneath the technical and legal complexities lies a more fundamental question about what kind of digital society we want to build. The ability to delete yourself from an AI training dataset isn't simply about technical feasibility or regulatory compliance. It reflects deeper assumptions about autonomy, consent, and power in an age where data has become infrastructure.

This isn't just abstract philosophy. The decisions we make about AI training data rights will shape the distribution of power and wealth in the digital economy for decades. If a handful of companies can build dominant AI systems using data scraped without meaningful consent or compensation, they consolidate enormous market power. If individuals and communities gain effective control over how their data is used, that changes the incentive structures driving AI development.

Traditional conceptions of property and control struggle to map onto information that has been transformed, replicated, and distributed across systems. When your words become part of an AI's statistical patterns, have you lost something that should be returnable? Or has your information become part of a collective knowledge base that transcends individual ownership?

These philosophical questions have practical implications. If we decide that individuals should maintain control over their data even after it's transformed into AI systems, we're asserting a particular vision of informational autonomy that requires technical innovation and regulatory enforcement. If we decide that some uses of publicly available data for AI training constitute legitimate research or expression that shouldn't be constrained by individual deletion rights, we're making different choices about collective benefits and individual rights.

The social media deletion comparison helps illustrate these tensions. We've generally accepted that you should be able to delete your Facebook account because we understand it as your personal space, your content, your network. But AI training uses data differently, incorporating it into systems meant to benefit broad populations. Does that shift the calculus? Should it?

These aren't questions with obvious answers. Different cultural contexts, legal traditions, and value systems lead to different conclusions. What seems clear is that we're still very early in working out how fundamental rights like privacy, autonomy, and control apply to AI systems. The technical capabilities of AI have advanced far faster than our social and legal frameworks for governing them.

The Uncomfortable Truth

Should you be able to delete yourself from AI training datasets the same way you can delete your social media accounts? The honest answer is that we're still figuring out what that question even means, let alone how to implement it.

The right to erasure exists in principle in many jurisdictions, but its application to AI training data faces genuine technical obstacles that distinguish it from traditional data deletion. Current opt-out mechanisms offer limited, forward-looking protections rather than true deletion from existing systems. The economic incentives, competitive dynamics, and technical architectures of AI development create resistance to robust deletion rights.

Yet the principle that individuals should have meaningful control over their personal data remains vital. As AI systems become more powerful and more deeply embedded in social infrastructure, the question of consent and control becomes more urgent, not less. The solution almost certainly involves multiple complementary approaches: better technical tools for privacy-preserving AI and machine unlearning, clearer regulatory requirements and enforcement, more transparent data practices, and possibly collective governance mechanisms that supplement individual rights.

What we're really negotiating is the balance between individual autonomy and collective benefit in an age where the boundary between the two has become increasingly blurred. Your data, transformed into an AI system's capabilities, affects not just you but everyone who interacts with that system. Finding frameworks that respect individual rights whilst enabling beneficial technological development requires ongoing dialogue amongst technologists, policymakers, advocates, and affected communities.

The comparison to social media deletion is useful not because the technical implementation is the same, but because it highlights what's at stake: your ability to say no, to withdraw, to maintain some control over how information about you is used. Whether that principle can be meaningfully implemented in the context of AI training, and what trade-offs might be necessary, remain open questions that will shape the future of both AI development and individual rights in the digital age.


Sources and References

  1. European Commission. “General Data Protection Regulation (GDPR) Article 17: Right to erasure ('right to be forgotten').” Official Journal of the European Union, 2016. https://gdpr-info.eu/art-17-gdpr/

  2. Information Commissioner's Office (UK). “Guidance on AI and data protection.” Updated 15 March 2023. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

  3. Electronic Frontier Foundation. “Deeplinks Blog.” 2023. https://www.eff.org/deeplinks


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk
