Neurotechnology Ethics Framework: Consensus Without Consequences

On 12 November 2025, UNESCO's General Conference did something unprecedented: it adopted the first global ethical framework for neurotechnology. The Recommendation on the Ethics of Neurotechnology, years in the making and drawing on more than 8,000 contributions from civil society, academia, and industry, establishes guidelines for technologies that can read, write, and modulate the human brain. It sounds like a victory for human rights in the digital age. Look closer, and the picture grows considerably more complicated.
The framework arrives at a peculiar moment. Investment in neurotechnology companies surged 700 per cent between 2014 and 2021, totalling 33.2 billion dollars according to UNESCO's own data. Brain-computer interfaces have moved from science fiction to clinical trials. Consumer devices capable of reading neural signals are sold openly online for a few hundred dollars. And the convergence of neurotechnology with artificial intelligence creates capabilities for prediction and behaviour modification that operate below the threshold of individual awareness. Against this backdrop, UNESCO has produced a document that relies entirely on voluntary national implementation, covers everything from invasive implants to wellness headbands, and establishes “mental privacy” as a human right without explaining how it will be enforced.
The question is not whether the framework represents good intentions. It clearly does. The question is whether good intentions, expressed through non-binding recommendations that countries may or may not translate into law, can meaningfully constrain technologies that are already being deployed in workplaces, schools, and consumer markets worldwide.
When Your Brain Becomes a Data Source
The neurotechnology landscape has transformed with startling speed. What began as therapeutic devices for specific medical conditions has expanded into a sprawling ecosystem of consumer products, workplace monitoring systems, and research tools. The global neurotechnology market is projected to grow from approximately 17.3 billion dollars in 2025 to nearly 53 billion dollars by 2034, according to Precedence Research, representing a compound annual growth rate exceeding 13 per cent.
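The projection's arithmetic is easy to verify with the standard compound-growth formula over the nine years from 2025 to 2034:

$$\text{CAGR} = \left(\frac{V_{2034}}{V_{2025}}\right)^{1/9} - 1 = \left(\frac{53}{17.3}\right)^{1/9} - 1 \approx 0.132,$$

or roughly 13.2 per cent a year, consistent with the figure cited.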
Neuralink, Elon Musk's brain-computer interface company, received FDA clearance in 2023 to begin human trials. By June 2025, five individuals with severe paralysis were using Neuralink devices to control digital and physical devices with their thoughts. Musk announced that the company would begin “high-volume production” and move to “a streamlined, almost entirely automated surgical procedure” in 2026. The company extended its clinical programme into the United Kingdom, with patients at University College London Hospital and Newcastle reportedly controlling computers within hours of surgery.
Synchron, which takes a less invasive route through blood vessels rather than open-brain surgery, has developed a device that integrates with Nvidia's AI platform and Apple's Vision Pro headset. Paradromics received FDA approval in November 2025 for a clinical study evaluating speech restoration for people with paralysis. Morgan Stanley has estimated the addressable market for brain-computer interfaces at 400 billion dollars.
But the medical applications, however transformative, represent only part of the picture. Consumer neurotechnology has proliferated far beyond clinical settings. The Neurorights Foundation analysed the user agreements and privacy policies for 30 companies selling commercially available products and found that only one provided meaningful restrictions on how neural data could be employed or sold. Fewer than half encrypted their data or de-identified users.
Emotiv, a San Francisco-based company, sells wireless EEG headsets for around 500 dollars. The Muse headband, marketed as a meditation aid, has become one of the most popular consumer EEG devices worldwide. Companies including China's Entertech have accumulated millions of raw EEG recordings from individuals across the world, along with personal information, GPS signals, and device usage data. Entertech's privacy policy makes plain that this information is collected and retained.
The capabilities of these devices are often underestimated. Non-invasive consumer devices measuring brain signals at the scalp can infer inner language, attention, emotion, sexual orientation, and arousal, among other mental states. As Marcello Ienca, Professor for Ethics of AI and Neuroscience at the Technical University of Munich and an appointed member of UNESCO's expert group, has observed: “When it comes to neurotechnology, we cannot afford this risk. This is because the brain is not just another source of information that irrigates the digital infosphere, but the organ that builds and enables our mind.”
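To make the mechanics concrete, here is a minimal sketch of the standard inference pipeline in Python: spectral band-power features extracted from raw EEG and fed to an off-the-shelf classifier. Everything specific in it (five channels, a 256 Hz sampling rate, the “focused”/“distracted” labels) is an illustrative assumption, not any vendor's actual pipeline.

```python
# Minimal sketch of how consumer EEG is typically turned into mental-state
# inferences: spectral band power per channel -> off-the-shelf classifier.
# All parameters (5 channels, 256 Hz, the "focused"/"distracted" labels)
# are illustrative assumptions, not any specific device's pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
FS = 256  # assumed sampling rate, Hz

def band_power_features(eeg: np.ndarray) -> np.ndarray:
    """eeg: (channels, samples) raw signal -> flat vector of band powers."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)  # PSD per channel
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))  # mean power in each band
    return np.log(np.concatenate(feats))  # log-power is conventional

# Toy training loop: random "recordings" standing in for labelled sessions.
rng = np.random.default_rng(0)
X = np.array([band_power_features(rng.standard_normal((5, FS * 10)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 0 = "distracted", 1 = "focused" (assumed)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))  # inferred mental states from scalp signals alone
```

The sobering point is how little machinery is required: a few lines of standard signal processing and a generic classifier are enough to begin turning scalp voltages into claims about mental states.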
The Centre for Future Generations reports that dedicated consumer neurotechnology firms now account for 60 per cent of the global landscape, having outnumbered medical firms since 2018. More than four times as many consumer neurotechnology firms have been founded since 2010 as in the preceding 25 years. EEG and stimulation technologies are being embedded into wearables including headphones, earbuds, glasses, and wristbands. Consumer neurotech is shifting from a niche innovation to a pervasive feature of everyday digital ecosystems.
The UNESCO Framework's Ambitious Scope
UNESCO Director-General Audrey Azoulay described neurotechnology as a “new frontier of human progress” that demands strict ethical boundaries to protect the inviolability of the human mind. “There can be no neurodata without neurorights,” she stated when announcing the framework's development. The initiative builds on UNESCO's earlier work establishing a global framework on the ethics of artificial intelligence in 2021, positioning the organisation at the forefront of emerging technology governance.
The Recommendation that emerged from extensive consultation covers an extraordinarily broad range of technologies and applications. It addresses invasive devices requiring neurosurgery alongside consumer headbands. It covers medical applications with established regulatory pathways and wellness products operating in what researchers describe as an “essentially unregulated consumer marketplace.” It encompasses direct neural measurements and, significantly, the inferences that can be drawn from other biometric data.
This last point deserves attention. A September 2024 paper in the journal Neuron, co-authored by Nita Farahany of Duke University (who co-chaired UNESCO's expert group alongside French neuroscientist Hervé Chneiweiss), Patrick Magee, and Ienca, introduced the concept of “cognitive biometric data.” The paper defines this as “neural data, as well as other data collected from a given individual or group of individuals through other biometric and biosensor data,” which can “be processed and used to infer mental states.”
This definition extends protection beyond direct measurements of nervous system activity to include data from biosensors like heart rate monitors and eye trackers that can be processed to reveal cognitive and emotional states. The distinction matters because current privacy laws often protect direct neural data while leaving significant gaps for inferred mental states. Many consumers are entirely unaware that the fitness wearable on their wrist might be generating data that reveals far more about their mental state than their step count.
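A small worked example shows how much leaks from ordinary wearable data. The inter-beat intervals behind a heart-rate reading also yield RMSSD, a standard heart-rate-variability statistic that the research literature associates with stress and autonomic arousal; the 20 ms cut-off below is an illustrative assumption, not a clinical threshold.

```python
# Minimal sketch: the same inter-beat intervals a fitness wearable records
# for "heart rate" also yield RMSSD, a standard heart-rate-variability
# statistic widely associated in the literature with stress and autonomic
# arousal. The 20 ms "low HRV" cut-off is an illustrative assumption.
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive differences between heartbeats."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = np.array([812, 798, 805, 790, 801, 795, 808])  # ms between beats
score = rmssd(rr)
print(f"RMSSD = {score:.1f} ms -> {'low' if score < 20 else 'normal'} HRV")
```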
The UNESCO framework attempts to address this convergence. It calls for neural data to be classified as sensitive personal information. It prohibits coercive data practices, including conditioning access to services on neural data provision. It establishes strict workplace restrictions, requiring that neurotechnology use be strictly voluntary and opt-in, explicitly prohibiting its use for performance evaluation or punitive measures. It demands specific safeguards against algorithmic bias, cybersecurity threats, and manipulation arising from the combination of neurotechnology with artificial intelligence.
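What the anti-coercion provision implies at the implementation level can be sketched directly: access to the core service must never branch on neural-data consent, which sits behind its own revocable opt-in. A minimal illustration, with hypothetical types and function names:

```python
# Hypothetical sketch of the framework's anti-coercion provision in code:
# core service access never branches on neural-data consent, and neural
# processing requires a separate, explicit, revocable opt-in.
from dataclasses import dataclass

@dataclass
class Consent:
    terms_accepted: bool
    neural_opt_in: bool = False  # off by default; must be separately granted

def can_use_service(c: Consent) -> bool:
    # Compliant: access depends only on ordinary terms, not neural consent.
    return c.terms_accepted

def can_process_neural_data(c: Consent) -> bool:
    # Neural processing is gated on its own opt-in, revocable at any time.
    return c.terms_accepted and c.neural_opt_in

user = Consent(terms_accepted=True)
assert can_use_service(user)              # full access with no neural opt-in
assert not can_process_neural_data(user)  # neural pipeline stays off
```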
For children and young people, whose developing brains make them particularly susceptible, the framework advises against non-therapeutic use entirely. It establishes mental privacy as fundamental to personal identity and agency, defending individuals from manipulation and surveillance.
These are substantive provisions. They would, if implemented, significantly constrain how neurotechnology can be deployed. The operative phrase, however, is “if implemented.”
The Voluntary Implementation Problem
UNESCO recommendations are not binding international law. They represent what international lawyers call “soft law,” embodying political and moral authority without legal force. Member states must report on measures they have adopted, but the examination of such reports operates through institutional mechanisms that have limited capacity to compel compliance.
The precedent here is instructive. UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence was adopted by all 193 member states. It represented a historic agreement on fundamental values, principles, and policies for AI development. The Recommendation was celebrated as a landmark achievement in global technology governance. Three years later, implementation remains partial and uneven.
UNESCO developed a Readiness Assessment Methodology (RAM) to help countries assess their preparedness to implement the AI ethics recommendation. By 2025, this process had been piloted in approximately 60 countries. That represents meaningful progress, but also reveals the gap between adoption and implementation. A 2024 RAM analysis identified compliance and governance gaps in 78 per cent of participating nations. The organisation states it is “helping over 80 countries translate these principles into national law,” but helping is not the same as compelling.
The challenge grows more acute when considering that the countries most likely to adopt protective measures face potential competitive disadvantage. Nations that move quickly to implement strong neurotechnology regulation may find their industries at a disadvantage compared to jurisdictions that prioritise speed-to-market over safeguards.
This dynamic is familiar from other technology governance contexts. International political economy scholars have documented the phenomenon of regulatory competition, where jurisdictions lower standards to attract investment and economic activity. While some research questions whether this “race to the bottom” actually materialises in practice, the concern remains that strict unilateral regulation can create competitive pressures that undermine its own objectives.
China, for instance, has identified brain-computer interface technology as a strategic priority. The country's BCI industry reached 3.2 billion yuan (approximately 446 million dollars) in 2024, with projections showing growth to 5.58 billion yuan by 2027. Beijing's roadmap aims for BCI breakthroughs by 2027 and a globally competitive ecosystem by 2030. The Chinese government integrates its BCI initiatives into five-year innovation plans supported by multiple ministries, financing research whilst aligning universities, hospitals, and industry players under unified targets. While China has issued ethical guidelines for BCI research through the Ministry of Science and Technology in February 2024, analysis suggests the country currently has no legislative plan specifically for neurotechnology and may rely on interpretations of existing legal systems rather than bespoke neural data protection.
The United States presents a different challenge: regulatory fragmentation. As of mid-2025, four states had enacted laws regarding neural data. California amended its Consumer Privacy Act to classify neural data as sensitive personal information, effective January 2025. Colorado's law treats neural information as sensitive data and casts the widest net, safeguarding both direct measurements from the nervous system and algorithm-generated inferences such as mood predictions. Minnesota, whose bill remains pending, has proposed standalone legislation that would apply to both private and governmental entities, prohibiting government bodies from collecting brain data without informed consent and from interfering with individuals' decision-making when they engage with neurotechnology.
But this patchwork approach creates its own problems. US Senators have proposed the Management of Individuals' Neural Data Act (MIND Act), which would direct the Federal Trade Commission to study neural data practices and develop a blueprint for comprehensive national legislation. The very existence of such a proposal underscores the absence of federal standards. Meanwhile, at least 15 additional neural data privacy bills are pending in state legislatures across the country, each with different definitions, scopes, and enforcement mechanisms.
Into this regulatory patchwork, UNESCO offers guidelines that nations may or may not adopt, that may or may not be implemented effectively, and that may or may not prove enforceable even where adopted.
Chile's Test Case and Its Limits
Chile offers the most developed test case for how neurorights might work in practice. In October 2021, Chile became the first country to include neurorights in its constitution, enshrining mental privacy and integrity as fundamental rights. The legislation aimed to give personal brain data the same status as an organ, making it impossible to buy, sell, traffic, or manipulate.
In August 2023, Chile's Supreme Court issued a landmark ruling against Emotiv concerning neural data collected through the company's Insight device. Senator Guido Girardi Lavin had alleged that his brain data was insufficiently protected, arguing that Emotiv did not offer adequate privacy protections since users could only access or own their neural data by purchasing a paid licence. The Court found that Emotiv violated constitutional rights to physical and psychological integrity as well as privacy, ordering the company to delete all of Girardi's personal data.
The ruling was reported as a landmark decision for neurorights, the first time a court had enforced constitutional protection of brain data. It established that information obtained for various purposes “cannot be used finally for any purpose, unless the owner knew of and approved of it.” The court explicitly rejected Emotiv's argument that the data became “statistical” simply because it was anonymised.
Yet the case also revealed limitations. Some critics, including law professor Pablo Contreras of Chile's Central University, argued that the neurorights provision was irrelevant to the outcome, which could have been reached under existing data protection law. The debate continues over whether constitutional neurorights protections add substantive legal force or merely symbolic weight.
More fundamentally, Chile's approach depends on consistent enforcement by national courts against international companies. Emotiv was ordered to delete data and comply with Chilean law. But the company remains headquartered in San Francisco, subject primarily to US jurisdiction. Chile's constitutional provisions protect Chileans, but cannot prevent the same technologies from being deployed without equivalent restrictions elsewhere.
The Organisation of American States issued a Declaration on neuroscience, neurotechnologies, and human rights in 2021, followed by principles to align international standards with national frameworks. Brazil and Mexico are considering constitutional changes. But these regional developments, while encouraging, remain disconnected from the global framework UNESCO has attempted to establish.
The AI Convergence Challenge
The convergence of neurotechnology with artificial intelligence creates particularly acute governance challenges. AI systems can process neural data at scale, identify patterns invisible to human observers, and generate predictions about cognitive and emotional states. This combination produces capabilities that fundamentally alter the risk landscape.
A 2020 paper in Science and Engineering Ethics examining this convergence noted that AI plays an increasingly central role in neuropsychiatric applications, particularly in the prediction and analysis of neural recording data. When the identification of anomalous neural activity is mapped to behavioural or cognitive phenomena in clinical contexts, technologies developed for recording neural activity come to play a role in psychiatric assessment and diagnosis.
The ethical concerns extend beyond data collection to intervention. Deep brain stimulation modifies neural activity to diminish deleterious symptoms of diseases like Parkinson's. Closed-loop systems that adjust stimulation in response to detected neural states raise questions about human agency and control. The researchers argue that great care should be taken with these technologies when they can curtail action that would otherwise follow from reasoning, or alter basic behavioural discrimination among stimuli.
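A deliberately simplified sketch makes the agency concern tangible: in a closed-loop system, the device reads a neural biomarker and adjusts its own stimulation with no conscious decision anywhere in the loop. Every number below (target, gain, amplitude bounds) is an illustrative assumption, not a clinical parameter.

```python
# Toy sketch of a closed-loop neurostimulation policy: read a neural
# biomarker, compare it with a target, and adjust stimulation amplitude
# proportionally, within hard safety limits. All numbers are illustrative
# assumptions, not clinical parameters.
def closed_loop_step(biomarker: float, amplitude_ma: float,
                     target: float = 1.0, gain: float = 0.1,
                     max_ma: float = 3.0) -> float:
    """One control-loop iteration: return the next stimulation amplitude."""
    error = biomarker - target          # detected neural state vs. target
    amplitude_ma += gain * error        # proportional adjustment
    return min(max(amplitude_ma, 0.0), max_ma)  # clamp to safety envelope

amp = 1.5
for reading in [1.4, 1.2, 0.9, 0.7]:    # simulated biomarker trajectory
    amp = closed_loop_step(reading, amp)
    print(f"biomarker={reading:.1f} -> stimulation={amp:.2f} mA")
```

The point is structural rather than numerical: the intervention changes in response to detected states, not to anything the patient chooses.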
The UNESCO framework acknowledges these concerns, demanding specific safeguards against algorithmic bias, cybersecurity threats, and manipulation. But it provides limited guidance on how such safeguards should work in practice. When an AI system operating on neural data can predict behaviour or modify cognitive states in ways that operate below the threshold of conscious awareness, what does meaningful consent look like? How can individuals exercise rights over processes they cannot perceive?
The workplace context makes these questions concrete. Brain-monitoring neurotechnology is already used in mining, finance, and other industries. The technology can measure brain waves and make inferences about mental states including fatigue and focus. The United Kingdom's Information Commissioner's Office predicts it will be common in workplaces by the end of the decade. The market for workplace neurotechnology is predicted to grow to 21 billion dollars by 2026.
Research published in Frontiers in Human Dynamics examined the legal perspective on wearable neurodevices for workplace monitoring. The analysis found that employers could use brain data to assess cognitive functions, cognitive patterns, and even detect neuropathologies. Such data could serve for purposes including promotion, hiring, or dismissal. The study suggests that EU-level labour legislation should explicitly address neurotechnology, permitting its use only for safety purposes in exceptional cases such as monitoring employee fatigue in high-risk jobs.
The UNESCO framework calls for strict limitations on workplace neurotechnology, requiring voluntary opt-in and prohibiting use for performance evaluation. But voluntary opt-in in an employment context is a fraught concept. When neurotechnology monitoring becomes normalised in an industry, employees may face implicit pressure to participate. Those who refuse may find themselves at a disadvantage, even without explicit sanctions.
This dynamic, where formal choice exists alongside structural pressure, represents precisely the kind of subtle coercion that privacy frameworks struggle to address. The line between voluntary participation and effective compulsion can blur in ways that legal categories fail to capture.
Mental Privacy Without Enforcement Mechanisms
The concept of mental privacy sits at the heart of UNESCO's framework. The organisation positions it as fundamental to personal identity and agency, defending individuals from manipulation and surveillance. This framing has intuitive appeal. If any domain should remain inviolable, surely it is the human mind.
But establishing a right without enforcement mechanisms risks producing rhetoric without protection. International human rights frameworks depend ultimately on state implementation and domestic legal systems. When states lack the technical capacity, political will, or economic incentive to implement protections, the rights remain aspirational.
The neurorights movement emerged from precisely this concern. In 2017, Ienca and colleagues at ETH Zurich introduced the concept, arguing that protecting thoughts and mental processes is a fundamental human right that the drafters of the 1948 Universal Declaration of Human Rights could not have anticipated. Rafael Yuste, the Columbia University neuroscientist who helped initiate the US BRAIN Initiative in 2013 and founded the Neurorights Foundation in 2022, has been a leading advocate for updating human rights frameworks to address neurotechnology.
Yuste's foundation has achieved concrete successes, contributing to legislative protections in Chile, Colorado, and Brazil's state of Rio Grande do Sul. But Yuste himself has characterised these efforts as urgent responses to imminent threats. “Let's act before it's too late,” he told UNESCO's Courier publication, arguing that neurotechnology bypasses bodily filters to access the centre of mental activity.
The structural challenge remains: neurorights advocates are working jurisdiction by jurisdiction, building a patchwork of protections that varies in scope and enforcement capacity. UNESCO's global framework could, in principle, accelerate this process by establishing international consensus. But consensus on principles has not historically translated rapidly into harmonised legal protections.
The World Heritage Convention offers a partial analogy. Under that treaty, the prospect of a property being transferred to the endangered list, or removed entirely, can transform voluntary approaches into quasi-binding obligations. States value World Heritage status and will modify behaviour to retain it. But neurotechnology governance offers no equivalent mechanism. There is no elite status to protect, no list from which exclusion carries meaningful consequences. The incentives that make soft law effective in some domains are absent here.
The Framework's Deliberate Breadth
The UNESCO framework's comprehensive scope, covering everything from clinical implants to consumer wearables to indirect neural data inference, reflects a genuine dilemma in technology governance. Draw boundaries too narrowly, and technologies evolve around them. Define categories too specifically, and innovation outpaces regulatory categories.
But comprehensive scope creates its own problems. When a single framework addresses brain-computer interfaces requiring neurosurgery and fitness wearables sold at shopping centres, the governance requirements appropriate for one may be inappropriate for the other. The risk is that standards calibrated to high-risk applications prove excessive for low-risk ones, while standards appropriate for consumer devices prove inadequate for medical implants.
This concern is not hypothetical. The European Union's AI Act, adopted in 2024, has faced criticism for precisely this issue. The Act's risk-based classification system attempts to calibrate requirements to application contexts, but critics argue it excludes key applications from high-risk classifications while imposing significant compliance burdens on lower-risk uses.
The UNESCO neurotechnology framework similarly attempts a risk-sensitive approach, but its voluntary nature means that implementation will vary by jurisdiction and application context. Some nations may adopt stringent requirements across all neurotechnology applications. Others may focus primarily on medical devices while leaving consumer products largely unregulated. Still others may deprioritise neurotechnology governance entirely.
The result is not a global framework in any meaningful sense, but a menu of options from which nations may select according to their preferences, capacities, and incentive structures. This approach has virtues: flexibility, accommodation of diverse values, and respect for national sovereignty. But it also means that the protections available to individuals will depend heavily on where they live and which companies they interact with.
The Accountability Diffusion Question
Perhaps the most fundamental challenge is whether comprehensive frameworks ultimately diffuse accountability rather than concentrate it. When a single document addresses every stakeholder, from national governments to research organisations to private companies to civil society, does it clarify responsibilities or obscure them?
The UNESCO framework calls upon member states to implement its provisions through national law, to develop oversight mechanisms including regulatory sandboxes, and to support capacity building in lower and middle-income countries. It emphasises “global equity and solidarity,” particularly protecting developing nations from technological inequality. It calls upon the private sector to adopt responsible practices, implement transparency measures, and respect human rights throughout the neurotechnology lifecycle. It calls upon research institutions to maintain ethical standards and contribute to inclusive development.
These are reasonable expectations. But they are also distributed expectations. When everyone is responsible, no one bears primary accountability. The framework establishes what should happen without clearly specifying who must ensure it does.
Contrast this with approaches that concentrate responsibility. Chile's constitutional amendment placed obligations directly on entities collecting brain data, enforced through judicial review. Colorado's neural data law created specific compliance requirements with definable penalties. These approaches may be narrower in scope, but they create clear accountability structures.
The UNESCO framework, by operating at the level of international soft law addressed to multiple stakeholder categories, lacks this specificity. It establishes norms without establishing enforcement. It articulates rights without creating remedies. It expresses values without compelling their implementation.
This is not necessarily a failure. International soft law has historically contributed to norm development, gradually shaping behaviour and expectations even without binding force. The 2021 AI ethics recommendation may be achieving exactly this kind of influence, despite uneven implementation. Over time, the neurotechnology framework may similarly help establish baseline expectations that guide behaviour across jurisdictions.
But “over time” is a luxury that may not exist. The technologies are developing now. The data is being collected now. The convergence with AI systems is happening now. A framework that operates on the timescale of norm diffusion may prove inadequate for technologies operating on the timescale of quarterly product releases.
What Meaningful Governance Would Require
The UNESCO framework represents a significant achievement: international consensus that neurotechnology requires ethical governance, that mental privacy deserves protection, and that the convergence of brain-reading technologies with AI systems demands specific attention. These are not trivial accomplishments.
But the gap between consensus on principles and effective implementation remains vast. Meaningful neurotechnology governance would require several elements largely absent from the current framework.
First, it would require enforceable standards with consequences for non-compliance. Whether through trade agreements, market access conditions, or international treaty mechanisms, effective governance must create costs for violations that outweigh the benefits of non-compliance.
Second, it would require technical standards developed by bodies with the expertise to specify requirements precisely. The UNESCO framework articulates what should be protected without specifying how protection should work technically. Encryption requirements, data minimisation standards, algorithmic auditing protocols, and interoperability specifications would need development through technical bodies capable of translating principles into implementable requirements.
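To give a sense of what such standards would actually specify, here is a minimal sketch of two of them, data minimisation followed by encryption at rest, using the Fernet scheme from the open-source Python cryptography package. The allowed field list and record layout are illustrative assumptions.

```python
# Sketch of two of the technical standards named above, under illustrative
# assumptions: neural records are stripped to a minimal field set before
# storage (data minimisation) and encrypted at rest. Uses the symmetric
# Fernet scheme from the open-source `cryptography` package.
import json
from cryptography.fernet import Fernet

ALLOWED_FIELDS = {"session_id", "band_powers", "timestamp"}  # assumed policy

def minimise(record: dict) -> dict:
    """Drop everything outside the allow-list (no GPS, identity, raw EEG)."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

key = Fernet.generate_key()   # in production: a managed key, never hard-coded
vault = Fernet(key)

record = {"session_id": "s-01", "band_powers": [0.4, 1.2, 0.7],
          "timestamp": "2025-11-12T10:00:00Z",
          "user_name": "A. Person", "gps": (48.1, 11.6)}  # must not be stored

ciphertext = vault.encrypt(json.dumps(minimise(record)).encode())
restored = json.loads(vault.decrypt(ciphertext))
assert "gps" not in restored and "user_name" not in restored
```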
Third, it would require monitoring and verification mechanisms capable of determining whether entities are actually complying with stated requirements. Self-reporting by nations and companies has obvious limitations. Independent verification, whether through international inspection regimes or distributed monitoring approaches, would be necessary to ensure implementation matches commitment.
Fourth, it would require coordination mechanisms that prevent regulatory arbitrage, the practice of structuring activities to take advantage of the most permissive regulatory environment. When neurotechnology companies can locate data processing operations in jurisdictions with minimal requirements, national protections can be effectively circumvented.
The UNESCO framework provides none of these elements directly. It creates no enforcement mechanisms, develops no technical standards, establishes no independent monitoring, and offers no coordination against regulatory arbitrage. It provides principles that nations may implement as they choose, with consequences for non-implementation that remain entirely within national discretion.
This is not UNESCO's fault. The organisation operates within constraints imposed by international politics and member state sovereignty. It cannot compel nations to adopt binding requirements they have not agreed to accept. The framework represents what was achievable through the diplomatic process that produced it.
But recognising these constraints should not lead us to overstate what the framework accomplishes. A voluntary recommendation that relies on national implementation, covering technologies already outpacing regulatory capacity, in a domain where competitive pressures may discourage protective measures, is a starting point at best.
The human mind, that most intimate of domains, is becoming legible to technology at an accelerating pace. UNESCO has said this matters and articulated why. Whether that articulation translates into protection depends on decisions that will be made elsewhere: in national parliaments, corporate boardrooms, regulatory agencies, and, increasingly, in the algorithms that process neural data in ways no framework yet adequately addresses.
The framework is not nothing. It is also not enough.
References and Sources
UNESCO. “Ethics of neurotechnology: UNESCO adopts the first global standard in cutting-edge technology.” November 2025. https://www.unesco.org/en/articles/ethics-neurotechnology-unesco-adopts-first-global-standard-cutting-edge-technology
Precedence Research. “Neurotechnology Market Size and Forecast 2025 to 2034.” https://www.precedenceresearch.com/neurotechnology-market
STAT News. “Brain-computer implants are coming of age. Here are 3 trends to watch in 2026.” December 2025. https://www.statnews.com/2025/12/26/brain-computer-interface-technology-trends-2026/
MIT Technology Review. “Brain-computer interfaces face a critical test.” April 2025. https://www.technologyreview.com/2025/04/01/114009/brain-computer-interfaces-10-breakthrough-technologies-2025/
STAT News. “Data privacy needed for your brain, Neurorights Foundation says.” April 2024. https://www.statnews.com/2024/04/17/neural-data-privacy-emotiv-eeg-muse-headband-neurorights/
Centre for Future Generations. “Neurotech Consumer Market Atlas.” 2025. https://cfg.eu/neurotech-market-atlas/
UNESCO. “Ethics of neurotechnology.” https://www.unesco.org/en/ethics-neurotech
Magee, Patrick, Marcello Ienca, and Nita Farahany. “Beyond Neural Data: Cognitive Biometrics and Mental Privacy.” Neuron, September 2024. https://www.cell.com/neuron/fulltext/S0896-6273(24)00652-4
UNESCO. “Recommendation on the Ethics of Artificial Intelligence.” 2021. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
UNESCO. “First report on the implementation of the 2021 Recommendation on the Ethics of Artificial Intelligence.” 2024. https://unesdoc.unesco.org/ark:/48223/pf0000391341
Oxford Academic. “Neural personal information and its legal protection: evidence from China.” Journal of Law and the Biosciences, 2025. https://academic.oup.com/jlb/article/12/1/lsaf006/8113730
National Science Review. “China's new ethical guidelines for the use of brain–computer interfaces.” 2024. https://academic.oup.com/nsr/article/11/4/nwae154/7668215
Cooley LLP. “Wave of State Legislation Targets Mental Privacy and Neural Data.” May 2025. https://www.cooley.com/news/insight/2025/2025-05-13-wave-of-state-legislation-targets-mental-privacy-and-neural-data
Davis Wright Tremaine. “U.S. Senators Propose 'MIND Act' to Study and Recommend National Standards for Protecting Consumers' Neural Data.” October 2025. https://www.dwt.com/blogs/privacy--security-law-blog/2025/10/senate-mind-act-neural-data-ftc-regulation
Chilean Supreme Court. Rol N 1.080–2020 (Girardi Lavin v. Emotiv Inc.). August 9, 2023.
Frontiers in Psychology. “Chilean Supreme Court ruling on the protection of brain activity: neurorights, personal data protection, and neurodata.” 2024. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1330439/full
Future of Privacy Forum. “Privacy and the Rise of 'Neurorights' in Latin America.” 2024. https://fpf.org/blog/privacy-and-the-rise-of-neurorights-in-latin-america/
Science and Engineering Ethics. “Correcting the Brain? The Convergence of Neuroscience, Neurotechnology, Psychiatry, and Artificial Intelligence.” 2020. https://pmc.ncbi.nlm.nih.gov/articles/PMC7550307/
The Conversation. “Neurotechnology is becoming widespread in workplaces – and our brain data needs to be protected.” 2024. https://theconversation.com/neurotechnology-is-becoming-widespread-in-workplaces-and-our-brain-data-needs-to-be-protected-236800
Frontiers in Human Dynamics. “The challenge of wearable neurodevices for workplace monitoring: an EU legal perspective.” 2024. https://www.frontiersin.org/journals/human-dynamics/articles/10.3389/fhumd.2024.1473893/full
ETH Zurich. “We must expand human rights to cover neurotechnology.” News, October 2021. https://ethz.ch/en/news-and-events/eth-news/news/2021/10/marcello-ienca-we-must-expand-human-rights-to-cover-neurotechnology.html
UNESCO Courier. “Rafael Yuste: Let's act before it's too late.” 2022. https://en.unesco.org/courier/2022-1/rafael-yuste-lets-act-its-too-late

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk