Dismantling the GDPR: 151 Million Euros of Corporate Lobbying

For the better part of a decade, Brussels was the city that Big Tech feared. The General Data Protection Regulation, adopted in 2016 and enforced from 2018, became the gold standard for privacy law worldwide, inspiring more than 150 countries to craft their own versions. The AI Act, finalised in 2024, was the planet's first comprehensive attempt to regulate artificial intelligence by risk category. Together, these two landmark laws positioned the European Union as the undisputed global standard-bearer for rights-based digital governance, a regulatory superpower wielding what scholars call the “Brussels Effect” to shape corporate behaviour far beyond its borders.
That era may be ending. On 19 November 2025, the European Commission published its Digital Omnibus Package, a sweeping legislative proposal that amends the GDPR, the ePrivacy Directive, the AI Act, the Data Act, the Data Governance Act, and the NIS2 Directive in a single stroke. Framed as a necessary exercise in “simplification” and “competitiveness,” the package has drawn fierce opposition from an extraordinary coalition of civil society organisations, data protection authorities, privacy advocates, and digital rights groups who see it as something altogether different: a systematic dismantling of the very protections that made European digital law the envy of democracies everywhere.
Amnesty International has warned that it threatens to produce “the biggest rollback of digital fundamental rights in EU history.” European Digital Rights (EDRi), the continent's leading digital rights network, has labelled the proposals “a major rollback of EU digital protections.” A coalition of 127 civil society organisations, trade unions, and public interest defenders has issued an open letter demanding the Commission halt the Digital Omnibus entirely. And Corporate Europe Observatory, working alongside LobbyControl, has published a granular, article-by-article analysis tracing many of the most consequential changes directly to lobbying documents submitted by Google, Meta, Microsoft, and their trade associations.
The question is no longer whether Europe's digital rights framework is under pressure. It is whether rights-based AI governance can survive anywhere if the jurisdiction that invented it decides the cost of leadership is too high.
The Competitiveness Argument and the Draghi Shadow
To understand the Digital Omnibus, you first need to understand the political climate that produced it. The European Commission did not wake up one morning and decide to rewrite its own landmark legislation on a whim. The proposals emerged from a sustained campaign, years in the making, to reframe European regulation as an obstacle to economic growth rather than a democratic achievement worth preserving.
The intellectual foundation was laid in September 2024, when Mario Draghi, the former president of the European Central Bank and former Italian prime minister, delivered his landmark report on the future of European competitiveness. Commissioned by European Commission President Ursula von der Leyen, the Draghi Report warned that “excessive regulatory and administrative burden can hinder the ease of doing business in the EU and the competitiveness of EU companies.” It singled out the GDPR by name, claiming the regulation had “raised the cost of data by about 20 percent for EU firms compared with US peers.” It pointed to “unclear overlaps” between the GDPR and the AI Act as a specific drag on innovation.
The Draghi Report called for “a radical simplification of GDPR,” harmonised AI sandbox regimes across all member states, and the appointment of a new Vice-President for Simplification to coordinate the process. Within months, the Commission had announced the Digital Omnibus as its primary vehicle for delivering on those recommendations. The speed was notable. What had been discussed as a measured, evidence-based review of the EU's digital rulebook became an accelerated legislative push, outpacing the Commission's own planned “Digital Fitness Check” that was originally scheduled for 2026.
The Commission projects that the package, if adopted as proposed, would save businesses and public administrations at least six billion euros by the end of 2029. The stated goals are to reduce duplicative compliance costs, lighten the regulatory load on small and medium-sized enterprises (SMEs), improve legal certainty, and make the EU's digital rulebook “easier to navigate.”
These are not trivial ambitions. European businesses, particularly smaller ones, have legitimate complaints about regulatory complexity. The GDPR, the AI Act, the Data Act, the Digital Services Act, the Digital Markets Act, and the ePrivacy Directive collectively create a dense web of overlapping obligations that can be genuinely difficult and expensive to navigate. The Commission's Omnibus IV Simplification Package, published separately in May 2025, addressed some of the most straightforward concerns, exempting small and micro companies from the obligation to maintain records of processing activities under the GDPR.
But the Digital Omnibus goes far beyond tidying up paperwork. Critics argue it uses the language of simplification to smuggle in substantive deregulation, weakening core protections in ways that have nothing to do with reducing administrative burdens and everything to do with accommodating the commercial priorities of the largest technology companies on earth.
What the Omnibus Actually Changes
The specific amendments proposed in the Digital Omnibus are extensive, spanning hundreds of pages of legislative text. Several stand out for their potential impact on the rights of hundreds of millions of European citizens.
Perhaps the most technically significant change concerns the very definition of personal data. The Commission proposes to narrow this definition by codifying what it calls a “relative” concept: information qualifies as personal data only if the current holder can identify the data subject using means “reasonably available” to it. The ability of a subsequent recipient to identify the person does not make the data personal for the current holder. This sounds like a minor clarification. It is not. The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), in their Joint Opinion 2/2026 published in February 2026, warned that this change “goes far beyond a targeted modification of the GDPR” or “a mere codification of CJEU jurisprudence,” and would “significantly narrow the concept of personal data.” They urged co-legislators not to adopt it.
The implications are enormous. A narrower definition of personal data means less data falls under the GDPR's protection regime. Companies processing information that they argue they cannot use to identify individuals, even if that identification becomes possible in another context or with additional resources, would face fewer restrictions on how they collect, store, and monetise that information. For companies training AI models on vast datasets scraped from the internet, this is precisely the kind of legal breathing room they have been seeking for years.
The second major change creates an explicit legal basis for using personal data to train AI systems. The proposed new Article 88c of the GDPR would establish that processing personal data for the development and operation of AI systems or AI models qualifies as a “legitimate interest” under Article 6(1)(f) of the GDPR. This means companies would no longer need to obtain consent to use personal data for AI training, provided they can demonstrate the processing is necessary, proportionate, and not overridden by the interests of data subjects. Data subjects would retain an unconditional right to object, and companies would need to apply data minimisation measures, but the burden of proof effectively shifts. Rather than asking permission, companies train first and handle objections later.
The EDPB itself noted, somewhat dryly, that this provision is “unnecessary” because the Board had already published guidance confirming that legitimate interest could, in appropriate circumstances, serve as a lawful basis for AI training. The difference, of course, is between regulatory guidance that preserves the balancing test and a statutory provision that tilts the scales toward commercial use.
Third, the Omnibus restructures the relationship between the ePrivacy Directive and the GDPR in ways that affect every internet user. Rules governing access to terminal equipment, including cookies and tracking technologies, are moved from the ePrivacy Directive to the GDPR where personal data is processed. The ePrivacy Directive would no longer govern personal data processing; the GDPR alone would apply. The proposals expand the circumstances under which data can be stored on or accessed from a user's device without consent, including for “aggregated audience measuring” and device security. While the Commission frames these changes as addressing “cookie consent fatigue” (introducing requirements for single-click refusal, six-month moratoriums on repeat consent requests, and machine-readable preference signalling through browsers), civil society groups warn that weakening the ePrivacy framework removes one of the few clear rules preventing companies and governments from constantly tracking what people do on their devices, in their cars, and in their smart home systems.
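To make the “machine-readable preference signalling” idea concrete, here is a minimal server-side sketch of how such a signal might be honoured. The Omnibus text does not prescribe a specific mechanism; this example is modelled on the existing Global Privacy Control convention (the `Sec-GPC` HTTP header), and the precedence policy shown (browser signal first, stored consent second, no tracking by default) is an illustrative assumption, not a statement of what the proposal requires.

```python
# Sketch: resolving a tracking decision from a browser-level preference
# signal, modelled on the Global Privacy Control "Sec-GPC" header.
# The header name is real; the precedence policy here is illustrative.
from dataclasses import dataclass


@dataclass
class ConsentDecision:
    tracking_allowed: bool
    reason: str


def resolve_consent(headers: dict, stored_consent) -> ConsentDecision:
    """Check a standing browser opt-out signal before any stored consent."""
    # A value of "1" signals the user's standing refusal of tracking,
    # which a signal-respecting service honours over any stored record.
    if headers.get("Sec-GPC") == "1":
        return ConsentDecision(False, "browser preference signal: opt-out")
    if stored_consent is None:
        # No signal and no record: default to no tracking until the
        # user makes an explicit choice.
        return ConsentDecision(False, "no consent recorded")
    return ConsentDecision(bool(stored_consent), "stored consent record")


if __name__ == "__main__":
    # The browser signal overrides a previously stored opt-in.
    print(resolve_consent({"Sec-GPC": "1"}, True))
```

The design point the sketch illustrates is the one advocates make about signalling: a single standing preference, read automatically on every request, replaces the repeated per-site consent dialogues that the Commission describes as “consent fatigue.”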
Fourth, on the AI Act side, the Omnibus proposes to delay the implementation of rules for high-risk AI systems, which were originally due to take effect in August 2026. The new timeline allows a maximum 16-month extension, with backstop compliance dates of 2 December 2027 and 2 August 2028 depending on the category of high-risk system. The rationale is that the Commission wants to ensure “adequate compliance support” is available before obligations kick in. Critics see a straightforward concession to industry: more time to deploy AI systems without the guardrails that the AI Act was specifically designed to impose. In practical terms, it means that AI systems used in hiring, credit scoring, law enforcement, and migration management will operate for years longer without the mandatory risk assessments and transparency requirements that were supposed to protect people from algorithmic harm.
The Omnibus also introduces a new provision permitting the processing of special categories of personal data (including biometric data, data revealing racial or ethnic origin, and health data) for bias detection and correction in high-risk AI systems. While bias detection is a legitimate and important goal, civil society organisations have raised concerns about creating explicit statutory routes for processing the most sensitive categories of personal data in AI contexts, arguing it could be exploited well beyond its stated purpose.
Finally, the breach notification framework is softened. The timeframe for notifying data protection authorities of personal data breaches is extended from 72 hours to 96 hours, and only breaches likely to result in “high risk” to data subjects would require notification. This is the kind of change that, in isolation, might seem reasonable. Taken alongside everything else, it forms part of a pattern: a consistent loosening of obligations that, cumulatively, transforms the character of the entire regulatory regime.
Following the Money, Article by Article
If the Digital Omnibus were purely a good-faith attempt at regulatory streamlining, its provisions would be expected to reflect the concerns of the broadest possible range of stakeholders: businesses of all sizes, civil society, data protection authorities, consumers, and affected communities. What Corporate Europe Observatory and LobbyControl found, in their analysis published in January 2026, tells a different story.
Their article-by-article comparison of the Digital Omnibus proposals with lobbying documents submitted by Google, Meta, Microsoft, and major technology trade associations reveals what they describe as a close alignment between the Commission's text and Big Tech's longstanding policy demands. The narrowing of the personal data definition, the legitimate interest basis for AI training, the weakening of ePrivacy protections, the delays to high-risk AI obligations: each of these changes corresponds to specific asks documented in corporate lobbying materials.
One particularly striking example involves Google. In a lobbying paper dated 16 August 2025, directed at the German government, Google called for the introduction of a “disproportionate efforts” exemption to compliance. This language subsequently appeared in the Omnibus proposals, which require companies to remove personal data from AI systems only if doing so does not require “disproportionate efforts,” a term that remains undefined and, critics argue, open to systematic abuse by the very companies with the deepest pockets and most sophisticated legal teams.
Documents obtained by Corporate Europe Observatory also show that Google and Microsoft conducted a concerted and successful lobbying effort to remove “large-scale, illegal discrimination” from the list of systemic risks in the AI Code of Practice, a voluntary framework that was meant to guide responsible AI deployment even before the AI Act's binding provisions took effect.
The scale of the lobbying operation is staggering. According to Corporate Europe Observatory's research, published in October 2025, the technology industry's spending on EU lobbying reached a record 151 million euros, with just ten companies accounting for 49 million euros of that total. Meta led the pack at 10 million euros, followed by Microsoft, Apple, and Amazon at 7 million euros each, and Google and Qualcomm at 4.5 million euros each. In the first half of 2025 alone, Big Tech companies held 146 meetings with high-level European Commission staff, an average of more than one meeting for every working day. Amazon logged 43 meetings, Microsoft 36, Google 35, Apple 29, and Meta 27.
The revolving door between industry and the institutions meant to regulate it adds another layer of concern. In February 2026, MEP Aura Salla of the European People's Party was appointed as the European Parliament's rapporteur for the Digital Omnibus. Salla served as Meta's Public Policy Director and Head of EU Affairs from May 2020 to April 2023. Seven civil society watchdog organisations, including Transparency International EU, Corporate Europe Observatory, and The Good Lobby, called for the withdrawal of her appointment, noting that she had failed to declare her previous work at Meta as a potential conflict of interest in her formal declaration of awareness, as required by Article 3 of the Code of Conduct. She had also met with her former employer multiple times since taking office, including lobby meetings in September 2024 and January 2025. Separately, in April 2025, Salla sold stocks in a defence company following reporting by Follow The Money, stocks she had never reported in her declaration of private interests.
Death by a Thousand Cuts
The privacy advocacy organisation noyb, founded by the Austrian lawyer and activist Max Schrems, has described the Digital Omnibus as “death by a thousand cuts” for the GDPR. The characterisation captures something important about the strategy at work. No single amendment in the package is necessarily fatal to the European data protection framework. Each can be individually rationalised. Taken together, they represent a fundamental reorientation of the relationship between citizens and the companies that harvest their data.
Noyb has been particularly critical of the procedural dimension. Rather than waiting for the originally planned “Digital Fitness Check” scheduled for 2026, which would have involved systematic evidence gathering and impact assessment, the Commission pushed the Omnibus through what noyb describes as a “fast track” procedure, one whose legislative shortcuts bypassed the normal consultative process and sidelined concerns from organisations acting in the public interest. The result, noyb argues, is a set of proposals that massively lower protections for Europeans while providing “basically no real benefit for average European small and medium businesses.” The changes, in noyb's analysis, are “a gift to US big tech” that open up numerous new loopholes.
A noyb-conducted survey of data protection professionals reinforced this critique, revealing what noyb described as “an enormous gap between the needs of real people working on compliance every day and the problems pushed by the Brussels lobby bubble.” Compliance professionals, it turned out, wanted less paperwork, not fewer rights. The Commission's proposals delivered the opposite: they reduced substantive protections while doing relatively little to simplify the administrative burden that actual practitioners find most burdensome.
The EDPB and EDPS, in their Joint Opinion, echoed many of these concerns while maintaining a more measured tone. They expressed support for certain specific proposals, including the extension of breach notification timelines and targeted changes to data protection impact assessment requirements. But on the most consequential amendments, including the narrowing of the personal data definition and the restructuring of lawful bases for AI training, they raised serious objections. Their overall assessment was that the proposals “may adversely affect the level of protection enjoyed by individuals, create legal uncertainty, and make data protection law more difficult to apply.” Coming from the EU's own data protection authorities, this was a remarkable intervention, a polite but unmistakable warning that the Commission's own watchdogs considered its proposals harmful.
The leaked drafts of the Omnibus generated strong opposition in the European Parliament, particularly from the Social Democrats (S&D), Renew Europe, and the Greens. But the political dynamics are complex. The European People's Party, the largest group in Parliament, has broadly supported the Commission's competitiveness agenda, and the appointment of Aura Salla as rapporteur signals the direction of travel in the Parliament's Industry, Research and Energy (ITRE) committee.
The Global Ripple Effect
The implications of the Digital Omnibus extend far beyond Europe's borders. The GDPR's influence on global privacy regulation has been one of the most consequential developments in international law over the past decade. More than 150 countries have adopted domestic privacy laws that resemble the GDPR in some form, drawn by the regulation's extraterritorial reach and by the mechanism of “adequacy decisions,” through which the European Commission certifies that a third country's data protection framework provides sufficient protection to allow data transfers from the EU. Countries seeking adequacy status have had powerful incentives to align their domestic laws with European standards. If those European standards are weakened, the entire global architecture shifts.
The timing is particularly significant. The United States, under the Trump administration's December 2025 executive order, has moved toward what it describes as a “minimally burdensome national standard for AI policy,” explicitly seeking to limit state-level regulatory divergence and create a more permissive environment for AI development. Three new comprehensive state privacy laws, in Indiana, Kentucky, and Rhode Island, took effect on 1 January 2026, but these state-level efforts exist in a federal vacuum that the executive order is designed to fill with minimal regulatory ambition. The United Kingdom, having departed the EU, enacted its Data Use and Access Act (DUAA) in June 2025, which expands the circumstances for automated decision-making, broadens the definition of “scientific research” to include commercial research, and allows broader consent mechanisms for data processing, with many provisions coming into force in early 2026. Both the US and UK approaches prioritise innovation and economic growth over the precautionary, rights-based model that has defined European regulation.
If Europe now follows the same trajectory, converging toward a lighter-touch regime in the name of competitiveness, the question becomes: who is left to champion rights-based governance?
One potential answer comes from the Global South. India hosted the AI Impact Summit in February 2026, the first time this global governance forum was held outside the developed world. Ninety-one countries and international organisations adopted the AI Impact Summit Declaration, which notably shifted the framing from “risk” (the language of previous summits in Bletchley, Seoul, and Paris) to “impact.” India's IndiaAI mission has deployed a national “common compute” pool of more than 34,000 publicly funded GPUs, seeking to democratise access to AI infrastructure for startups, researchers, and public sector innovators. The United Nations has opened a consultation on AI governance with an April 2026 deadline, seeking input that could shape a global framework.
But the capacity of Global South nations to fill a governance vacuum left by Europe is constrained by the same structural inequalities that shape the AI landscape itself: limited compute infrastructure, dependence on Western and Chinese platforms, and the persistent influence of adequacy mechanisms that tie data flows to European standards, even as those standards erode. Success in addressing AI governance from the Global South depends on three critical issues, as analysts at the Brookings Institution have noted: infrastructure access, governance influence, and local adaptation. Countries lacking compute capacity, energy grids, and connectivity cannot build their own models or process their own data domestically, leaving them reliant on the very corporations whose influence the GDPR was designed to check.
As the Information Technology and Innovation Foundation has argued (from a position sympathetic to deregulation), the Brussels Effect can constrain Global South innovation by imposing compliance costs on countries that lack the institutional capacity to bear them. The irony is that weakening GDPR standards might simultaneously reduce the compliance burden and remove the normative floor that gave smaller nations a template for protecting their citizens' rights. It is a double bind with no easy resolution.
The Deeper Question of Durability
What the Digital Omnibus reveals is not simply a policy debate about the optimal balance between privacy and innovation. It exposes a structural vulnerability in rights-based governance itself. Digital rights frameworks are politically expensive to create and politically cheap to dismantle. The GDPR took years of negotiation, involved thousands of stakeholders, and required sustained political will to overcome industry opposition. The AI Act endured an even more fraught legislative process, with real-time lobbying battles over the regulation of foundation models, biometric surveillance, and high-risk applications.
Dismantling these protections requires no comparable effort. A single omnibus proposal, framed in the anodyne language of “simplification” and “competitiveness,” can undo years of democratic deliberation in a legislative session. The asymmetry is inherent: concentrated corporate interests can sustain lobbying pressure indefinitely, while the diffuse public interest in privacy and algorithmic accountability lacks a permanent, well-funded constituency to defend it. Big Tech companies are spending as much as 550 billion US dollars in 2026 to dominate the AI market, according to Corporate Europe Observatory's estimates. Against that scale of capital deployment, the resources available to civil society watchdogs are negligible.
This dynamic is compounded by the geopolitical pressure that European policymakers face. The AI race between the United States and China is often framed as an existential competition in which regulatory overhead is a strategic disadvantage. The Draghi Report explicitly invoked this framing, and Commission President von der Leyen has repeatedly emphasised the need for Europe to “keep pace” with its geopolitical rivals. In this environment, rights-based regulation is perpetually on the defensive, required to justify its existence in economic terms rather than being valued as a democratic achievement in its own right.
Amnesty International's April 2026 analysis connects the Digital Omnibus to a broader pattern of democratic backsliding on digital rights. The organisation's research has documented how platform algorithms contributed to ethnic cleansing against Rohingya Muslims in Myanmar and grave human rights abuses against Tigrayan people in Ethiopia, with Meta failing to moderate, and in some instances actively amplifying, harmful and discriminatory content. The weakening of the DSA and DMA, which have also been mentioned as potential targets for simplification, would reduce the already limited tools available to hold platforms accountable for these harms. EDRi has warned that this deregulatory political moment is likely to spill over into upcoming legislation, including the Digital Fairness Act expected later in 2026, a law meant to modernise consumer protection for the digital age and tackle manipulative design practices.
The appointment of Aura Salla as rapporteur, the record lobbying expenditures, the secretive meetings between Commission officials and industry representatives (documented by Corporate Europe Observatory in a November 2025 report on the Commission's pre-proposal consultations), the fast-tracking of legislation without proper impact assessment: these are not aberrations in an otherwise healthy democratic process. They are symptoms of a regulatory capture that civil society organisations have been warning about for years.
Where This Leaves Us
The Digital Omnibus is still moving through the ordinary legislative procedure. The European Parliament and the Council must both approve the proposals before they become law, and adoption is not expected before mid-to-late 2026 at the earliest. There is still time for amendments, and the opposition from data protection authorities, civil society, and significant parliamentary blocs suggests the final text may differ substantially from the Commission's proposal.
But the direction of travel is clear. Even if the most controversial provisions are modified or removed, the political consensus that produced the GDPR and the AI Act has fractured. The forces pushing for deregulation, supercharged by record lobbying spending, a sympathetic Commission leadership, and a geopolitical environment that privileges speed over safety, are not going away. The 127 civil society organisations that signed the open letter demanding the Commission halt the Omnibus are fighting a defensive battle, and they know it.
The consequences extend beyond any single piece of legislation. If Europe retreats from its position as the global standard-bearer for digital rights, the vacuum will not remain empty. It will be filled by regulatory models that prioritise corporate freedom over individual protection, by voluntary industry codes that lack enforcement mechanisms, and by a fragmented global landscape in which the most powerful technology companies operate with minimal democratic oversight. The “Brussels Effect” works in reverse, too: when the standard-setter lowers its standards, the floor drops for everyone.
What is at stake in the Digital Omnibus is not merely the future of European data protection. It is whether democratic societies possess the institutional resilience to maintain rights-based governance of powerful technologies in the face of sustained commercial pressure. The evidence so far is not encouraging. But the fight is not over, and its outcome will shape digital governance for a generation.
References and Sources
European Commission, “Digital Package: Simplification of EU Digital Rules,” published 19 November 2025. Available at: https://digital-strategy.ec.europa.eu/en/faqs/digital-package
Amnesty International, “EU Simplification: Throwing Human Rights Under the Omnibus,” published 19 November 2025. Available at: https://www.amnesty.org/en/latest/news/2025/11/eu-simplification-throwing-human-rights-under-the-omnibus/
Amnesty International, “EU: Digital Omnibus Proposals Will Tear Apart Accountability on Digital Rights,” published November 2025. Available at: https://www.amnesty.org/en/latest/news/2025/11/eu-digital-omnibus-proposals-will-tear-apart-accountability-on-digital-rights/
Amnesty International, “How EU Proposals to 'Simplify' Tech Laws Will Roll Back Our Rights,” published April 2026. Available at: https://www.amnesty.org/en/latest/news/2026/04/eu-simplification-laws/
Corporate Europe Observatory and LobbyControl, “Article by Article, How Big Tech Shaped the EU's Roll-back of Digital Rights,” published 14 January 2026. Available at: https://corporateeurope.org/en/2026/01/article-article-how-big-tech-shaped-eus-roll-back-digital-rights
Corporate Europe Observatory, “Revealed: Tech Industry Now Spending Record 151 Million Euros on Lobbying the EU,” published October 2025. Available at: https://corporateeurope.org/en/2025/10/revealed-tech-industry-now-spending-record-eu151-million-lobbying-eu
Corporate Europe Observatory, “Preparing a Roll-back of Digital Rights: Commission's Secretive Meetings with Industry,” published November 2025. Available at: https://corporateeurope.org/en/2025/11/preparing-roll-back-digital-rights-commissions-secretive-meetings-industry
European Digital Rights (EDRi), “Commission's Digital Omnibus is a Major Rollback of EU Digital Protections,” published 2025. Available at: https://edri.org/our-work/commissions-digital-omnibus-is-a-major-rollback-of-eu-digital-protections/
EDRi, “Forthcoming Digital Omnibus Would Mark Point of No Return,” published 2025. Available at: https://edri.org/our-work/forthcoming-digital-omnibus-would-mark-point-of-no-return/
EDPB and EDPS, “Joint Opinion 2/2026 on the Proposal for a Regulation (Digital Omnibus),” published February 2026. Available at: https://www.edpb.europa.eu/system/files/2026-02/edpb_edps_jointopinion_202602_digitalomnibus_en.pdf
noyb, “Digital Omnibus: EU Commission Wants to Wreck Core GDPR Principles,” published 2025. Available at: https://noyb.eu/en/digital-omnibus-eu-commission-wants-wreck-core-gdpr-principles
noyb, “Open Letter: Digital Omnibus Brings Deregulation, Not Simplification,” published 2025. Available at: https://noyb.eu/en/open-letter-digital-omnibus-brings-deregulation-not-simplification
People vs Big Tech, “'Stop the Digital Omnibus,' Say 127 Civil Society Organisations,” published 2025. Available at: https://peoplevsbig.tech/the-eu-must-uphold-hard-won-protections-for-digital-human-rights/
Mario Draghi, “The Future of European Competitiveness” (Draghi Report), commissioned by European Commission President Ursula von der Leyen, published September 2024. Available at: https://commission.europa.eu/topics/competitiveness/draghi-report_en
European Parliament, “Simplifying EU Digital Laws for Competitiveness,” published November 2025. Available at: https://epthinktank.eu/2025/11/20/simplifying-eu-digital-laws-for-competitiveness/
Transparency International EU, “Call to Withdraw European Parliament's Digital Omnibus Rapporteur Appointment,” published February 2026. Available at: https://transparency.eu/call-to-withdraw-european-parliaments-digital-omnibus-rapporteur-appointment/
Corporate Europe Observatory, “Watchdog Organisations Issue Call to Withdraw Aura Salla's Appointment as Digital Omnibus Rapporteur,” published February 2026. Available at: https://corporateeurope.org/en/2026/02/watchdog-organisations-issue-call-withdraw-aura-sallas-appointment-digital-omnibus
White and Case LLP, “GDPR Under Revision: Key Takeaways from the Digital Omnibus Regulation Proposal,” published 2025. Available at: https://www.whitecase.com/insight-alert/gdpr-under-revision-key-takeaways-from-digital-omnibus-regulation-proposal
IAPP, “EU Digital Omnibus: Analysis of Key Changes,” published 2025. Available at: https://iapp.org/news/a/eu-digital-omnibus-analysis-of-key-changes
Bruegel, “Efficiency and Distribution in the European Union's Digital Deregulation Push,” published 2025. Available at: https://www.bruegel.org/policy-brief/efficiency-and-distribution-european-unions-digital-deregulation-push
ITIF, “How the Brussels Effect Hinders Innovation in the Global South,” published January 2026. Available at: https://itif.org/publications/2026/01/26/how-brussels-effect-hinders-innovation-in-global-south/
The Record from Recorded Future News, “Civil Society Decries Digital Rights 'Rollback' as European Commission Pushes Data Protection Changes,” published 2025. Available at: https://therecord.media/civil-society-privacy-rollback
Brookings Institution, “AI in the Global South: Opportunities and Challenges Towards More Inclusive Governance,” published 2025. Available at: https://www.brookings.edu/articles/ai-in-the-global-south-opportunities-and-challenges-towards-more-inclusive-governance/
EDPB and EDPS, “Digital Omnibus: EDPB and EDPS Support Simplification and Competitiveness While Raising Key Concerns,” published February 2026. Available at: https://www.edpb.europa.eu/news/news/2026/digital-omnibus-edpb-and-edps-support-simplification-and-competitiveness-while_en

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk