The Great Cloud Escape: Why New Rules in the EU Won't Free SMEs
On a grey September morning in Brussels, as the EU Data Act's cloud-switching provisions officially took effect, a peculiar thing happened: nothing. No mass exodus from hyperscalers. No sudden surge of SMEs racing to switch providers. No triumphant declarations of cloud independence. Instead, across Europe's digital economy, millions of small and medium enterprises remained exactly where they were—locked into the same cloud platforms they'd been using, running the same AI workloads, paying the same bills.
The silence was deafening, and it spoke volumes about the gap between regulatory ambition and technical reality.
The European Union had just unleashed what many called the most aggressive cloud portability legislation in history. After years of complaints about vendor lock-in, eye-watering egress fees, and the monopolistic practices of American tech giants, Brussels had finally acted. The Data Act's cloud-switching rules, which came into force on 12 September 2025, promised to liberate European businesses from the iron grip of AWS, Microsoft Azure, and Google Cloud. Hyperscalers would be forced to make switching providers as simple as changing mobile phone operators. Data egress fees—those notorious “hotel California” charges that let you check in but made leaving prohibitively expensive—would be abolished entirely by 2027.
Yet here we are, months into this brave new world of mandated cloud portability, and the revolution hasn't materialised. The hyperscalers, in a masterclass of regulatory jujitsu, had already eliminated egress fees months before the rules took effect—but only for customers who completely abandoned their platforms. Meanwhile, the real barriers to switching remained stubbornly intact: proprietary APIs that wouldn't translate, AI models trained on NVIDIA's CUDA that couldn't run anywhere else, and contractual quicksand that made leaving technically possible but economically suicidal.
For Europe's six million SMEs, particularly those betting their futures on artificial intelligence, the promise of cloud freedom has collided with a harsh reality: you can legislate away egress fees, but you can't regulate away the fundamental physics of vendor lock-in. And nowhere is this more apparent than in the realm of AI workloads, where the technical dependencies run so deep that switching providers isn't just expensive—it's often impossible.
The Brussels Bombshell
To understand why the EU Data Act's cloud provisions represent both a watershed moment and a potential disappointment, you need to grasp the scale of ambition behind them. This wasn't just another piece of tech regulation from Brussels—it was a frontal assault on the business model that had made American cloud providers the most valuable companies on Earth.
The numbers tell the story of why Europe felt compelled to act. By 2024, AWS and Microsoft Azure each controlled nearly 40 per cent of the European cloud market, with Google claiming another 12 per cent. Together, these three American companies held over 90 per cent of Europe's cloud infrastructure—a level of market concentration that would have been unthinkable in any other strategic industry. For comparison, imagine if 90 per cent of Europe's electricity or telecommunications infrastructure were controlled by three American companies.
The dependency went deeper than market share. By 2024, European businesses were spending over €50 billion annually on cloud services, with that figure growing at 20 per cent year-on-year. Every startup, every digital transformation initiative, every AI experiment was being built on American infrastructure, using American tools, generating American profits. For a continent that prided itself on regulatory sovereignty and had already taken on Big Tech with GDPR, this was an intolerable situation.
The Data Act's cloud provisions, buried in Articles 23 through 31 of the regulation, were surgical in their precision. They mandated that cloud providers must remove all “pre-commercial, commercial, technical, contractual, and organisational” barriers to switching. Customers would have the right to switch providers with just two months' notice, and the actual transition had to be completed within 30 days. Providers would be required to offer open, documented APIs and support data export in “structured, commonly used, and machine-readable formats.”
Most dramatically, the Act set a ticking clock on egress fees. During a transition period lasting until January 2027, providers could charge only their actual costs for assisting with switches. After that date, all switching charges—including the infamous data egress fees—would be completely prohibited, with only narrow exceptions for ongoing multi-cloud deployments.
The penalties for non-compliance were vintage Brussels: up to 4 per cent of global annual turnover, the same nuclear option that had given GDPR its teeth. For companies like Amazon and Microsoft, each generating over $200 billion in annual revenue, that meant potential fines measured in billions of euros.
On paper, it was a masterpiece of market intervention. The EU had identified a clear market failure—vendor lock-in was preventing competition and innovation—and had crafted rules to address it. Cloud switching would become as frictionless as switching mobile operators or banks. European SMEs would be free to shop around, driving competition, innovation, and lower prices.
But regulations written in Brussels meeting rooms rarely survive contact with the messy reality of enterprise IT. And nowhere was this gap between theory and practice wider than in the hyperscalers' response to the new rules.
The Hyperscaler Gambit
In January 2024, eight months before the Data Act's cloud provisions would take effect, Google Cloud fired the first shot in what would become a fascinating game of regulatory chess. The company announced it was eliminating all data egress fees for customers leaving its platform—not in 2027 as the EU required, but immediately.
“We believe in customer choice, including the choice to move your data out of Google Cloud,” the announcement read, wrapped in the language of customer empowerment. Within weeks, AWS and Microsoft Azure had followed suit, each proclaiming their commitment to cloud portability and customer freedom.
To casual observers, it looked like the EU had won before the fight even began. The hyperscalers were capitulating, eliminating egress fees years ahead of schedule. European regulators claimed victory. The tech press hailed a new era of cloud competition.
But dig deeper into these announcements, and a different picture emerges—one of strategic brilliance rather than regulatory surrender.
Take AWS's offer, announced in March 2024. Yes, they would waive egress fees for customers leaving the platform. But the conditions revealed the catch: customers had to completely close their AWS accounts within 60 days, removing all data and terminating all services. There would be no gradual migration, no testing the waters with another provider, no hybrid strategy. It was all or nothing.
Microsoft's Azure took a similar approach but added another twist: customers needed to actively apply for egress fee credits, which would only be applied after they had completely terminated their Azure subscriptions. The process required submitting a formal request, waiting for approval, and completing the entire migration within 60 days.
Google Cloud, despite being first to announce, imposed perhaps the most restrictive conditions. Customers needed explicit approval before beginning their migration, had to close their accounts completely, and faced “additional scrutiny” if they made repeated requests to leave the platform—a provision that seemed designed to prevent customers from using the free egress offer to simply backup their data elsewhere.
These weren't concessions—they were carefully calibrated responses that achieved multiple strategic objectives. First, by eliminating egress fees voluntarily, the hyperscalers could claim they were already compliant with the spirit of the Data Act, potentially heading off more aggressive regulatory intervention. Second, by making the free egress conditional on complete account termination, they ensured that few customers would actually use it. Multi-cloud strategies, hybrid deployments, or gradual migrations—the approaches that most enterprises actually need—remained as expensive as ever.
The numbers bear this out. Despite the elimination of egress fees, cloud switching rates in Europe barely budged in 2024. According to industry analysts, less than 3 per cent of enterprise workloads moved between major cloud providers, roughly the same rate as before the announcements. The hyperscalers had given away something that almost nobody actually wanted—free egress for complete platform abandonment—while keeping their real lock-in mechanisms intact.
But the true genius of the hyperscaler response went beyond these tactical manoeuvres. By focusing public attention on egress fees, they had successfully framed the entire debate around data transfer costs. Missing from the discussion were the dozens of other barriers that made cloud switching virtually impossible for most organisations, particularly those running AI workloads.
The SME Reality Check
To understand why the EU Data Act's promise of cloud portability rings hollow for most SMEs, consider the story of a typical European company trying to navigate the modern cloud landscape. Let's call them TechCo, a 50-person fintech startup based in Amsterdam, though their story could belong to any of the thousands of SMEs across Europe wrestling with similar challenges.
TechCo had built their entire platform on AWS starting in 2021, attracted by generous startup credits and the promise of infinite scalability. By 2024, they were spending €40,000 monthly on cloud services, with their costs growing 30 per cent annually. Their infrastructure included everything from basic compute and storage to sophisticated AI services: SageMaker for machine learning, Comprehend for natural language processing, and Rekognition for identity verification.
When the Data Act's provisions kicked in and egress fees were eliminated, TechCo's CTO saw an opportunity. Azure was offering aggressive pricing for AI workloads, potentially saving them 25 per cent on their annual cloud spend. With egress fees gone, surely switching would be straightforward?
The first reality check came when they audited their infrastructure. Over three years, they had accumulated dependencies on 47 different AWS services. Their application code contained over 10,000 calls to AWS-specific APIs. Their data pipeline relied on AWS Glue for ETL, their authentication used AWS Cognito, their message queuing ran on SQS, and their serverless functions were built on Lambda. Each of these services would need to be replaced, recoded, and retested on Azure equivalents—assuming equivalents even existed.
The AI workloads presented even bigger challenges. Their fraud detection models had been trained using SageMaker, with training data stored in S3 buckets organised in AWS-specific formats. The models themselves were optimised for AWS's instance types and used proprietary SageMaker features for deployment and monitoring. Moving to Azure wouldn't just mean transferring data—it would mean retraining models, rebuilding pipelines, and potentially seeing different results due to variations in how each platform handled machine learning workflows.
Then came the hidden costs that no regulation could address. TechCo's engineering team had spent three years becoming AWS experts. They knew every quirk of EC2 instances, every optimisation trick for DynamoDB, every cost-saving hack for S3 storage. Moving to Azure would mean retraining the entire team, with productivity dropping significantly during the transition. Industry estimates suggested a 40 per cent productivity loss for at least six months—a devastating blow for a startup trying to compete in the fast-moving fintech space.
The contractual landscape added another layer of complexity. TechCo had signed a three-year Enterprise Discount Programme with AWS in 2023, committing to minimum spend levels in exchange for significant discounts. Breaking this agreement would not only forfeit their discounts but potentially trigger penalty clauses. They had also purchased Reserved Instances for their core infrastructure, representing prepaid capacity that couldn't be transferred to another provider.
But perhaps the most insidious lock-in came from their customers. TechCo's enterprise clients had undergone extensive security reviews of their AWS infrastructure, with some requiring specific compliance certifications that were AWS-specific. Moving to Azure would trigger new security assessments that could take months, during which major clients might suspend their contracts.
After six weeks of analysis, TechCo's conclusion was stark: switching to Azure would cost approximately €800,000 in direct migration costs, cause at least €1.2 million in lost productivity, and risk relationships with clients worth €5 million annually. The 25 per cent savings on cloud costs—roughly €120,000 per year—would take over 16 years to pay back the roughly €2 million migration investment (€2,000,000 ÷ €120,000 ≈ 16.7 years), assuming nothing went wrong.
TechCo's story isn't unique. Across Europe, SMEs are discovering that egress fees were never the real barrier to cloud switching. The true lock-in comes from a web of technical dependencies, human capital investments, and business relationships that no regulation can easily unpick.
A 2024 survey of European SMEs found that 80 per cent had experienced unexpected costs or budget overruns related to cloud services, with most citing the complexity of migration as their primary reason for staying with incumbent providers. Despite the Data Act's provisions, 73 per cent of SMEs reported feeling “locked in” to their current cloud provider, with only 12 per cent actively considering a switch in the next 12 months.
The situation is particularly acute for companies that have embraced cloud-native architectures. The more deeply integrated a company becomes with their cloud provider's services—using managed databases, serverless functions, and AI services—the harder it becomes to leave. It's a cruel irony: the companies that have most fully embraced the cloud's promise of innovation and agility are also the most trapped by vendor lock-in.
The Hidden Friction
While politicians and regulators focused on egress fees and contract terms, the real barriers to cloud portability were multiplying in the technical layer—a byzantine maze of incompatible APIs, proprietary services, and architectural dependencies that made switching providers functionally impossible for complex workloads.
Consider the fundamental challenge of API incompatibility. AWS offers over 200 distinct services, each with its own API. Azure provides a similarly vast catalogue, as does Google Cloud. But despite performing similar functions, these APIs are utterly incompatible. An application calling AWS's S3 API to store data can't simply point those same calls at Azure Blob Storage. Every single API call—and large applications might have tens of thousands—needs to be rewritten, tested, and optimised for the new platform.
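To see how total the incompatibility is, compare the same logical operation, storing a single object, written against each platform's official Python SDK. A minimal sketch; the bucket, container, and connection-string values are placeholders:

```python
import boto3
from azure.storage.blob import BlobServiceClient

data = b'{"invoice_id": 42}'

# AWS: credentials come from the environment; objects live at bucket/key.
s3 = boto3.client("s3")
s3.put_object(Bucket="invoices-prod", Key="2025/042.json", Body=data)

# Azure: connection-string authentication; objects live at container/blob,
# reached through a different client hierarchy with different semantics.
service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="invoices-prod", blob="2025/042.json")
blob.upload_blob(data, overwrite=True)
```

Neither snippet is exotic, but nothing maps one onto the other automatically. Multiply the difference by every storage, queue, and database call in a codebase and the scale of a rewrite becomes clear.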
The problem compounds when you consider managed services. AWS's DynamoDB, Azure's Cosmos DB, and Google's Firestore are all NoSQL databases, but they operate on fundamentally different principles. DynamoDB uses a key-value model with specific concepts like partition keys and sort keys. Cosmos DB offers multiple APIs including SQL, MongoDB, and Cassandra compatibility. Firestore structures data as documents in collections. Migrating between them isn't just a matter of moving data—it requires rearchitecting how applications think about data storage and retrieval.
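The divergence shows up the moment you write a record. A sketch with illustrative table, collection, and field names:

```python
import boto3
from google.cloud import firestore

# DynamoDB: a flat item addressed by a partition key and a sort key.
orders = boto3.resource("dynamodb").Table("orders")
orders.put_item(Item={
    "customer_id": "cust-91",    # partition key
    "order_date": "2025-09-12",  # sort key
    "total": 129,
})

# Firestore: the same fact modelled as a document nested inside collections.
db = firestore.Client()
db.collection("customers").document("cust-91") \
  .collection("orders").document("2025-09-12") \
  .set({"total": 129})
```

The access patterns, query capabilities, and consistency behaviour that follow from these two shapes differ enough that the application's data layer usually has to be redesigned, not merely ported.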
Serverless computing adds another layer of lock-in. AWS Lambda, Azure Functions, and Google Cloud Functions all promise to run code without managing servers, but each has unique triggers, execution environments, and limitations. A Lambda function triggered by an S3 upload event can't be simply copied to Azure—the entire event model is different. Cold start behaviours vary. Timeout limits differ. Memory and CPU allocations work differently. What seems like portable code becomes deeply platform-specific the moment it's deployed.
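Side by side, the two event models look like this. A sketch only: the Azure container path and connection setting are illustrative, and each platform also needs its own trigger and deployment configuration outside the code.

```python
# AWS Lambda: the trigger is wired up outside the code, and the handler
# receives an S3-specific event dictionary.
def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"processing s3://{bucket}/{key}")


# Azure Functions (v2 Python model): the trigger is declared in a decorator
# and the payload arrives as a typed stream, not an event dictionary.
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob", path="uploads/{name}",
                  connection="AzureWebJobsStorage")
def process_upload(blob: func.InputStream):
    print(f"processing {blob.name}, {blob.length} bytes")
```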
The networking layer presents its own challenges. Each cloud provider has developed sophisticated networking services—AWS's VPC, Azure's Virtual Network, Google's VPC—that handle routing, security, and connectivity in proprietary ways. Virtual private networks, peering connections, and security groups all need to be completely rebuilt when moving providers. For companies with complex network topologies, especially those with hybrid cloud or on-premises connections, this alone can take months of planning and execution.
Then there's the observability problem. Modern applications generate vast amounts of telemetry data—logs, metrics, traces—that feed into monitoring and alerting systems. AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite each collect and structure this data differently. Years of accumulated dashboards, alerts, and runbooks become worthless when switching providers. The institutional knowledge embedded in these observability systems—which metrics indicate problems, what thresholds trigger alerts, which patterns precede outages—has to be rebuilt from scratch.
Data gravity adds a particularly pernicious form of lock-in. Once you have petabytes of data in a cloud provider, it becomes the centre of gravity for all your operations. It's not just the cost of moving that data—though that remains significant despite waived egress fees. It's that modern data architectures assume data locality. Analytics tools, machine learning platforms, and data warehouses all perform best when they're close to the data. Moving the data means moving the entire ecosystem built around it.
The skills gap represents perhaps the most underappreciated form of technical lock-in. Cloud platforms aren't just technology stacks—they're entire ecosystems with their own best practices, design patterns, and operational philosophies. An AWS expert thinks in terms of EC2 instances, Auto Scaling groups, and CloudFormation templates. An Azure expert works with Virtual Machines, Virtual Machine Scale Sets, and ARM templates. These aren't just different names for the same concepts—they represent fundamentally different approaches to cloud architecture.
For SMEs, this creates an impossible situation. They typically can't afford to maintain expertise across multiple cloud platforms. They pick one, invest in training their team, and gradually accumulate platform-specific knowledge. Switching providers doesn't just mean moving workloads—it means discarding years of accumulated expertise and starting the learning curve again.
The automation and infrastructure-as-code revolution, ironically, has made lock-in worse rather than better. Tools like Terraform promise cloud-agnostic infrastructure deployment, but in practice, most infrastructure code is highly platform-specific. AWS CloudFormation templates, Azure Resource Manager templates, and Google Cloud Deployment Manager configurations are completely incompatible. Even when using supposedly cloud-agnostic tools, the underlying resource definitions remain platform-specific.
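Terraform's HCL is its own language, but the same point is easy to show in Pulumi, an infrastructure-as-code tool with a Python SDK. In this sketch (resource names are placeholders), the supposedly portable tool still forces entirely provider-specific resource definitions:

```python
import pulumi_aws as aws
from pulumi_azure_native import resources, storage

# AWS: object storage is a single resource.
bucket = aws.s3.Bucket("invoices")

# Azure: the nearest equivalent needs a resource group, an account kind,
# and a replication SKU; the concepts do not line up one-to-one.
rg = resources.ResourceGroup("invoices-rg")
account = storage.StorageAccount(
    "invoicessa",
    resource_group_name=rg.name,
    kind="StorageV2",
    sku=storage.SkuArgs(name="Standard_LRS"),
)
```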
Security and compliance add yet another layer of complexity. Each cloud provider has its own identity and access management system, encryption methods, and compliance certifications. AWS's IAM policies don't translate to Azure's Role-Based Access Control. Key management systems are incompatible. Compliance attestations need to be renewed. For regulated industries, this means months of security reviews and audit processes just to maintain the same security posture on a new platform.
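The vocabulary mismatch is visible in the policy documents themselves. As a rough sketch (expressed here as Python dictionaries, with placeholder names and scopes), this is approximately the same permission, read access to one storage container, in each platform's native terms:

```python
# AWS IAM: an identity-based policy document.
aws_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::invoices-prod/*",
    }],
}

# Azure RBAC: a custom role definition with a different vocabulary
# (actions vs. data actions) and a different scoping model.
azure_role = {
    "Name": "Invoice Reader",
    "Actions": [],
    "DataActions": [
        "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
    ],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}
```

Nothing translates one into the other automatically; every policy, key, and attestation has to be rethought in the new provider's model.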
The AI Trap
If traditional cloud workloads are difficult to migrate, AI and machine learning workloads are nearly impossible. The technical dependencies run so deep, the ecosystem lock-in so complete, that switching providers for AI workloads often means starting over from scratch.
The problem starts with CUDA, NVIDIA's proprietary parallel computing platform that has become the de facto standard for AI development. With NVIDIA controlling roughly 90 per cent of the AI GPU market, virtually all major machine learning frameworks—TensorFlow, PyTorch, JAX—are optimised for CUDA. Training pipelines, custom kernels, and tooling built against CUDA simply won't run on other hardware without significant porting work or performance degradation.
This creates a cascading lock-in effect. AWS offers NVIDIA GPU instances, as do Azure and Google Cloud. But each provider packages these GPUs differently, with different instance types, networking configurations, and storage options. A model optimised for AWS's p4d.24xlarge instances (with 8 NVIDIA A100 GPUs) won't necessarily perform the same on Azure's Standard_ND96asr_v4 (also with 8 A100s) due to differences in CPU, memory, networking, and system architecture.
The frameworks and tools built on top of these GPUs add another layer of lock-in. AWS SageMaker, Azure Machine Learning, and Google's Vertex AI each provide managed services for training and deploying models. But they're not interchangeable platforms running the same software—they're completely different systems with unique APIs, workflow definitions, and deployment patterns.
Consider what's involved in training a large language model. On AWS, you might use SageMaker's distributed training features, store data in S3, manage experiments with SageMaker Experiments, and deploy with SageMaker Endpoints. The entire workflow is orchestrated using SageMaker Pipelines, with costs optimised using Spot Instances and monitoring through CloudWatch.
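A compressed sketch of that AWS-side workflow, using the SageMaker Python SDK; the container image, role ARN, and S3 paths are placeholders:

```python
import sagemaker
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<training-container-image>",
    role="arn:aws:iam::<account-id>:role/<sagemaker-execution-role>",
    instance_count=4,                 # distributed training across 4 machines
    instance_type="ml.p4d.24xlarge",  # 8x NVIDIA A100 GPUs per instance
    use_spot_instances=True,          # the Spot-based cost optimisation
    max_run=86400,                    # cap the training job at 24 hours
    max_wait=86400,                   # and the wait for Spot capacity
    sagemaker_session=sagemaker.Session(),
)

# Channels, estimators, execution roles: every concept here is
# SageMaker-specific, and the training data never leaves S3.
estimator.fit({"train": "s3://<bucket>/fraud/train/"})
```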
Moving this to Azure means rebuilding everything using Azure Machine Learning's completely different paradigm. Data moves to Azure Blob Storage with different access patterns. Distributed training uses Azure's different parallelisation strategies. Experiment tracking uses MLflow instead of SageMaker Experiments. Deployment happens through Azure's online endpoints with different scaling and monitoring mechanisms.
But the real killer is the data pipeline. AI workloads are voraciously data-hungry, often processing terabytes or petabytes of training data. This data needs to be continuously preprocessed, augmented, validated, and fed to training jobs. Each cloud provider has built sophisticated data pipeline services—AWS Glue, Azure Data Factory, Google Dataflow—that are completely incompatible with each other.
A financial services company training fraud detection models might have years of transaction data flowing through AWS Kinesis, processed by Lambda functions, stored in S3, catalogued in Glue, and fed to SageMaker for training. Moving to Azure doesn't just mean copying the data—it means rebuilding the entire pipeline using Event Hubs, Azure Functions, Blob Storage, Data Factory, and Azure Machine Learning. The effort involved is comparable to building the system from scratch.
The model serving infrastructure presents equal challenges. Modern AI applications don't just train models—they serve them at scale, handling millions of inference requests with millisecond latency requirements. Each cloud provider has developed sophisticated serving infrastructures with auto-scaling, A/B testing, and monitoring capabilities. AWS has SageMaker Endpoints, Azure has Managed Online Endpoints, and Google has Vertex AI Predictions. These aren't just different names for the same thing—they're fundamentally different architectures with different performance characteristics, scaling behaviours, and cost models.
Version control and experiment tracking compound the lock-in. Machine learning development is inherently experimental, with data scientists running hundreds or thousands of experiments to find optimal models. Each cloud provider's ML platform maintains this experimental history in proprietary formats. Years of accumulated experiments, with their hyperparameters, metrics, and model artefacts, become trapped in platform-specific systems.
The specialised hardware makes things even worse. As AI models have grown larger, cloud providers have developed custom silicon to accelerate training and inference. Google has its TPUs (Tensor Processing Units), AWS has Inferentia and Trainium chips, and Azure is developing its own AI accelerators. Models optimised for these custom chips achieve dramatic performance improvements but become completely non-portable.
For SMEs trying to compete in AI, this creates an impossible dilemma. They need the sophisticated tools and massive compute resources that only hyperscalers can provide, but using these tools locks them in completely. A startup that builds its AI pipeline on AWS SageMaker is making an essentially irreversible architectural decision. The cost of switching—retraining models, rebuilding pipelines, retooling operations—would likely exceed the company's entire funding.
The numbers tell the story. A 2024 survey of European AI startups found that 94 per cent were locked into a single cloud provider for their AI workloads, with 78 per cent saying switching was “technically impossible” without rebuilding from scratch. The average estimated cost of migrating AI workloads between cloud providers was 3.8 times the annual cloud spend—a prohibitive barrier for companies operating on venture capital runways.
Contract Quicksand
While the EU Data Act addresses some contractual barriers to switching, the reality of cloud contracts remains a minefield of lock-in mechanisms that survive regulatory intervention. These aren't the crude barriers of the past—excessive termination fees or explicit non-portability clauses—but sophisticated commercial arrangements that make switching economically irrational even when technically possible.
The Enterprise Discount Programme (EDP) model, used by all major cloud providers, represents the most pervasive form of contractual lock-in. Under these agreements, customers commit to minimum spend levels—typically over one to three years—in exchange for significant discounts, sometimes up to 50 per cent off list prices. Missing these commitments doesn't just mean losing discounts; it often triggers retroactive repricing, where past usage is rebilled at higher rates.
Consider a typical European SME that signs a €500,000 annual commit with AWS for a 30 per cent discount. Eighteen months in, they discover Azure would be 20 per cent cheaper for their workloads. But switching means not only forgoing the AWS discount but potentially paying back the discount already received—turning a money-saving move into a financial disaster. The Data Act doesn't prohibit these arrangements because they're framed as voluntary commercial agreements rather than switching barriers.
Reserved Instances and Committed Use Discounts add another layer of lock-in. These mechanisms, where customers prepay for cloud capacity, can reduce costs by up to 70 per cent. But they're completely non-transferable between providers. A company with €200,000 in AWS Reserved Instances has essentially prepaid for capacity they can't use elsewhere. The financial hit from abandoning these commitments often exceeds any savings from switching providers.
The credit economy creates its own form of lock-in. Cloud providers aggressively court startups with free credits—AWS Activate offers up to $100,000, Google for Startups provides up to $200,000, and Microsoft for Startups can reach $150,000. These credits come with conditions: they expire if unused, can't be transferred, and often require the startup to showcase their provider relationship. By the time credits expire, startups are deeply embedded in the provider's ecosystem.
Support contracts represent another subtle barrier. Enterprise support from major cloud providers costs tens of thousands annually but provides crucial services: 24/7 technical support, architectural reviews, and direct access to engineering teams. These contracts typically run annually, can't be prorated if cancelled early, and the accumulated knowledge from years of support interactions—documented issues, architectural recommendations, optimisation strategies—doesn't transfer to a new provider.
Marketplace commitments lock in customers through third-party software. Many enterprises commit to purchasing software through their cloud provider's marketplace to consolidate billing and count toward spending commitments. But marketplace purchases are provider-specific. A company using Databricks through AWS Marketplace can't simply move that subscription to Azure, even though Databricks runs on both platforms.
The professional services trap affects companies that use cloud providers' consulting arms for implementation. When AWS Professional Services or Microsoft Consulting Services builds a solution, they naturally use their platform's most sophisticated (and proprietary) services. The resulting architectures are so deeply platform-specific that moving to another provider means not just migration but complete re-architecture.
Service Level Agreements create switching friction through credits rather than penalties. When cloud providers fail to meet uptime commitments, they issue service credits rather than refunds. These credits accumulate over time, representing value that's lost if the customer switches providers. A company with €50,000 in accumulated credits faces a real cost to switching that no regulation addresses.
Bundle pricing makes cost comparison nearly impossible. Cloud providers increasingly bundle services—compute, storage, networking, AI services—into package deals that obscure individual service costs. A company might know they're spending €100,000 annually with AWS but have no clear way to compare that to Azure's pricing without months of detailed analysis and proof-of-concept work.
Auto-renewal clauses, while seemingly benign, create switching windows that are easy to miss. Many enterprise agreements auto-renew unless cancelled with specific notice periods, often 90 days before renewal. Miss the window, and you're locked in for another year. The Data Act requires reasonable notice periods but doesn't prohibit auto-renewal itself.
The Market Reality
As the dust settles on the Data Act's implementation, the European cloud market presents a paradox: regulations designed to increase competition have, in many ways, entrenched the dominance of existing players while creating new forms of market distortion.
The immediate winners are, surprisingly, the hyperscalers themselves. By eliminating egress fees ahead of regulatory requirements, they've positioned themselves as customer-friendly innovators rather than monopolistic gatekeepers. Their stock prices, far from suffering under regulatory pressure, have continued to climb, with cloud divisions driving record profits. AWS revenues grew 19 per cent year-on-year in 2024, Azure grew 30 per cent, and Google Cloud grew 35 per cent—hardly the numbers of companies under existential regulatory threat.
The elimination of egress fees has had an unexpected consequence: it's made multi-cloud strategies more expensive, not less. Since free egress only applies when completely leaving a provider, companies maintaining presence across multiple clouds still pay full egress rates for ongoing data transfers. This has actually discouraged the multi-cloud approaches that regulators hoped to encourage.
European cloud providers, who were supposed to benefit from increased competition, find themselves in a difficult position. Companies like OVHcloud, Scaleway, and Hetzner had hoped the Data Act would level the playing field. Instead, they're facing new compliance costs without the scale to absorb them. The requirement to provide sophisticated switching tools, maintain compatibility APIs, and ensure data portability represents a proportionally higher burden for smaller providers.
The consulting industry has emerged as an unexpected beneficiary. The complexity of cloud switching, even with regulatory support, has created a booming market for migration consultants, cloud architects, and multi-cloud specialists. Global consulting firms are reporting 40 per cent year-on-year growth in cloud migration practices, with day rates for cloud migration specialists reaching €2,000 in major European cities.
Software vendors selling cloud abstraction layers and multi-cloud management tools have seen explosive growth. Companies like HashiCorp, whose Terraform tool promises infrastructure-as-code portability, have seen their valuations soar. But these tools, while helpful, add their own layer of complexity and cost, often negating the savings that switching providers might deliver.
The venture capital ecosystem has adapted in unexpected ways. VCs now explicitly factor in cloud lock-in when evaluating startups, with some requiring portfolio companies to maintain cloud-agnostic architectures from day one. This has led to over-engineering in early-stage startups, with companies spending precious capital on portability they may never need instead of focusing on product-market fit.
Large enterprises with dedicated cloud teams have benefited most from the new regulations. They have the resources to negotiate better terms, the expertise to navigate complex migrations, and the leverage to extract concessions from providers. But this has widened the gap between large companies and SMEs, contrary to the regulation's intent of democratising cloud access.
The standardisation efforts mandated by the Data Act have proceeded slowly. The requirement for “structured, commonly used, and machine-readable formats” sounds straightforward, but defining these standards across hundreds of cloud services has proved nearly impossible. Industry bodies are years away from meaningful standards, and even then, adoption will be voluntary in practice if not in law.
Market concentration has actually increased in some segments. The complexity of compliance has driven smaller, specialised cloud providers to either exit the market or sell to larger players. The number of independent European cloud providers has decreased by 15 per cent since the Data Act was announced, with most citing regulatory complexity as a factor in their decision.
Innovation has shifted rather than accelerated. Cloud providers are investing heavily in switching tools and portability features to comply with regulations, but this investment comes at the expense of new service development. AWS delayed several new AI services to focus on compliance, while Azure redirected engineering resources from feature development to portability tools.
The SME segment, supposedly the primary beneficiary of these regulations, remains largely unchanged. The share of European SMEs using cloud services—41 per cent in 2024—has grown only marginally, and most remain on single-cloud architectures. The promise of easy switching hasn't materialised into increased cloud adoption or more aggressive price shopping.
Pricing has evolved in unexpected ways. While egress fees have disappeared, other costs have mysteriously increased. API call charges, request fees, and premium support costs have all risen by 10-15 per cent across major providers. The overall cost of cloud services continues to rise, just through different line items.
Case Studies in Frustration
The true impact of the Data Act's cloud provisions becomes clear when examining specific cases of European SMEs attempting to navigate the new landscape. These aren't hypothetical scenarios but real challenges faced by companies trying to optimise their cloud strategies in 2025.
Case 1: The FinTech That Couldn't Leave
A Berlin-based payment processing startup with 75 employees had built their platform on Google Cloud Platform starting in 2020. By 2024, they were processing €2 billion in transactions annually, with cloud costs exceeding €600,000 per year. When Azure offered them a 40 per cent discount to switch, including free migration services, it seemed like a no-brainer.
The technical audit revealed the challenge. Their core transaction processing system relied on Google's Spanner database, a globally distributed SQL database with unique consistency guarantees. No equivalent service existed on Azure. Migrating would mean either accepting lower consistency guarantees (risking financial errors) or building custom synchronisation logic (adding months of development).
Their fraud detection system used Google's AutoML to continuously retrain models based on transaction patterns. Moving to Azure meant rebuilding the entire ML pipeline using different tools, with no guarantee the models would perform identically. Even small variations in fraud detection accuracy could cost millions in losses or false positives.
The regulatory compliance added another layer. Their payment processing licence from BaFin (German financial regulator) specifically referenced their Google Cloud infrastructure in security assessments. Switching providers would trigger a full re-audit, taking 6-12 months during which they couldn't onboard new enterprise clients.
After four months of analysis and a €50,000 consulting bill, they concluded switching would cost €2.3 million in direct costs, risk €10 million in revenue during the transition, and potentially compromise their fraud detection capabilities. They remained on Google Cloud, negotiating a modest 15 per cent discount instead.
Case 2: The AI Startup Trapped by Innovation
A Copenhagen-based computer vision startup had built their product using AWS SageMaker, training models to analyse medical imaging for early disease detection. With 30 employees and €5 million in funding, they were spending €80,000 monthly on AWS, primarily on GPU instances for model training.
When Google Cloud offered them $200,000 in credits plus access to TPUs that could potentially accelerate their training by 3x, the opportunity seemed transformative. The faster training could accelerate their product development by months, a crucial advantage in the competitive medical AI space.
The migration analysis was sobering. Their training pipeline used SageMaker's distributed training features, which orchestrated work across multiple GPU instances using AWS-specific networking and storage optimisations. Recreating this on Google Cloud would require rewriting their entire training infrastructure.
Their model versioning and experiment tracking relied on SageMaker Experiments, with 18 months of experimental history including thousands of training runs. This data existed in proprietary formats that couldn't be exported meaningfully. Moving to Google would mean losing their experimental history or maintaining two separate systems.
The inference infrastructure was even more locked in. They used SageMaker Endpoints with custom containers, auto-scaling policies, and A/B testing configurations developed over two years. Their customers' systems integrated with these endpoints using AWS-specific authentication and API calls. Switching would require all customers to update their integrations.
The knockout blow came from their regulatory strategy. They were pursuing FDA approval in the US and CE marking in Europe for their medical device software. The regulatory submissions included detailed documentation of their AWS infrastructure. Changing providers would require updating all documentation and potentially restarting some validation processes, delaying regulatory approval by 12-18 months.
They stayed on AWS, using the Google Cloud offer as leverage to negotiate better GPU pricing, but remaining fundamentally locked into their original choice.
Case 3: The E-commerce Platform's Multi-Cloud Nightmare
A Madrid-based e-commerce platform decided to embrace a multi-cloud strategy to avoid lock-in. They would run their web application on AWS, their data analytics on Google Cloud, and their machine learning workloads on Azure. In theory, this would let them use each provider's strengths while maintaining negotiating leverage.
The reality was a disaster. Data synchronisation between clouds consumed enormous bandwidth, with egress charges (only waived for complete exit, not ongoing transfers) adding €40,000 monthly to their bill. The networking complexity required expensive direct connections between cloud providers, adding another €15,000 monthly.
Managing identity and access across three platforms became a security nightmare. Each provider had different IAM models, making it impossible to maintain consistent security policies. They needed three separate teams with platform-specific expertise, tripling their DevOps costs.
The promised best-of-breed approach failed to materialise. Instead of using each platform's strengths, they were limited to the lowest common denominator services that worked across all three. Advanced features from any single provider were off-limits because they would create lock-in.
After 18 months, they calculated that their multi-cloud strategy was costing 240 per cent more than running everything on a single provider would have. They abandoned the approach, consolidating back to AWS, having learned that multi-cloud was a luxury only large enterprises could afford.
The Innovation Paradox
One of the most unexpected consequences of the Data Act's cloud provisions has been their impact on innovation. Requirements designed to promote competition and innovation have, paradoxically, created incentives that slow technological progress and discourage the adoption of cutting-edge services.
The portability requirement has pushed cloud providers toward standardisation, but standardisation is the enemy of innovation. When providers must ensure their services can be easily replaced by competitors' offerings, they're incentivised to build generic, commodity services rather than differentiated, innovative solutions.
Consider serverless computing. AWS Lambda pioneered the function-as-a-service model with unique triggers, execution models, and integration patterns. Under pressure to ensure portability, AWS now faces a choice: continue innovating with Lambda-specific features that customers love but create lock-in, or limit Lambda to generic features that work similarly to Azure Functions and Google Cloud Functions.
The same dynamic plays out across the cloud stack. Managed databases, AI services, IoT platforms—all face pressure to converge on common features rather than differentiate. This commoditisation might reduce lock-in, but it also reduces the innovation that made cloud computing transformative in the first place.
For SMEs, this creates a cruel irony. The regulations meant to protect them from lock-in are depriving them of the innovative services that could give them competitive advantages. A startup that could previously leverage cutting-edge AWS services to compete with larger rivals now finds those services either unavailable or watered down to ensure portability.
The investment calculus for cloud providers has fundamentally changed. Why invest billions developing a revolutionary new service if regulations will require you to ensure competitors can easily replicate it? The return on innovation investment has decreased, leading providers to focus on operational efficiency rather than breakthrough capabilities.
This has particularly impacted AI services, where innovation happens at breakneck pace. Cloud providers are hesitant to release experimental AI capabilities that might create lock-in, even when those capabilities could provide enormous value to customers. The result is a more conservative approach to AI service development, with providers waiting for standards to emerge rather than pushing boundaries.
The open-source community, which might have benefited from increased demand for portable solutions, has struggled to keep pace. Projects like Kubernetes have shown that open-source can create portable platforms, but the complexity of modern cloud services exceeds what volunteer-driven projects can reasonably maintain. The result is a gap between what cloud providers offer and what portable alternatives provide.
The Path Forward
As we stand at this crossroads of regulation and reality, it's clear that the EU Data Act alone cannot solve the cloud lock-in problem. But this doesn't mean the situation is hopeless. A combination of regulatory evolution, technical innovation, and market dynamics could gradually improve cloud portability, though the path forward is more complex than regulators initially imagined.
First, regulations need to become more sophisticated. The Data Act's focus on egress fees and switching processes addresses symptoms rather than causes. Future regulations should tackle the root causes of lock-in: API incompatibility, proprietary service architectures, and the lack of meaningful standards. This might mean mandating open-source implementations of core services, requiring providers to support competitor APIs, or creating financial incentives for true interoperability.
The industry needs real standards, not just documentation. The current requirement for “structured, commonly used, and machine-readable formats” is too vague. Europe could lead by creating a Cloud Portability Standards Board with teeth—the power to certify services as truly portable and penalise those that aren't. These standards should cover not just data formats but API specifications, service behaviours, and operational patterns.
Technical innovation could provide solutions where regulation falls short. Container technologies and Kubernetes have shown that some level of portability is possible. The next generation of abstraction layers—perhaps powered by AI that can automatically translate between cloud providers—could make switching more feasible. Investment in these technologies should be encouraged through tax incentives and research grants.
For SMEs, the immediate solution isn't trying to maintain pure portability but building switching options into their architecture from the start. This means using cloud services through abstraction layers where possible, maintaining detailed documentation of dependencies, and regularly assessing the cost of switching as a risk metric. It's not about being cloud-agnostic but about being cloud-aware.
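In code, that awareness can be as modest as a thin in-house interface between the application and the provider SDK. A minimal sketch, with class and method names invented for this example:

```python
from typing import Protocol

class ObjectStore(Protocol):
    """What the application actually needs from object storage."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class S3Store:
    """AWS-backed implementation, isolated in one module."""
    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

# Application code depends only on the interface. Swapping in an Azure- or
# GCS-backed class later touches one module, not ten thousand call sites.
def archive_invoice(store: ObjectStore, invoice_id: int, payload: bytes) -> None:
    store.put(f"invoices/{invoice_id}.json", payload)
```

The discipline costs a little effort up front, but it keeps the switching decision an economic question rather than an architectural impossibility.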
The market itself may provide solutions. As cloud costs continue to rise and lock-in concerns grow, there's increasing demand for truly portable solutions. Companies that can credibly offer easy switching will gain competitive advantage. We're already seeing this with edge computing providers positioning themselves as the “Switzerland” of cloud—neutral territories where workloads can run without lock-in.
Education and support for SMEs need dramatic improvement. Most small companies don't understand cloud lock-in until it's too late. EU and national governments should fund cloud literacy programmes, provide free architectural reviews, and offer grants for companies wanting to improve their cloud portability. The Finnish government's cloud education programme, which has trained over 10,000 SME employees, provides a model worth replicating.
The procurement power of governments could drive change. If EU government contracts required true portability—with regular switching exercises to prove it—providers would have enormous incentives to improve. The public sector, spending billions on cloud services, could be the forcing function for real interoperability.
Financial innovations could address the economic barriers to switching. Cloud migration insurance, switching loans, and portability bonds could help SMEs manage the financial risk of changing providers. The European Investment Bank could offer preferential rates for companies improving their cloud portability, turning regulatory goals into financial incentives.
The role of AI in solving the portability problem shouldn't be underestimated. Large language models are already capable of translating between programming languages and could potentially translate between cloud platforms. AI-powered migration tools that can automatically convert AWS CloudFormation templates to Azure ARM templates, or redesign architectures for different platforms, could dramatically reduce switching costs.
Finally, expectations need to be reset. Perfect portability is neither achievable nor desirable. Some level of lock-in is the price of innovation and efficiency. The goal shouldn't be to eliminate lock-in entirely but to ensure it's proportionate, transparent, and not abused. Companies should be able to switch providers when the benefits outweigh the costs, not necessarily switch at zero cost.
The Long Game of Cloud Liberation
As the morning fog lifts over Brussels, nine months after the EU Data Act's cloud provisions took effect, the landscape looks remarkably similar to before. The hyperscalers still dominate. SMEs still struggle with lock-in. AI workloads remain firmly anchored to their original platforms. The revolution, it seems, has been postponed.
But revolutions rarely happen overnight. The Data Act represents not the end of the cloud lock-in story but the beginning of a longer journey toward a more competitive, innovative, and fair cloud market. The elimination of egress fees, while insufficient on its own, has established a principle: artificial barriers to switching are unacceptable. The requirements for documentation, standardisation, and support during switching, while imperfect, have started important conversations about interoperability.
The real impact may be generational. Today's startups, aware of lock-in risks from day one, are building with portability in mind. Tomorrow's cloud services, designed under regulatory scrutiny, will be more open by default. The technical innovations sparked by portability requirements—better abstraction layers, improved migration tools, emerging standards—will gradually make switching easier.
For Europe's SMEs, the lesson is clear: cloud lock-in isn't a problem that regulation alone can solve. It requires a combination of smart architectural choices, continuous assessment of switching costs, and realistic expectations about the tradeoffs between innovation and portability. The companies that thrive will be those that understand lock-in as a risk to be managed, not a fate to be accepted.
The hyperscalers, for their part, face a delicate balance. They must continue innovating to justify their premium prices while gradually opening their platforms to avoid further regulatory intervention. The smart money is on a gradual evolution toward “co-opetition”—competing fiercely on innovation while cooperating on standards and interoperability.
The European Union's bold experiment in regulating cloud portability may not have achieved its immediate goals, but it has fundamentally changed the conversation. Cloud lock-in has moved from an accepted reality to a recognised problem requiring solutions. The pressure for change is building, even if the timeline is longer than regulators hoped.
As we look toward 2027, when egress fees will be completely prohibited and the full force of the Data Act will be felt, the cloud landscape will undoubtedly be different. Not transformed overnight, but evolved through thousands of small changes—each migration made slightly easier, each lock-in mechanism slightly weakened, each SME slightly more empowered.
The great cloud escape may not be happening today, but the tunnel is being dug, one regulation, one innovation, one migration at a time. For Europe's SMEs trapped in Big Tech's gravitational pull, that's not the immediate liberation they hoped for, but it's progress nonetheless. And in the long game of technological sovereignty and market competition, progress—however incremental—is what matters.
The morning fog has lifted completely now, revealing not a transformed landscape but a battlefield where the terms of engagement have shifted. The war for cloud freedom is far from over, but for the first time, the defenders of lock-in are playing defence. That alone makes the EU Data Act, despite its limitations, a watershed moment in the history of cloud computing.
The question isn't whether SMEs will eventually escape Big Tech's gravitational pull—it's whether they'll still be in business when genuine portability finally arrives. For Europe's digital economy, racing against time while shackled to American infrastructure, that's the six-million-company question that will define the next decade of innovation, competition, and technological sovereignty.
In the end, the EU Data Act's cloud provisions may be remembered not for the immediate changes they brought, but for the future they made possible—a future where switching cloud providers is as simple as changing mobile operators, where innovation and lock-in are decoupled, and where SMEs can compete on merit rather than being held hostage by their infrastructure choices. That future isn't here yet, but for the first time, it's visible on the horizon.
And sometimes, in the long arc of technological change, visibility is victory enough.
References and Further Information
- European Commission. (2024). “Data Act Explained.” Digital Strategy. https://digital-strategy.ec.europa.eu/en/factpages/data-act-explained
- Latham & Watkins. (2025). “EU Data Act: Significant New Switching Requirements Due to Take Effect for Data Processing Services.” https://www.lw.com/insights
- UK Competition and Markets Authority. (2024). “Cloud Services Market Investigation.”
- AWS. (2024). “Free Data Transfer Out to Internet.” AWS News Blog.
- Microsoft Azure. (2024). “Azure Egress Waiver Programme Announcement.”
- Google Cloud. (2024). “Eliminating Data Transfer Fees for Customers Leaving Google Cloud.”
- Gartner. (2024). “Cloud Services Market Share Report Q4 2024.”
- European Cloud Initiative. (2024). “SME Cloud Adoption Report 2024.”
- IEEE. (2024). “Technical Barriers to Cloud Portability: A Systematic Review.”
- AI Infrastructure Alliance. (2024). “The State of AI Infrastructure at Scale.”
- Forrester Research. (2024). “The True Cost of Cloud Switching for European Enterprises.”
- McKinsey & Company. (2024). “Cloud Migration Opportunity: Business Value and Challenges.”
- IDC. (2024). “European Cloud Services Market Analysis.”
- Cloud Native Computing Foundation. (2024). “Multi-Cloud and Portability Survey 2024.”
- European Investment Bank. (2024). “Financing Digital Transformation in European SMEs.”
Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0000-0002-0156-9795
Email: tim@smarterarticles.co.uk