SmarterArticles


Every morning across corporate offices worldwide, a familiar digital routine unfolds. Company email, check. Slack, check. Salesforce, check. And then, in separate browser windows that never appear in screen-sharing sessions, ChatGPT Plus launches. Thousands of employees are paying the £20 monthly subscription themselves. Their managers don't know. IT certainly doesn't know. But productivity metrics tell a different story.

This pattern represents a quiet revolution happening across the modern workplace. It's not a coordinated rebellion, but rather millions of individual decisions made by workers who've discovered that artificial intelligence can dramatically amplify their output. The numbers are staggering: 75% of knowledge workers now use AI tools at work, with 77% of employees pasting data into generative AI platforms. And here's the uncomfortable truth keeping chief information security officers awake at night: 82% of that activity comes from unmanaged accounts.

Welcome to the era of Shadow AI, where the productivity revolution and the security nightmare occupy the same space.

The Productivity Paradox

The case for employee-driven AI adoption isn't theoretical. It's measurably transforming how work gets done. Workers are 33% more productive in each hour they use generative AI, according to research from the Federal Reserve. Support agents handle 13.8% more enquiries per hour. Business professionals produce 59% more documents per hour. Programmers complete 126% more coding projects weekly.

These aren't marginal improvements. They're the kind of productivity leaps that historically required fundamental technological shifts: the personal computer, the internet, mobile devices. Except this time, the technology isn't being distributed through carefully managed IT programmes. It's being adopted through consumer accounts, personal credit cards, and a tacit understanding amongst employees that it's easier to ask forgiveness than permission.

“The worst possible thing would be one of our employees taking customer data and putting it into an AI engine that we don't manage,” says Sam Evans, chief information security officer at Clearwater Analytics, the investment management software company overseeing £8.8 trillion in assets. His concern isn't hypothetical. In 2023, Samsung engineers accidentally leaked sensitive source code and internal meeting notes into ChatGPT whilst trying to fix bugs and summarise documents. Apple responded to similar concerns by restricting internal staff from using ChatGPT and GitHub Copilot in 2023, citing data exposure risks.

But here's where the paradox deepens. When Samsung discovered the breach, they didn't simply maintain the ban. After the initial lockdown, they began developing in-house AI tools, eventually creating their own generative AI model called Gauss and integrating AI into their products through partnerships with Google and NVIDIA. The message was clear: the problem wasn't AI itself, but uncontrolled AI.

The financial services sector demonstrates this tension acutely. Goldman Sachs, Wells Fargo, Deutsche Bank, JPMorgan Chase, and Bank of America have all implemented strict AI usage policies. Yet “implemented” doesn't mean “eliminated.” It means the usage has gone underground, beyond the visibility of IT monitoring tools that weren't designed to detect AI application programming interfaces. The productivity gains are too compelling for employees to ignore, even when policy explicitly prohibits usage.

The question facing organisations isn't whether AI will transform their workforce. That transformation is already happening, with or without official approval. The question is whether companies can create frameworks that capture the productivity gains whilst managing the risks, or whether the gap between corporate policy and employee reality will continue to widen.

The Security Calculus That Doesn't Add Up

The security concerns aren't hypothetical hand-wringing. They're backed by genuinely alarming statistics. Generative AI tools have become the leading channel for corporate-to-personal data exfiltration, responsible for 32% of all unauthorised data movement. And 27.4% of corporate data employees input into AI tools is classified as sensitive, up from 10.7% a year ago.

Break down that sensitive data, and the picture becomes even more concerning. Customer support interactions account for 16.3%, source code for 12.7%, research and development material for 10.8%, and unreleased marketing material for 6.6%. When Obsidian Security surveyed organisations, they found that over 50% have at least one shadow AI application running on their networks. These aren't edge cases. This is the new normal.

“When employees paste confidential meeting notes into an unvetted chatbot for summarisation, they may unintentionally hand over proprietary data to systems that could retain and reuse it, such as for training,” explains Anton Chuvakin, security adviser at Google Cloud's Office of the CISO. The risk isn't just about today's data breach. It's about permanently encoding your company's intellectual property into someone else's training data.

Yet here's what makes the security calculation so fiendishly difficult: the risks are probabilistic and diffuse, whilst the productivity gains are immediate and concrete. A marketing team that can generate campaign concepts 40% faster sees that value instantly. The risk that proprietary data might leak into an AI training set? That's a future threat with unclear probability and impact.

This temporal and perceptual asymmetry creates a perfect storm for shadow adoption. Employees see colleagues getting more done, faster. They see AI becoming fluent in tasks that used to consume hours. And they make the rational individual decision to start using these tools, even if it creates collective organisational risk. The benefit is personal and immediate. The risk is organisational and deferred.

“Management sees the productivity gains related to AI but doesn't necessarily see the associated risks,” one virtual CISO observed in a cybersecurity industry survey. This isn't a failure of leadership intelligence. It's a reflection of how difficult it is to quantify and communicate probabilistic risks that might materialise months or years after the initial exposure.

Consider the typical employee's perspective. If using ChatGPT to draft emails or summarise documents makes them 30% more efficient, that translates directly to better performance reviews, more completed projects, and reduced overtime. The chance that their specific usage causes a data breach? Statistically tiny. From their vantage point, the trade-off is obvious.

From the organisation's perspective, however, the mathematics shift dramatically. When 93% of employees input company data into unauthorised AI tools, with 32% sharing confidential client information and 37% exposing private internal data, the aggregate risk becomes substantial. It's not about one employee's usage. It's about thousands of daily interactions, any one of which could trigger regulatory violations, intellectual property theft, or competitive disadvantage.

This is the asymmetry that makes shadow AI so intractable. The people benefiting from the productivity gains aren't the same people bearing the security risks. And the timeline mismatch means decisions made today might not manifest consequences until quarters or years later, long after the employee who made the initial exposure has moved on.
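The gap between the individual and organisational views comes down to simple compounding. If each AI interaction carries a tiny leak probability p, the chance that at least one of N interactions leaks is 1 − (1 − p)^N. A back-of-the-envelope sketch makes this concrete (all probabilities and headcounts below are illustrative assumptions, not figures from the cited research):

```python
# Illustrative only: hypothetical probabilities, not figures from the article.
def prob_at_least_one_leak(p_per_interaction: float, interactions: int) -> float:
    """Chance that at least one of N independent interactions exposes data."""
    return 1 - (1 - p_per_interaction) ** interactions

# One employee's view: 20 AI interactions a day, 1-in-100,000 leak risk each.
daily_individual = prob_at_least_one_leak(1e-5, 20)

# The organisation's view: 5,000 employees making those same interactions.
daily_org = prob_at_least_one_leak(1e-5, 20 * 5000)

# Over a working year (~250 days) the organisational exposure compounds further.
annual_org = prob_at_least_one_leak(1e-5, 20 * 5000 * 250)

print(f"{daily_individual:.5f}")  # ≈ 0.0002 — negligible to the individual
print(f"{daily_org:.3f}")         # ≈ 0.632 — likely on any given day
print(f"{annual_org:.6f}")        # ≈ 1.0 — effectively certain over a year
```

The exact numbers are invented, but the shape of the result is not: a risk that is vanishingly small per person becomes a near-certainty at organisational scale, which is precisely why the individually rational choice and the collectively rational policy diverge.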

The Literacy Gap That Changes Everything

Whilst security teams and employees wage this quiet battle over AI tool adoption, a more fundamental shift is occurring. AI literacy has become a baseline professional skill in a way that closely mirrors how computer literacy evolved from specialised knowledge to universal expectation.

The numbers tell the story. Generative AI adoption in the workplace skyrocketed from 22% in 2023 to 75% in 2024. But here's the more revealing statistic: 74% of workers say a lack of training is holding them back from effectively using AI. Nearly half want more formal training and believe it's the best way to boost adoption. They're not asking permission to use AI. They're asking to be taught how to use it better.

This represents a profound reversal of the traditional IT adoption model. For decades, companies would evaluate technology, purchase it, deploy it, and then train employees to use it. The process flowed downward from decision-makers to end users. With AI, the flow has inverted. Employees are developing proficiency at home, using consumer tools like ChatGPT, Midjourney, and Claude. They're learning prompt engineering through YouTube tutorials and Reddit threads. They're sharing tactics in Slack channels and Discord servers.

By the time they arrive at work, they already possess skills that their employers haven't yet figured out how to leverage. Research from IEEE shows that AI literacy encompasses four dimensions: technology-related capabilities, work-related capabilities, human-machine-related capabilities, and learning-related capabilities. Employees aren't just learning to use AI tools. They're developing an entirely new mode of work that treats AI as a collaborative partner rather than a static application.

The hiring market has responded faster than corporate policy. More than half of surveyed recruiters say they wouldn't hire someone without AI literacy skills, with demand increasing more than sixfold in the past year. IBM's 2024 Global AI Adoption Index found that 40% of workers will need new job skills within three years due to AI-driven changes.

This creates an uncomfortable reality for organisations trying to enforce restrictive AI policies. You're not just fighting against productivity gains. You're fighting against professional skill development. When employees use shadow AI tools, they're not only getting their current work done faster. They're building the capabilities that will define their future employability.

“AI has added a whole new domain to the already extensive list of things that CISOs have to worry about today,” notes Matt Hillary, CISO of Drata, a security and compliance automation platform. But the domain isn't just technical. It's cultural. The question isn't whether your workforce will become AI-literate. It's whether they'll develop that literacy within your organisational framework or outside it.

When employees learn AI capabilities through consumer tools, they develop expectations about what those tools should do and how they should work. Enterprise AI offerings that are clunkier, slower, or less capable face an uphill battle for adoption. Employees have a reference point, and it's ChatGPT, not your internal AI pilot programme.

The Governance Models That Actually Work

The tempting response to shadow AI is prohibition. Lock it down. Block the domains. Monitor the traffic. Enforce compliance through technical controls and policy consequences. This is the instinct of organisations that have spent decades building security frameworks designed to create perimeters around approved technology.

The problem is that prohibition doesn't actually work. “If you ban AI, you will have more shadow AI and it will be harder to control,” warns Anton Chuvakin from Google Cloud. Employees who believe AI tools are essential to their productivity will find ways around the restrictions. They'll use personal devices, cellular connections, and consumer VPNs. The technology moves underground, beyond visibility and governance.

The organisations finding success are pursuing a fundamentally different approach: managed enablement. Instead of asking “how do we prevent AI usage,” they're asking “how do we provide secure AI capabilities that meet employee needs?”

Consider how Microsoft's Power Platform evolved at Centrica, the British multinational energy company. The platform grew from 300 applications in 2019 to over 800 business solutions, supporting nearly 330 makers and 15,000 users across the company. This wasn't uncontrolled sprawl. It was managed growth, with a centre of excellence maintaining governance whilst enabling innovation. The model provides a template: create secure channels for innovation rather than leaving employees to find their own.

Salesforce has taken a similar path with its enterprise AI offerings. After implementing structured AI adoption across its software development lifecycle, the company saw team delivery output surge by 19% in just three months. The key wasn't forcing developers to abandon AI tools. It was providing AI capabilities within a governed framework that addressed security and compliance requirements.

The success stories share common elements. First, they acknowledge that employee demand for AI tools is legitimate and productivity-driven. Second, they provide alternatives that are genuinely competitive with consumer tools in capability and user experience. Third, they invest in education and enablement rather than relying solely on policy and restriction.

Stavanger Kommune in Norway worked with consulting firm Bouvet to build its own Azure data platform with comprehensive governance covering Power BI, Power Apps, Power Automate, and Azure OpenAI. DBS Bank in Singapore collaborated with the Monetary Authority to develop AI governance frameworks that delivered SGD 750 million in economic value in 2024, with projections exceeding SGD 1 billion by 2025.

These aren't small pilot projects. They're enterprise-wide transformations that treat AI governance as a business enabler rather than a business constraint. The governance frameworks aren't designed to say “no.” They're designed to say “yes, and here's how we'll do it safely.”

Sam Evans from Clearwater Analytics summarises the mindset shift: “This isn't just about blocking, it's about enablement. Bring solutions, not just problems. When I came to the board, I didn't just highlight the risks. I proposed a solution that balanced security with productivity.”

The alternative is what security professionals call the “visibility gap.” Whilst 91% of employees say their organisations use at least one AI technology, only 23% of companies feel prepared to manage AI governance, and just 20% have established actual governance strategies. The remaining 77% are essentially improvising, creating policy on the fly as problems emerge rather than proactively designing frameworks.

This reactive posture virtually guarantees that shadow AI will flourish. Employees move faster than policy committees. By the time an organisation has debated, drafted, and distributed an AI usage policy, the workforce has already moved on to the next generation of tools.

What separates successful AI governance from theatrical policy-making is speed and relevance. If your approval process for new AI tools takes three months, employees will route around it. If your approved tools lag behind consumer offerings, employees will use both: the approved tool for compliance theatre and the shadow tool for actual work.

The Asymmetry Problem That Won't Resolve Itself

Even the most sophisticated governance frameworks can't eliminate the fundamental tension at the heart of shadow AI: the asymmetry between measurable productivity gains and probabilistic security risks.

When Unifonic, a customer engagement platform, adopted Microsoft 365 Copilot, they reduced audit time by 85%, cut costs by £250,000, and saved two hours per day on cybersecurity governance. Organisation-wide, Copilot reduced research, documentation, and summarisation time by up to 40%. These are concrete, immediate benefits that appear in quarterly metrics and individual performance reviews.

Contrast this with the risk profile. When data exposure occurs through shadow AI, what's the actual expected loss? The answer is maddeningly unclear. Some data exposures result in no consequence. Others trigger regulatory violations, intellectual property theft, or competitive disadvantage. The distribution is heavily skewed, with most incidents causing minimal harm and a small percentage causing catastrophic damage.

Brett Matthes, CISO for APAC at Coupang, the South Korean e-commerce giant, emphasises the stakes: “Any AI solution must be built on a bedrock of strong data security and privacy. Without this foundation, its intelligence is a vulnerability waiting to be exploited.” But convincing employees that this vulnerability justifies abandoning a tool that makes them 33% more productive requires a level of trust and organisational alignment that many companies simply don't possess.

The asymmetry extends beyond risk calculation to workload expectations. Research shows that 71% of full-time employees using AI report burnout, driven not by the technology itself but by increased workload expectations. The productivity gains from AI don't necessarily translate to reduced hours or stress. Instead, they often result in expanded scope and accelerated timelines. What looks like enhancement can feel like intensification.

This creates a perverse incentive structure. Employees adopt AI tools to remain competitive with peers who are already using them. Managers increase expectations based on the enhanced output they observe. The productivity gains get absorbed by expanding requirements rather than creating slack. And through it all, the security risks compound silently in the background.

Organisations find themselves caught in a ratchet effect. Once AI-enhanced productivity becomes the baseline, reverting becomes politically and practically difficult. You can't easily tell your workforce “we know you've been 30% more productive with AI, but now we need you to go back to the old way because of security concerns.” The productivity gains create their own momentum, independent of whether leadership endorses them.

The Professional Development Wild Card

The most disruptive aspect of shadow AI may not be the productivity impact or security risks. It's how AI literacy is becoming decoupled from organisational training and credentialing.

For most of professional history, career-critical skills were developed through formal channels: university degrees, professional certifications, corporate training programmes. You learned accounting through CPA certification. You learned project management through PMP courses. You learned software development through computer science degrees. The skills that mattered for your career came through validated, credentialed pathways.

AI literacy is developing through a completely different model. YouTube tutorials, ChatGPT experimentation, Reddit communities, Discord servers, and Twitter threads. The learning is social, iterative, and largely invisible to employers. When an employee becomes proficient at prompt engineering or learns to use AI for code generation, there's no certificate to display, no course completion to list on their CV, no formal recognition at all.

Yet these skills are becoming professionally decisive. Gallup found that 45% of employees say their productivity and efficiency have improved because of AI, with the same percentage of chief human resources officers reporting organisational efficiency improvements. The employees developing AI fluency are becoming more valuable whilst the organisations they work for struggle to assess what those capabilities mean.

This creates a fundamental question about workforce capability development. If employees are developing career-critical skills outside organisational frameworks, using tools that organisations haven't approved and may actively prohibit, who actually controls professional development?

The traditional answer would be “the organisation controls it through hiring, training, and promotion.” But that model assumes the organisation knows what skills matter and has mechanisms to develop them. With AI, neither assumption holds. The skills are evolving too rapidly for formal training programmes to keep pace. The tools are too numerous and specialised for IT departments to evaluate and approve. And the learning happens through experimentation and practice rather than formal instruction.

When IBM surveyed enterprises about AI adoption, they found that whilst 89% of business leaders are at least familiar with generative AI, only 68% of workers have reached this level. But that familiarity gap masks a deeper capability inversion. Leaders may understand AI conceptually, but many employees already possess practical fluency from consumer tool usage.

The hiring market has begun pricing this capability. Demand for AI literacy skills has increased more than sixfold in the past year, with more than half of recruiters saying they wouldn't hire candidates without these abilities. But where do candidates acquire these skills? Increasingly, not from their current employers.

This sets up a potential spiral. Organisations that prohibit or restrict AI tool usage may find their employees developing critical skills elsewhere, making those employees more attractive to competitors who embrace AI adoption. The restrictive policy becomes a retention risk. You're not just losing productivity to shadow AI. You're potentially losing talent to companies with more progressive AI policies.

When Policy Meets Reality

So what's the actual path forward? After analysing the research, examining case studies, and evaluating expert perspectives, a consensus framework is emerging. It's not about choosing between control and innovation. It's about building systems where control enables innovation.

First, accept that prohibition fails. The data is unambiguous. When organisations ban AI tools, usage doesn't drop to zero. It goes underground, beyond the visibility of monitoring systems. Chuvakin's warning bears repeating: “If you ban AI, you will have more shadow AI and it will be harder to control.” The goal isn't elimination. It's channelling.

Second, provide legitimate alternatives that actually compete with consumer tools. This is where many enterprise AI initiatives stumble. They roll out AI capabilities that are technically secure but practically unusable, with interfaces that require extensive training, workflows that add friction, and capabilities that lag behind consumer offerings. Employees compare the approved tool to ChatGPT and choose shadow AI.

The successful examples share a common trait. The tools are genuinely good. Microsoft's Copilot deployment at Noventiq saved 989 hours on routine tasks within four weeks. Unifonic's implementation reduced audit time by 85%. These tools make work easier, not harder. They integrate with existing workflows rather than requiring new ones.

Third, invest in education as much as enforcement. Nearly half of employees say they want more formal AI training. This isn't resistance to AI. It's recognition that most people are self-taught and unsure whether they're using these tools effectively. Organisations that provide structured AI literacy programmes aren't just reducing security risks. They're accelerating productivity gains by moving employees from tentative experimentation to confident deployment.

Fourth, build governance frameworks that scale. The NIST AI Risk Management Framework and ISO 42001 standards provide blueprints. But the key is making governance continuous rather than episodic. Data loss prevention tools that can detect sensitive data flowing to AI endpoints. Regular audits of AI tool usage. Clear policies about what data can and cannot be shared with AI systems. And mechanisms for rapidly evaluating and approving new tools as they emerge.
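The data loss prevention idea above can be sketched in a few lines: inspect outbound text for sensitive patterns before it reaches an AI endpoint, and block or redact on a match. The patterns and category names here are hypothetical placeholders for illustration; real DLP products rely on trained classifiers, exact-match fingerprinting, and document context rather than a handful of regexes.

```python
import re

# Hypothetical patterns for illustration; production DLP uses trained
# classifiers and data fingerprints, not just regular expressions.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A gateway sitting between employees and AI endpoints could block, redact,
# or log the prompt whenever any category is flagged.
hits = flag_sensitive(
    "Summarise this: contact jane@example.com, key sk-abc123def456ghi789jkl0"
)
print(hits)  # ['api_key', 'email']
```

The point of the sketch is architectural rather than the specific patterns: detection has to sit in the traffic path to AI endpoints, because the policy document alone never sees the prompt.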

NTT DATA's implementation of Salesforce's Agentforce demonstrates comprehensive governance. They built centralised management capabilities to ensure consistency and control across deployed agents, have completed 3,500+ successful Salesforce projects, and maintain 10,000+ certifications. The governance isn't a gate that slows deployment. It's a framework that enables confident scaling.

Fifth, acknowledge the asymmetry and make explicit trade-offs. Organisations need to move beyond “AI is risky” and “AI is productive” to specific statements like “for customer support data, we accept the productivity gains of AI-assisted response drafting despite quantified risks, but for source code, the risk is unacceptable regardless of productivity benefits.”

This requires quantifying both sides of the equation. What's the actual productivity gain from AI in different contexts? What's the actual risk exposure? What controls reduce that risk, and what do those controls cost in terms of usability? Few organisations have done this analysis rigorously. Most are operating on intuition and anecdote.
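The per-category trade-off described above reduces to comparing expected values. A minimal sketch of what that analysis might look like, with every figure below a made-up assumption rather than data from the article:

```python
# Illustrative expected-value comparison per data category.
# Every number here is an assumed input, not a figure from the article:
# (annual productivity value, annual leak probability, loss if leaked), in £.
CATEGORIES = {
    "support_responses": (400_000, 0.02, 1_000_000),
    "marketing_copy":    (150_000, 0.05, 200_000),
    "source_code":       (600_000, 0.01, 100_000_000),
}

def decide(value: float, p_leak: float, loss: float) -> str:
    """Allow AI use for a category only if the gain exceeds the expected loss."""
    expected_loss = p_leak * loss
    return "allow" if value > expected_loss else "restrict"

for name, (value, p_leak, loss) in CATEGORIES.items():
    print(name, decide(value, p_leak, loss))
# support_responses: £20k expected loss vs £400k gain  -> allow
# source_code:       £1m expected loss vs £600k gain   -> restrict
```

The inputs are the hard part, not the arithmetic: estimating leak probabilities and loss magnitudes per category is exactly the rigorous analysis the text notes few organisations have done. But even rough estimates force the explicit, category-level statements that vague "AI is risky" positions avoid.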

The Cultural Reckoning

Beneath all the technical and policy questions lies a more fundamental cultural shift. For decades, corporate IT operated on a model of centralised evaluation, procurement, and deployment. End users consumed technology that had been vetted, purchased, and configured by experts. This model worked when technology choices were discrete, expensive, and relatively stable.

AI tools are none of those things. They're continuous, cheap (often free), and evolving weekly. The old model can't keep pace. By the time an organisation completes a formal evaluation of a tool, three newer alternatives have emerged.

This isn't just a technology challenge. It's a trust challenge. Shadow AI flourishes when employees believe their organisations can't or won't provide the tools they need to be effective. It recedes when organisations demonstrate that they can move quickly, evaluate fairly, and enable innovation within secure boundaries.

Sam Evans articulates the required mindset: “Bring solutions, not just problems.” Security teams that only articulate risks without proposing paths forward train their organisations to route around them. Security teams that partner with business units to identify needs and deliver secure capabilities become enablers rather than obstacles.

The research is clear: organisations with advanced governance structures including real-time monitoring and oversight committees are 34% more likely to see improvements in revenue growth and 65% more likely to realise cost savings. Good governance doesn't slow down AI adoption. It accelerates it by building confidence that innovation won't create catastrophic risk.

But here's the uncomfortable truth: only 18% of companies have established formal AI governance structures that apply to the whole company. The other 82% are improvising, creating policy reactively as issues emerge. In that environment, shadow AI isn't just likely. It's inevitable.

The cultural shift required isn't about becoming more permissive or more restrictive. It's about becoming more responsive. The organisations that will thrive in the AI era are those that can evaluate new tools in weeks rather than quarters, that can update policies as capabilities evolve, and that can provide employees with secure alternatives before shadow usage becomes entrenched.

The Question That Remains

After examining the productivity data, the security risks, the governance models, and the cultural dynamics, we're left with the question organisations can't avoid: If AI literacy and tool adaptation are now baseline professional skills that employees develop independently, should policy resist this trend or accelerate it?

The data suggests that resistance is futile and acceleration is dangerous, but managed evolution is possible. The organisations achieving results—Samsung building Gauss after the ChatGPT breach, DBS Bank delivering SGD 750 million in value through governed AI adoption, Microsoft's customers seeing 40% time reductions—aren't choosing between control and innovation. They're building systems where control enables innovation.

This requires accepting several uncomfortable realities. First, that your employees are already using AI tools, regardless of policy. Second, that those tools genuinely do make them more productive. Third, that the productivity gains come with real security risks. Fourth, that prohibition doesn't eliminate the risks, it just makes them invisible. And fifth, that building better alternatives is harder than writing restrictive policies.

The asymmetry between productivity and risk won't resolve itself. The tools will keep getting better, the adoption will keep accelerating, and the potential consequences of data exposure will keep compounding. Waiting for clarity that won't arrive serves no one.

What will happen instead is that organisations will segment into two groups: those that treat employee AI adoption as a threat to be contained, and those that treat it as a capability to be harnessed. The first group will watch talent flow to the second. The second group will discover that competitive advantage increasingly comes from how effectively you can deploy AI across your workforce, not just in your products.

The employees using AI tools in separate browser windows aren't rebels or security threats. They're the leading edge of a transformation in how work gets done. The question isn't whether that transformation continues. It's whether it happens within organisational frameworks that manage the risks or outside those frameworks where the risks compound invisibly.

There's no perfect answer. But there is a choice. And every day that organisations defer that choice, their employees are making it for them. The invisible workforce is already here, operating in browser tabs that never appear in screen shares, using tools that never show up in IT asset inventories, developing skills that never make it onto corporate training rosters.

The only question is whether organisations will acknowledge this reality and build governance around it, or whether they'll continue pretending that policy documents can stop a transformation that's already well underway. Shadow AI isn't coming. It's arrived. What happens next depends on whether companies treat it as a problem to eliminate or a force to channel.


Sources and References

  1. IBM. (2024). “What Is Shadow AI?” IBM Think Topics. https://www.ibm.com/think/topics/shadow-ai

  2. ISACA. (2025). “The Rise of Shadow AI: Auditing Unauthorized AI Tools in the Enterprise.” Industry News 2025. https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-rise-of-shadow-ai-auditing-unauthorized-ai-tools-in-the-enterprise

  3. Infosecurity Magazine. (2024). “One In Four Employees Use Unapproved AI Tools, Research Finds.” https://www.infosecurity-magazine.com/news/shadow-ai-employees-use-unapproved

  4. Varonis. (2024). “Hidden Risks of Shadow AI.” https://www.varonis.com/blog/shadow-ai

  5. TechTarget. (2025). “Shadow AI: How CISOs can regain control in 2025 and beyond.” https://www.techtarget.com/searchsecurity/tip/Shadow-AI-How-CISOs-can-regain-control-in-2026

  6. St. Louis Federal Reserve. (2025). “The Impact of Generative AI on Work Productivity.” On the Economy, February 2025. https://www.stlouisfed.org/on-the-economy/2025/feb/impact-generative-ai-work-productivity

  7. Federal Reserve. (2024). “Measuring AI Uptake in the Workplace.” FEDS Notes, February 5, 2024. https://www.federalreserve.gov/econres/notes/feds-notes/measuring-ai-uptake-in-the-workplace-20240205.html

  8. Nielsen Norman Group. (2024). “AI Improves Employee Productivity by 66%.” https://www.nngroup.com/articles/ai-tools-productivity-gains/

  9. IBM. (2024). “IBM 2024 Global AI Adoption Index.” IBM Newsroom, October 28, 2024. https://newsroom.ibm.com/2025-10-28-Two-thirds-of-surveyed-enterprises-in-EMEA-report-significant-productivity-gains-from-AI,-finds-new-IBM-study

  10. McKinsey & Company. (2024). “The state of AI: How organizations are rewiring to capture value.” QuantumBlack Insights. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

  11. Gallup. (2024). “AI Use at Work Has Nearly Doubled in Two Years.” Workplace Analytics. https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx

  12. Salesforce. (2024). “How AI Literacy Builds a Future-Ready Workforce — and What Agentforce Taught Us.” Salesforce Blog. https://www.salesforce.com/blog/ai-literacy-builds-future-ready-workforce/

  13. Salesforce Engineering. (2024). “Building Sustainable Enterprise AI Adoption.” https://engineering.salesforce.com/building-sustainable-enterprise-ai-adoption-cultural-strategies-that-achieved-95-developer-engagement/

  14. World Economic Forum. (2025). “AI is shifting the workplace skillset. But human skills still count.” January 2025. https://www.weforum.org/stories/2025/01/ai-workplace-skills/

  15. IEEE Xplore. (2022). “Explicating AI Literacy of Employees at Digital Workplaces.” https://ieeexplore.ieee.org/document/9681321/

  16. Google Cloud Blog. (2024). “Cloud CISO Perspectives: APAC security leaders speak out on AI.” https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-apac-security-leaders-speak-out-on-ai

  17. VentureBeat. (2024). “CISO dodges bullet protecting $8.8 trillion from shadow AI.” https://venturebeat.com/security/ciso-dodges-bullet-protecting-8-8-trillion-from-shadow-ai

  18. Obsidian Security. (2024). “Why Shadow AI and Unauthorized GenAI Tools Are a Growing Security Risk.” https://www.obsidiansecurity.com/blog/why-are-unauthorized-genai-apps-risky

  19. Cyberhaven. (2024). “Managing shadow AI: best practices for enterprise security.” https://www.cyberhaven.com/blog/managing-shadow-ai-best-practices-for-enterprise-security

  20. The Hacker News. (2025). “New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise.” October 2025. https://thehackernews.com/2025/10/new-research-ai-is-already-1-data.html

  21. Kiteworks. (2024). “93% of Employees Share Confidential Data With Unauthorized AI Tools.” https://www.kiteworks.com/cybersecurity-risk-management/employees-sharing-confidential-data-unauthorized-ai-tools/

  22. Microsoft. (2024). “Building a foundation for AI success: Governance.” Microsoft Cloud Blog, March 28, 2024. https://www.microsoft.com/en-us/microsoft-cloud/blog/2024/03/28/building-a-foundation-for-ai-success-governance/

  23. Microsoft. (2025). “AI-powered success—with more than 1,000 stories of customer transformation and innovation.” Microsoft Cloud Blog, July 24, 2025. https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/07/24/ai-powered-success-with-1000-stories-of-customer-transformation-and-innovation/

  24. Deloitte. (2024). “State of Generative AI in the Enterprise 2024.” https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-generative-ai-in-enterprise.html

  25. NIST. (2024). “AI Risk Management Framework (AI RMF).” National Institute of Standards and Technology.

  26. InfoWorld. (2024). “Boring governance is the path to real AI adoption.” https://www.infoworld.com/article/4082782/boring-governance-is-the-path-to-real-ai-adoption.html


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795 Email: tim@smarterarticles.co.uk


#HumanInTheLoop #ShadowAI #AIAsWorkforce #AIGovernance

In a glass-walled conference room overlooking San Francisco's Mission Bay, Bret Taylor sits at the epicentre of what might be the most consequential corporate restructuring in technology history. As OpenAI's board chairman, the former Salesforce co-CEO finds himself orchestrating a delicate ballet between idealism and capitalism, between the organisation's founding mission to benefit humanity and its insatiable hunger for the billions needed to build artificial general intelligence. The numbers are staggering: a $500 billion valuation, a $100 billion stake for the nonprofit parent, and a dramatic reduction in partner revenue-sharing from 20% to a projected 8% by decade's end. But behind these figures lies a more fundamental question that will shape the trajectory of artificial intelligence development for years to come: Who really controls the future of AI?

As autumn 2025 unfolds, OpenAI's restructuring has become a litmus test for how humanity will govern its most powerful technologies. The company that unleashed ChatGPT upon the world is transforming itself from a peculiar nonprofit-controlled entity into something unprecedented—a public benefit corporation still governed by its nonprofit parent, armed with one of the largest philanthropic war chests in history. It's a structure that attempts to thread an impossible needle: maintaining ethical governance whilst competing in an arms race that demands hundreds of billions in capital.

The stakes couldn't be higher. As AI systems approach human-level capabilities across multiple domains, the decisions made in OpenAI's boardroom ripple outward, affecting everything from who gets access to frontier models to how much businesses pay for AI services, from safety standards that could prevent catastrophic risks to the concentration of power in Silicon Valley's already formidable tech giants.

The Evolution of a Paradox

OpenAI's journey from nonprofit research lab to AI powerhouse reads like a Silicon Valley fever dream. Founded in 2015 with a billion-dollar pledge and promises to democratise artificial intelligence, the organisation quickly discovered that its noble intentions collided head-on with economic reality. Training state-of-the-art AI models doesn't just require brilliant minds—it demands computational resources that would make even tech giants blanch.

The creation of OpenAI's “capped-profit” subsidiary in 2019 was the first compromise, a Frankenstein structure that attempted to marry nonprofit governance with for-profit incentives. Investors could earn returns, but those returns were capped at 100 times their investment—a limit that seemed generous until the AI boom made it look quaint. Microsoft's initial investment that year, followed by billions more, fundamentally altered the organisation's trajectory.

By 2024, the capped-profit model had become a straitjacket. Sam Altman, OpenAI's CEO, told employees in September of that year that the company had “effectively outgrown” its convoluted structure. The nonprofit board maintained ultimate control, but the for-profit subsidiary needed to raise hundreds of billions—eventually trillions, according to Altman—to achieve its ambitious goals. Something had to give.

The initial restructuring plan, floated in late 2024 and early 2025, would have severed the nonprofit's control entirely, transforming OpenAI into a traditional for-profit entity with the nonprofit receiving a minority stake. This proposal triggered a firestorm of criticism. Elon Musk, OpenAI's co-founder turned bitter rival, filed multiple lawsuits claiming the company had betrayed its founding mission. Meta petitioned California's attorney general to block the move. Former employees raised alarms about the concentration of power and potential abandonment of safety commitments.

Then came the reversal. In May 2025, after what Altman described as “hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware,” OpenAI announced a dramatically different plan. The nonprofit would retain control, but the for-profit arm would transform into a public benefit corporation—a structure that legally requires balancing shareholder returns with public benefit.

The Anatomy of the Deal

The restructuring announced in September 2025 represents a masterclass in financial engineering and political compromise. At its core, the deal attempts to solve OpenAI's fundamental paradox: how to raise massive capital whilst maintaining mission-driven governance.

The headline figure—a $100 billion equity stake for the nonprofit parent—is deliberately eye-catching. At OpenAI's current $500 billion valuation, this represents approximately 20% ownership, making the nonprofit “one of the most well-resourced philanthropic organisations in the world,” according to the company. But this figure is described as a “floor that could increase,” suggesting the nonprofit's stake might grow as the company's valuation rises.

The public benefit corporation structure, already adopted by rival Anthropic, creates a legal framework that explicitly acknowledges dual objectives. Unlike traditional corporations that must maximise shareholder value, PBCs can—and must—consider broader stakeholder interests. For OpenAI, this means decisions about model deployment, safety measures, and access can legally prioritise social benefit over profit maximisation.

The governance structure adds another layer of complexity. The nonprofit board will continue as “the overall governing body for all OpenAI activities,” according to company statements. The PBC will have its own board, but crucially, the nonprofit will appoint those directors. Initially, both boards will have identical membership, though this could diverge over time.

Perhaps most intriguingly, the deal includes a renegotiation of OpenAI's relationship with Microsoft, its largest investor and cloud computing partner. The companies signed a “non-binding memorandum of understanding” that fundamentally alters their arrangement. Microsoft's exclusive access to OpenAI's models shifts to a “right of first refusal” model, and the revenue-sharing agreement sees a dramatic reduction—from the current 20% to a projected 8% by 2030.

This reduction in Microsoft's take represents tens of billions in additional revenue that OpenAI will retain. For Microsoft, which has invested over $13 billion in the company, it's a significant concession. But it also reflects a shifting power dynamic: OpenAI no longer needs Microsoft as desperately as it once did, and Microsoft has begun hedging its bets with investments in other AI companies.
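The revenue-share arithmetic above can be made concrete with a back-of-the-envelope sketch. The share percentages (20% today, a projected 8% by 2030) come from the reporting; the annual revenue figure below is a purely hypothetical assumption for illustration, not a disclosed number.

```python
def retained_revenue(annual_revenue: float, partner_share: float) -> float:
    """Revenue OpenAI keeps after paying out the partner's share."""
    return annual_revenue * (1.0 - partner_share)

# Hypothetical annual revenue, chosen only to show the scale of the shift.
hypothetical_revenue = 100e9  # assumed $100bn/year

old_terms = retained_revenue(hypothetical_revenue, 0.20)  # current 20% share
new_terms = retained_revenue(hypothetical_revenue, 0.08)  # projected 8% share

print(f"Retained under 20% share: ${old_terms / 1e9:.0f}bn")
print(f"Retained under 8% share:  ${new_terms / 1e9:.0f}bn")
print(f"Annual difference:        ${(new_terms - old_terms) / 1e9:.0f}bn")
```

Even at this assumed revenue level, the gap compounds to tens of billions over the second half of the decade, which is the order of magnitude the reporting describes.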

The Power Shuffle

Understanding who gains and loses influence in this restructuring requires mapping a complex web of stakeholders, each with distinct interests and leverage points.

The Nonprofit Board: Philosophical Guardians

The nonprofit board emerges with remarkable staying power. Despite months of speculation that they would be sidelined, board members retain ultimate control over OpenAI's direction. With a $100 billion stake providing financial independence, the nonprofit can pursue its mission without being beholden to donors or commercial pressures.

Yet questions remain about the board's composition and decision-making processes. The current board includes Bret Taylor as chair, Sam Altman as CEO, and a mix of technologists, academics, and business leaders. Critics argue that this group lacks sufficient AI safety expertise and diverse perspectives. The board's track record—including the chaotic November 2023 attempt to fire Altman that nearly destroyed the company—raises concerns about its ability to navigate complex governance challenges.

Sam Altman: The Architect

Altman's position appears strengthened by the restructuring. He successfully navigated pressure from multiple directions—investors demanding returns, employees seeking liquidity, regulators scrutinising the nonprofit structure, and critics alleging mission drift. The PBC structure gives him more flexibility to raise capital whilst maintaining the “not normal company” ethos he champions.

But Altman's power isn't absolute. The nonprofit board's continued oversight means he must balance commercial ambitions with mission alignment. The presence of state attorneys general as active overseers adds another check on executive authority. “We're building something that's never been built before,” Altman told employees during the restructuring announcement, “and that requires a structure that's never existed before.”

Microsoft: The Pragmatic Partner

Microsoft's position is perhaps the most nuanced. On paper, the company loses significant revenue-sharing rights and exclusive access to OpenAI's technology. The reduction from 20% to 8% revenue sharing alone could cost Microsoft tens of billions over the coming years.

Yet Microsoft has been preparing for this shift. The company announced an $80 billion AI infrastructure investment for 2025, building computing clusters six to ten times larger than those used for its initial models. It's developing relationships with alternative AI providers, including xAI, Mistral, and Meta's Llama. Microsoft's approval of OpenAI's restructuring, despite the reduced benefits, suggests a calculated decision to maintain influence whilst diversifying its AI portfolio.

Employees: The Beneficiaries

OpenAI's employees stand to benefit significantly from the restructuring. The shift to a PBC structure makes employee equity more valuable and liquid than under the capped-profit model. Reports suggest employees will be able to sell shares at the $500 billion valuation, creating substantial wealth for early team members.

This financial incentive helps OpenAI compete for talent against deep-pocketed rivals. With Meta offering individual researchers compensation packages worth over $1.5 billion and Google, Microsoft, and others engaged in fierce bidding wars, the ability to offer meaningful equity has become crucial.

Competitors: The Watchers

The restructuring sends ripples through the AI industry. Anthropic, already structured as a PBC with its Long-Term Benefit Trust, sees validation of its governance model. The company's CEO, Dario Amodei, has publicly advocated for federal AI regulation whilst warning against overly blunt regulatory instruments.

Meta, despite initial opposition to OpenAI's restructuring, has accelerated its own AI investments. The company reorganised its AI teams in May 2025, creating a “superintelligence team” and aggressively recruiting former OpenAI employees. Meta's open-source Llama models represent a fundamentally different approach to AI development, challenging OpenAI's more closed model.

Google, with its Gemini family of models, continues advancing its AI capabilities whilst maintaining a lower public profile. The search giant's vast resources and computing infrastructure give it staying power in the AI race, regardless of OpenAI's corporate structure.

xAI, Elon Musk's entry into the generative AI space, has positioned itself as the anti-OpenAI, promising more open development and fewer safety restrictions. Musk's lawsuits against OpenAI, whilst unsuccessful in blocking the restructuring, have kept pressure on the company to justify its governance choices.

Safety at the Crossroads

The restructuring's impact on AI safety governance represents perhaps its most consequential dimension. As AI systems grow more powerful, decisions about deployment, access, and safety measures could literally shape humanity's future. This isn't hyperbole—it's the stark reality facing anyone tasked with governing technologies that might soon match or exceed human intelligence across multiple domains.

OpenAI's track record on safety tells a complex story. The company pioneered important safety research, including work on alignment, interpretability, and robustness. Its deployment of GPT models included extensive safety testing and gradual rollouts. Yet critics point to a pattern of safety teams being dissolved or departing, with key researchers leaving for competitors or starting their own ventures. The departure of Jan Leike, who co-led the company's superalignment team, sent shockwaves through the safety community when he warned that “safety culture and processes have taken a backseat to shiny products.”

The PBC structure theoretically strengthens safety governance by enshrining public benefit as a legal obligation. Board members have fiduciary duties to consider safety alongside profits. The nonprofit's continued control means safety concerns can't be overridden by pure commercial pressures. But structural safeguards don't guarantee outcomes—they merely create frameworks within which human judgment operates.

The Summer 2025 AI Safety Index revealed that only three of seven major AI companies—OpenAI, Anthropic, and Google DeepMind—conduct substantive testing for dangerous capabilities. The report noted that “capabilities are accelerating faster than risk-management practices” with a “widening gap between firms.” This acceleration creates a paradox: the companies best positioned to develop transformative AI are also those facing the greatest competitive pressure to deploy it quickly.

California's proposed AI safety bill, SB 53, would require frontier model developers to create safety frameworks and release public safety reports before deployment. Anthropic has endorsed the legislation, whilst OpenAI's position remains more ambiguous. The bill would establish whistleblower protections and mandatory safety standards—external constraints that might prove more effective than internal governance structures.

The industry's Frontier Model Forum, established by Google, Microsoft, OpenAI, and Anthropic, represents a collaborative approach to safety. Yet voluntary initiatives have limitations that become apparent when competitive pressures mount. As Dario Amodei noted, industry standards “are not intended as a substitute for regulation, but rather a prototype for it.”

International coordination adds another layer of complexity. The UK's AI Safety Summit, the EU's AI Act, and China's AI regulations create a patchwork of requirements that global AI companies must navigate. OpenAI's governance structure must accommodate these diverse regulatory regimes whilst maintaining competitive advantages. The challenge isn't just technical—it's diplomatic, requiring the company to satisfy regulators with fundamentally different values and priorities.

The Price of Intelligence

How OpenAI's restructuring affects AI pricing and access could determine whether artificial intelligence becomes a democratising force or another driver of inequality. The mathematics of AI deployment create natural tensions between broad access and sustainable economics, tensions that the restructuring both addresses and complicates.

Currently, OpenAI's API pricing follows a tiered model that reflects the underlying computational costs. GPT-4 costs approximately $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens at list prices—rates that make extensive use expensive for smaller organisations. GPT-3.5 Turbo, roughly 30 times cheaper, offers a more accessible alternative but with reduced capabilities. This pricing structure creates a two-tier system where advanced capabilities remain expensive whilst basic AI assistance becomes commoditised.
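To see how quickly these per-token rates add up, here is a minimal cost sketch using the list prices quoted above (roughly $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens for GPT-4). The request sizes and call volumes are hypothetical illustrations, not benchmarks.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in dollars for one API call, given per-1,000-token list rates."""
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# One hypothetical call: 2,000 tokens of prompt, 1,000 tokens of completion.
gpt4_call = request_cost(2000, 1000, in_rate=0.03, out_rate=0.06)

# GPT-3.5 Turbo is described as roughly 30 times cheaper overall.
gpt35_call = gpt4_call / 30

print(f"One GPT-4 call:          ${gpt4_call:.4f}")
print(f"~30x cheaper tier:       ${gpt35_call:.4f}")
print(f"1m such calls per month: ${gpt4_call * 1_000_000:,.0f}")
```

At these rates a single call costs pennies, but a product making a million such calls a month faces a six-figure bill, which is why the pricing tiers matter so much to smaller organisations.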

The restructuring's financial implications suggest potential pricing changes. With Microsoft's revenue share declining from 20% to 8%, OpenAI retains more revenue to reinvest in infrastructure and research. This could enable lower prices through economies of scale, as the company captures more value from each transaction. Alternatively, reduced pressure from Microsoft might allow OpenAI to maintain higher margins, using the additional revenue to fund safety research and nonprofit activities.

Enterprise customers currently secure 15-30% discounts for large-volume commitments, creating another tier in the access hierarchy. The restructuring is unlikely to change these dynamics immediately, but the PBC structure's public benefit mandate could pressure OpenAI to expand access programmes. The company already operates OpenAI for Nonprofits, offering 20% discounts on ChatGPT Business subscriptions, with larger nonprofits eligible for 25% off enterprise plans. These programmes might expand under the PBC structure, particularly given the nonprofit parent's philanthropic mission.
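The discount tiers described above are simple percentage reductions, sketched below. The discount rates (15-30% enterprise, 20% and 25% nonprofit) come from the article; the per-seat list price is a hypothetical placeholder for illustration only.

```python
def discounted(list_price: float, discount: float) -> float:
    """Price after applying a fractional discount to the list price."""
    return list_price * (1.0 - discount)

base = 25.0  # assumed per-seat monthly list price, not a quoted figure

print(f"Nonprofit (20% off):       ${discounted(base, 0.20):.2f}")
print(f"Large nonprofit (25% off): ${discounted(base, 0.25):.2f}")
print(f"Enterprise (15-30% off):   "
      f"${discounted(base, 0.30):.2f} to ${discounted(base, 0.15):.2f}")
```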

Competition provides the strongest force for pricing discipline. Google's Gemini, Anthropic's Claude, Meta's Llama, and emerging models from Chinese companies create alternatives that prevent any single provider from extracting monopoly rents. Meta's open-source approach, allowing free use and modification of Llama models, puts particular pressure on closed-model pricing. Yet the computational requirements for frontier models create natural barriers to competition, limiting how far prices can fall.

The democratisation question extends beyond pricing to capability access. OpenAI's most powerful models remain restricted, with full capabilities available only to select partners and researchers. The company's staged deployment approach—releasing capabilities gradually to monitor for misuse—creates additional access barriers. The PBC structure doesn't inherently change these access restrictions, but the nonprofit board's oversight could push for broader availability.

Geographic disparities persist across multiple dimensions. Advanced AI capabilities concentrate in the United States, Europe, and China, whilst developing nations struggle to access even basic AI tools. Language barriers compound these inequalities, as most frontier models perform best in English and other widely spoken languages. OpenAI's restructuring doesn't directly address these global inequalities, though the nonprofit's enhanced resources could fund expanded access programmes.

Consider the situation in Kenya, where mobile money innovations like M-Pesa demonstrated how technology could leapfrog traditional infrastructure. AI could similarly transform education, healthcare, and agriculture—but only if accessible. Current pricing models make advanced AI prohibitively expensive for most Kenyan organisations. A teacher in Nairobi earning $200 monthly cannot afford GPT-4 access for lesson planning, whilst her counterpart in San Francisco uses AI tutoring systems worth thousands of dollars.

In Brazil, where Portuguese-language AI capabilities lag behind English models, the digital divide takes on linguistic dimensions. Small businesses in São Paulo struggle to implement AI customer service because models trained primarily on English data perform poorly in Portuguese. The restructuring's emphasis on public benefit could drive investment in multilingual capabilities, but market incentives favour languages with larger commercial markets.

India presents a different challenge. With a large English-speaking population and growing tech sector, the country has better access to current AI capabilities. Yet rural areas remain underserved, and local languages receive limited AI support. The nonprofit's resources could fund initiatives to develop AI capabilities for Hindi, Tamil, and other Indian languages, but such investments require long-term commitment beyond immediate commercial returns.

Industry Reverberations

The AI industry's response to OpenAI's restructuring reveals deeper tensions about the future of AI development and governance. Each major player faces strategic choices about how to position themselves in a landscape where the rules are being rewritten in real-time.

Microsoft's strategic pivot is particularly telling. Beyond its $80 billion infrastructure investment, the company is systematically reducing its dependence on OpenAI. Partnerships with xAI, Mistral, and consideration of Meta's Llama models create a diversified AI portfolio. Microsoft's approval of OpenAI's restructuring, despite reduced benefits, suggests confidence in its ability to compete independently. The company's CEO, Satya Nadella, framed the evolution as natural: “Partnerships evolve as companies mature. What matters is that we continue advancing AI capabilities together.”

Meta's aggressive moves reflect Mark Zuckerberg's determination to avoid dependence on external AI providers. The May 2025 reorganisation creating a “superintelligence team” and aggressive recruiting from OpenAI signal serious commitment. Meta's open-source strategy with Llama represents a fundamental challenge to OpenAI's closed-model approach, potentially commoditising capabilities that OpenAI monetises. Zuckerberg has argued that “open source AI will be safer and more beneficial than closed systems,” directly challenging OpenAI's safety-through-control approach.

Google's measured response masks significant internal developments. The Gemini family's improvements in reasoning and code understanding narrow the gap with GPT models. Google's vast infrastructure and integration with search, advertising, and cloud services provide unique advantages. The company's lower public profile might reflect confidence rather than complacency. Internal sources suggest Google views the AI race as a marathon rather than a sprint, focusing on sustainable competitive advantages rather than headline-grabbing announcements.

Anthropic's position as the “other” PBC in AI becomes more interesting post-restructuring. With both major AI labs adopting similar governance structures, the PBC model gains legitimacy. Anthropic's explicit focus on safety and its Long-Term Benefit Trust structure offer an alternative approach within the same legal framework. Dario Amodei has positioned Anthropic as the safety-first alternative, arguing that “responsible scaling requires putting safety research ahead of capability development.”

Chinese AI companies, including Baidu, Alibaba, and ByteDance, observe from a different regulatory environment. Their development proceeds under state oversight with different priorities around safety, access, and international competition. The emergence of DeepSeek-R1 in early 2025 demonstrated that Chinese AI capabilities had reached frontier levels, challenging assumptions about Western technological leadership. OpenAI's restructuring might influence Chinese policy discussions about optimal AI governance structures, particularly as Beijing considers how to balance innovation with control.

Startups face a transformed landscape. The capital requirements for frontier model development—hundreds of billions according to industry estimates—create insurmountable barriers for new entrants. Yet specialisation opportunities proliferate. Companies focusing on specific verticals, fine-tuning existing models, or developing complementary technologies find niches within the AI ecosystem. The restructuring's emphasis on public benefit could create opportunities for startups addressing underserved markets or social challenges.

The talent war intensifies with each passing month. With OpenAI offering liquidity at a $500 billion valuation, Meta making billion-dollar offers to individual researchers, and other companies competing aggressively, AI expertise commands unprecedented premiums. This concentration of talent in a few well-funded organisations could accelerate capability development whilst limiting diverse approaches. The restructuring's employee liquidity provisions help OpenAI retain talent, but also create incentives for employees to cash out and start competing ventures.

Future Scenarios

Three plausible scenarios emerge from OpenAI's restructuring, each with distinct implications for AI governance and development. These aren't predictions but rather explorations of how current trends might unfold under different conditions.

Scenario 1: The Balanced Evolution

In this optimistic scenario, the PBC structure successfully balances commercial and social objectives. The nonprofit board, armed with its $100 billion stake, funds extensive safety research and access programmes. Competition from Anthropic, Google, Meta, and others keeps prices reasonable and innovation rapid. Government regulation, informed by industry standards, creates guardrails without stifling development.

OpenAI's models become infrastructure for thousands of applications, with tiered pricing ensuring broad access. Safety incidents remain minor, building public trust. The nonprofit's resources fund AI education and deployment in developing nations. By 2030, AI augments human capabilities across industries without displacing workers en masse or creating existential risks.

This scenario requires multiple factors aligning: effective nonprofit governance, successful safety research, thoughtful regulation, and continued competition. Historical precedents for such balanced outcomes in transformative technologies are rare but not impossible. The internet's development, whilst imperfect, demonstrated how distributed governance and competition could produce broadly beneficial outcomes.

Scenario 2: The Concentration Crisis

A darker scenario sees the restructuring accelerating AI power concentration. Despite the PBC structure, commercial pressures dominate decision-making. The nonprofit board, lacking technical expertise and facing complex trade-offs, defers to management on critical decisions. Safety measures lag capability development, leading to serious incidents that trigger public backlash and heavy-handed regulation.

Microsoft, Google, and Meta match OpenAI's capabilities, but the oligopoly coordinates implicitly on pricing and access restrictions. Smaller companies can't compete with the capital requirements. AI becomes another driver of inequality, with powerful capabilities restricted to large corporations and wealthy individuals. Developing nations fall further behind, creating a global AI divide that mirrors and amplifies existing inequalities.

Government attempts at regulation prove ineffective against well-funded lobbying and regulatory capture. International coordination fails as nations prioritise competitive advantage over safety. By 2030, a handful of companies control humanity's most powerful technologies with minimal accountability.

This scenario reflects patterns seen in other concentrated industries—telecommunications, social media, cloud computing—where initial promises of democratisation gave way to oligopolistic control. The difference with AI is the stakes: concentrated control over artificial intelligence could reshape power relationships across all sectors of society.

Scenario 3: The Fragmentation Path

A third scenario involves the AI ecosystem fragmenting into distinct segments. OpenAI's restructuring succeeds internally but catalyses divergent approaches elsewhere. Meta doubles down on open-source, commoditising many AI capabilities. Chinese companies develop parallel ecosystems with different values and constraints. Specialised providers emerge for specific industries and use cases.

Regulation varies dramatically by jurisdiction. The EU implements strict safety requirements that slow deployment but ensure accountability. The US maintains lighter touch regulation prioritising innovation. China integrates AI development with state objectives. This regulatory patchwork creates complexity but also optionality.

The nonprofit's resources fund alternative AI development paths, including more interpretable systems, neuromorphic computing, and hybrid human-AI systems. No single organisation dominates, but coordination challenges multiply. Progress slows compared to concentrated development but proceeds more sustainably.

This scenario might best reflect technology industry history, where periods of concentration alternate with fragmentation driven by innovation, regulation, and changing consumer preferences. The personal computer industry's evolution from IBM dominance to diverse ecosystems provides a potential model, though AI's unique characteristics might prevent such fragmentation.

The Governance Experiment

OpenAI's restructuring represents more than corporate manoeuvring—it's an experiment in governing transformative technology. The hybrid structure, combining nonprofit oversight with public benefit obligations and commercial incentives, has no perfect precedent. This makes it both promising and risky, a test case for how humanity might govern its most powerful tools.

Traditional corporate governance assumes alignment between shareholder interests and social benefit through market mechanisms. Adam Smith's “invisible hand” supposedly guides private vice toward public virtue. This assumption breaks down for technologies with existential implications. Nuclear technology, genetic engineering, and now artificial intelligence require governance structures that explicitly balance multiple objectives.

The PBC model, whilst innovative, isn't a panacea. Anthropic's Long-Term Benefit Trust offers a parallel experiment, adding another layer of oversight intended to ensure long-term thinking beyond typical corporate time horizons. These experiments matter because traditional approaches—pure nonprofit research or unfettered commercial development—have proven inadequate for AI's unique challenges.

The advanced AI governance community, drawing from diverse research fields, has formed specifically to analyse challenges like OpenAI's restructuring. This community would view the scenario through a lens of risk and control, focusing on how the new power balance affects deployment of potentially dangerous frontier models. They advocate for systematic analysis of incentive landscapes rather than taking stated missions at face value.

International coordination remains the missing piece. No single company or country can ensure AI benefits humanity if others pursue risky development. The restructuring might catalyse discussions about international AI governance frameworks, similar to nuclear non-proliferation treaties or climate agreements. Yet the competitive dynamics of AI development make such coordination extraordinarily difficult.

The role of civil society and public input needs strengthening. Current AI governance remains largely technocratic, with decisions made by small groups of technologists, investors, and government officials. Broader public participation, whilst challenging to implement, might prove essential for legitimate and effective governance. The nonprofit's enhanced resources could fund public education and participation programmes, but only if the board prioritises such initiatives.

The Liquidity Revolution

Perhaps no aspect of OpenAI's restructuring carries more immediate impact than the unprecedented employee liquidity event unfolding alongside the governance changes. In September 2025, the company announced that eligible current and former employees could sell up to $10.3 billion in stock at a $500 billion valuation—well above the initial $6 billion target and representing the largest non-founder employee wealth creation event in technology history.

The terms reveal fascinating power dynamics. Previously, current employees could sell up to $10 million in shares whilst former employees faced a $2 million cap—a disparity that created tension and potential legal complications. The equalisation of these limits signals both pragmatism and necessity. With talent wars raging and competitors offering billion-dollar packages to individual researchers, OpenAI cannot afford dissatisfied alumni or current staff feeling trapped by illiquid equity.

The mathematics are staggering. At a $500 billion valuation, even a 0.01% stake translates to $50 million. Early employees who joined when the company's valuation stood in the single-digit billions now hold fortunes that rival traditional tech IPO windfalls. This wealth creation, concentrated among a few hundred individuals, will reshape Silicon Valley's power dynamics and potentially seed the next generation of AI startups.
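The arithmetic behind these figures is simple enough to sketch. The snippet below is purely illustrative: it assumes the $500 billion headline valuation mentioned above and computes what a given percentage stake would be worth at that price.

```python
VALUATION = 500_000_000_000  # $500bn headline valuation from the tender offer


def stake_value(ownership_pct: float, valuation: float = VALUATION) -> float:
    """Dollar value of an equity stake at a given company valuation."""
    return valuation * ownership_pct / 100


# A 0.01% stake at a $500bn valuation:
print(f"${stake_value(0.01):,.0f}")  # $50,000,000
```

At this scale the percentages barely matter: even stakes that would be rounding errors at a typical startup translate into eight-figure sums.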

Yet the liquidity event also raises questions about alignment and retention. Employees who cash out significant portions might feel less committed to OpenAI's long-term mission. The company must balance providing liquidity with maintaining the hunger and dedication that drove its initial breakthroughs. The tender offer's structure—limiting participation to shares held for over two years and capping individual sales—attempts this balance, but success remains uncertain.
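The tender offer's structure can be sketched as a simple eligibility check. Everything here is illustrative: the article mentions a two-year holding requirement and caps on individual sales, but the specific cap figure, function name, and dates below are hypothetical.

```python
from datetime import date, timedelta

# Illustrative parameters only; the real tender-offer terms are not public
# in this level of detail.
MIN_HOLDING = timedelta(days=2 * 365)  # shares must be held over two years
SALE_CAP = 10_000_000                  # equalised per-seller cap, in dollars


def sellable_amount(grant_date: date, requested: float, as_of: date) -> float:
    """Amount a participant may sell in the tender offer, under the sketch rules."""
    if as_of - grant_date < MIN_HOLDING:
        return 0.0  # shares held under two years are ineligible
    return min(requested, SALE_CAP)


# A long-tenured employee requesting $15m is capped; a recent hire sells nothing.
print(sellable_amount(date(2022, 1, 1), 15_000_000, date(2025, 9, 1)))
print(sellable_amount(date(2024, 6, 1), 5_000_000, date(2025, 9, 1)))
```

The design intent is visible even in this toy version: the holding requirement rewards tenure, while the cap prevents any single seller from cashing out entirely, preserving ongoing alignment with the company.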

The secondary market dynamics reveal broader shifts in technology financing. Traditional IPOs, once the primary liquidity mechanism, increasingly seem antiquated for companies achieving astronomical private valuations. OpenAI joins Stripe, SpaceX, and other decacorns in creating periodic liquidity windows whilst maintaining private control. This model advantages insiders—employees, early investors, and management—whilst excluding public market participants from the value creation.

The wealth concentration has broader implications. Hundreds of newly minted millionaires and billionaires will influence everything from real estate markets to political donations to startup funding. Many will likely start their own AI companies, potentially accelerating innovation but also fragmenting talent and knowledge. The liquidity event doesn't just change individual lives—it reshapes the entire AI ecosystem.

The Global Chessboard

OpenAI's restructuring cannot be understood without examining the international AI governance landscape evolving in parallel. The summer of 2025 witnessed a flurry of activity as nations and international bodies scrambled to establish frameworks for frontier AI models.

China's Global AI Governance Action Plan, unveiled at the July 2025 World AI Conference, positions the nation as champion of the Global South. The plan emphasises “creating an inclusive, open, sustainable, fair, safe, and secure digital and intelligent future for all”—language that subtly critiques Western AI concentration. China's commitment to holding ten AI workshops for developing nations by year's end represents soft power projection through capability building.

The emergence of DeepSeek-R1 in early 2025 transformed these dynamics overnight. The model's frontier capabilities shattered assumptions about Chinese AI lagging Western development. Chinese leaders, initially surprised by their developers' success, responded with newfound confidence—inviting AI pioneers to high-level Communist Party meetings and accelerating AI deployment across critical infrastructure.

The European Union's AI Act, with its rules for general-purpose models taking effect in August 2025, creates the world's most comprehensive AI regulatory framework. Providers of frontier models must implement risk mitigation measures, comply with transparency standards, and navigate copyright requirements. OpenAI's PBC structure, with its public benefit mandate, aligns philosophically with EU priorities, potentially easing regulatory compliance.

Yet the transatlantic relationship shows strain. The EU-US collaboration through the Transatlantic Trade and Technology Council faces uncertainty as American politics shift. California's SB 1047, focused on frontier model safety, represents state-level action filling federal regulatory gaps—a development that complicates international coordination.

The UN's attempts at creating inclusive AI governance face fundamental tensions. Resolution A/78/L.49, emphasising ethical AI principles and human rights, garnered 143 co-sponsors but lacks enforcement mechanisms. China advocates for UN-centred governance enabling “equal participation and benefit-sharing by all countries,” whilst the US prioritises bilateral partnerships and export controls.

These international dynamics directly impact OpenAI's restructuring. The company must navigate Chinese competition, EU regulation, and American political volatility whilst maintaining its technological edge. The nonprofit board's enhanced resources could fund international cooperation initiatives, but geopolitical tensions limit possibilities.

The “AI arms race” framing, explicitly embraced by US Vice President JD Vance, creates pressure for rapid capability development over safety considerations. OpenAI's PBC structure attempts to resist this pressure through governance safeguards, but market and political forces push relentlessly toward acceleration.

The Path Forward

As 2025 progresses, OpenAI's restructuring will face multiple tests. California and Delaware attorneys general must approve the nonprofit's transformation. Investors need confidence that the PBC structure won't compromise returns. The massive employee liquidity event must execute smoothly without triggering retention crises. Competitors will probe for weaknesses whilst potentially adopting similar structures.

The technical challenges remain daunting. Building artificial general intelligence, if it is possible at all, requires breakthroughs in reasoning, planning, and generalisation. The capital requirements—trillions of dollars according to some estimates—dwarf previous technology investments. Safety challenges multiply as capabilities increase, creating scenarios in which a single mistake could have catastrophic consequences.

Yet the governance challenges might prove even more complex. Balancing speed with safety, access with security, and profit with purpose requires wisdom that no structure can guarantee. The restructuring creates a framework, but human judgment will determine outcomes. Board members must navigate technical complexities they may not fully understand whilst making decisions that affect billions of people.

The concentration of power remains concerning. Even with nonprofit oversight and public benefit obligations, OpenAI wields enormous influence over humanity's technological future. The company's decisions about model capabilities, deployment timing, and access policies affect billions. No governance structure can eliminate this power; it can only channel it toward beneficial outcomes.

Competition provides the most robust check on power concentration. Anthropic, Google, Meta, and emerging players must continue pushing boundaries whilst maintaining distinct approaches. Open-source alternatives, despite limitations for frontier models, preserve optionality and prevent complete capture. The health of the AI ecosystem depends on multiple viable approaches rather than convergence on a single model.

Regulatory frameworks need rapid evolution. Current approaches, designed for traditional software or industrial processes, map poorly to AI's unique characteristics. Regulation must balance innovation with safety, competition with coordination, and national interests with global benefit. The restructuring might accelerate regulatory development by providing a concrete governance model to evaluate.

Public engagement cannot remain optional. AI's implications extend far beyond Silicon Valley boardrooms. Workers facing automation, students adapting to AI tutors, patients receiving AI diagnoses, and citizens subject to AI decisions deserve input on governance structures. Funding public education and participation programmes would be a natural use of the nonprofit's resources, but only if the board prioritises democratic legitimacy alongside technical excellence.

The Innovation Paradox

A critical tension emerges from OpenAI's restructuring that strikes at the heart of innovation theory: can breakthrough discoveries flourish within structures designed for caution and consensus? The history of transformative technologies suggests a complex relationship between governance constraints and creative breakthroughs.

Bell Labs, operating under AT&T's regulated monopoly, produced the transistor, laser, and information theory—foundational innovations that required patient capital and freedom from immediate commercial pressure. Yet the same structure that enabled these breakthroughs also slowed their deployment and limited competitive innovation. OpenAI's PBC structure, with nonprofit oversight and public benefit obligations, creates similar dynamics.

The company's researchers face an unprecedented challenge: developing potentially transformative AI systems whilst satisfying multiple stakeholders with divergent interests. The nonprofit board prioritises safety and broad benefit. Investors demand returns commensurate with their billions in capital. Employees seek both mission fulfilment and financial rewards. Regulators impose expanding requirements. Society demands both innovation and protection from risks.

This multistakeholder complexity could stifle the bold thinking required for breakthrough AI development. Committee decision-making, stakeholder management, and regulatory compliance consume time and attention that might otherwise focus on research. The most creative researchers might migrate to environments with fewer constraints—whether competitor labs, startups, or international alternatives.

Alternatively, the structure might enhance innovation by providing stability and resources unavailable elsewhere. The $100 billion nonprofit stake ensures long-term funding independent of market volatility. The public benefit mandate legitimises patient research without immediate commercial application. The governance structure protects researchers from the quarterly earnings pressure that plagues public companies.

The resolution of this paradox will shape not just OpenAI's trajectory but the broader AI development landscape. If the PBC structure successfully balances innovation with governance, it validates a new model for developing transformative technologies. If it fails, future efforts might revert to traditional corporate structures or pure research institutions.

Early indicators suggest mixed results. Some researchers appreciate the mission-driven environment and long-term thinking. Others chafe at increased oversight and stakeholder management. The true test will come when the structure faces its first major crisis—a safety incident, competitive threat, or regulatory challenge that forces difficult trade-offs between competing objectives.

The Distribution of Tomorrow

OpenAI's restructuring doesn't definitively answer whether AI power will concentrate or diffuse—it does both simultaneously. The nonprofit retains control whilst reducing Microsoft's influence. The company raises more capital whilst accepting public benefit obligations. Competition intensifies whilst barriers to entry increase.

This ambiguity might be the restructuring's greatest strength. Rather than committing to a single model, it preserves flexibility for an uncertain future. The PBC structure can evolve with circumstances, tightening or loosening various constraints as experience accumulates. The nonprofit's enhanced resources create options for addressing problems that haven't yet emerged.

The $100 billion stake for the nonprofit creates a fascinating experiment in technology philanthropy. If successful, it might inspire similar structures for other transformative technologies. Quantum computing, biotechnology, and nanotechnology all face governance challenges that traditional corporate structures handle poorly. The OpenAI model could provide a template for mission-driven development of powerful technologies.

If it fails, the consequences extend far beyond one company's governance. Failure might discredit hybrid structures, pushing future AI development toward pure commercial models or state control. The stakes of this experiment reach beyond OpenAI to the broader question of how humanity governs its most powerful tools.

Ultimately, the restructuring's success depends on factors beyond corporate structure. Technical breakthroughs, competitive dynamics, regulatory responses, and societal choices will shape outcomes more than board composition or equity stakes. The structure creates possibilities; human decisions determine realities.

As Bret Taylor navigates these complexities from his conference room overlooking San Francisco Bay, he's not just restructuring a company—he's designing a framework for humanity's relationship with its most powerful tools. The stakes couldn't be higher, the challenges more complex, or the implications more profound.

Whether power concentrates or diffuses might be the wrong question. The right question is whether humanity maintains meaningful control over artificial intelligence's development and deployment. OpenAI's restructuring offers one answer, imperfect but thoughtful, ambitious but constrained, idealistic but pragmatic.

In the end, the restructuring succeeds not by solving AI governance but by advancing the conversation. It demonstrates that alternative structures are possible, that commercial and social objectives can coexist, and that even the most powerful technologies must account for human values.

The chess match continues, with moves and countermoves shaping AI's trajectory. OpenAI's restructuring represents a critical gambit, sacrificing simplicity for nuance, clarity for flexibility, and traditional corporate structure for something unprecedented. Whether this gambit succeeds will determine not just one company's fate but potentially the trajectory of human civilisation's most transformative technology.

As autumn 2025 deepens into winter, the AI industry watches, waits, and adapts. The restructuring's reverberations will take years to fully manifest. But already, it has shifted the conversation from whether AI needs governance to how that governance should function. In that shift lies perhaps its greatest contribution—not providing final answers but asking better questions about power, purpose, and the price of progress in the age of artificial intelligence.


References and Further Information

California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings. “Review of OpenAI's Proposed Financial and Governance Changes.” September 2025.

CNBC. “OpenAI says nonprofit parent will own equity stake in company of over $100 billion.” 11 September 2025.

Bloomberg. “OpenAI Realignment to Give Nonprofit Over $100 Billion Stake.” 11 September 2025.

Altman, Sam. “Letter to OpenAI Employees on Restructuring.” OpenAI, May 2025.

Taylor, Bret. “Statement on OpenAI's Structure.” OpenAI Board of Directors, September 2025.

Future of Life Institute. “2025 AI Safety Index.” Summer 2025.

Amodei, Dario. “Op-Ed on AI Regulation.” The New York Times, 2025.

TechCrunch. “OpenAI expects to cut share of revenue it pays Microsoft by 2030.” May 2025.

Axios. “OpenAI chairman Bret Taylor wrestles with company's future.” December 2024.

Microsoft. “Microsoft and OpenAI evolve partnership to drive the next phase of AI.” Official Microsoft Blog, 21 January 2025.

Fortune. “Sam Altman told OpenAI staff the company's non-profit corporate structure will change next year.” 13 September 2024.

CNN Business. “OpenAI to remain under non-profit control in change of restructuring plans.” 5 May 2025.

The Information. “OpenAI to share 8% of its revenue with Microsoft, partners.” 2025.

OpenAI. “Our Structure.” OpenAI Official Website, 2025.

OpenAI. “Why Our Structure Must Evolve to Advance Our Mission.” OpenAI Blog, 2025.

Anthropic. “Activating AI Safety Level 3 Protections.” Anthropic Blog, 2025.

Leike, Jan. “Why I'm leaving OpenAI.” Personal blog post, May 2024.

Nadella, Satya. “Partnership Evolution in the AI Era.” Microsoft Investor Relations, 2025.

Zuckerberg, Mark. “Building Open AI for Everyone.” Meta Newsroom, 2025.

China State Council. “Global AI Governance Action Plan.” World AI Conference, July 2025.

European Union. “AI Act Implementation Guidelines for General-Purpose Models.” August 2025.

United Nations General Assembly. “Resolution A/78/L.49: Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.” 2025.

Vance, JD. “America's AI Leadership Strategy.” Vice Presidential remarks, 2025.

Advanced AI Governance Research Community. “Literature Review of Problems, Options and Solutions.” law-ai.org, 2025.


Tim Green

UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0000-0002-0156-9795
Email: tim@smarterarticles.co.uk


#HumanInTheLoop #AIGovernance #PowerShift #TechEconomics