Not a New Deal: Why OpenAI Cannot Write the Social Contract

On 6 April 2026, OpenAI dropped a thirteen-page document into the middle of an already feverish policy conversation and called it a starting point. Its title, “Industrial Policy for the Intelligence Age: Ideas to keep people first,” carried the hush of something self-consciously historic. Sam Altman, the company's chief executive, took to the airwaves and to his preferred medium of long, declarative blog posts to argue that the moment now demanded a new social contract on the scale of the Progressive Era and the New Deal. The proposals inside were the kind of ideas that, only a few years ago, would have made any Silicon Valley boardroom shudder. Robot taxes. A nationally managed public wealth fund seeded in part by AI companies themselves. Auto-triggering safety nets that activate when displacement metrics cross preset thresholds. A four-day work week financed by efficiency dividends. A reorientation of the federal tax base away from payroll and toward capital gains and corporate income, on the grounds that AI will hollow out the wages that fund Social Security.
It is, on its face, an extraordinary set of admissions. The company that has done more than any other to accelerate the present wave of labour disruption is now publicly conceding that the disruption is real, that it is large, that it cannot be left to the market to absorb, and that the welfare state as currently constituted will not survive the next decade without significant intervention. Coming from a firm valued at multiples that depend on continuing to deploy precisely the systems causing the disruption, the document reads less like a policy white paper and more like a confession with a list of conditions attached.
The Axios newsletter that broke the story, in its Behind the Curtain column, gave it a fitting name: Sam's superintelligence New Deal. The framing matters. Franklin Roosevelt's New Deal was negotiated by an elected president and a Congress responding to a Great Depression that no private actor had volunteered to fix. The terms were set by the public, through its representatives, and imposed upon capital. Altman's New Deal arrives in a different order. Capital is at the table first. The terms are being drafted by the entity with the most to gain from a particular shape of settlement. The public, in this telling, is invited to refine, challenge, or choose among the proposals through what OpenAI describes as the democratic process.
Which raises the question that the document itself cannot answer. When the company engineering the disruption is also authoring the response, is the social contract that emerges meaningfully different from one negotiated by the public it affects? And if it is different, in what direction does the difference run?
The Document Itself
The blueprint sets out three stated goals. Distributing the prosperity of AI-driven growth broadly. Mitigating the risks associated with superintelligence. Democratising access to AI systems and to the broader AI economy. Each is the kind of phrase that has appeared in industry governance literature since ChatGPT's launch in November 2022, and each has the soft, familiar texture of a press release that has been workshopped through several rounds of communications review.
The mechanisms proposed underneath are sharper. The public wealth fund would give every American citizen a direct stake in AI-driven economic growth through a nationally managed vehicle that could invest in diversified, long-term assets capturing growth in both AI companies and the broader set of firms adopting and deploying AI. Seed capital would come, in part, from AI companies themselves. The automation taxes are described as taxes related to automated labour, with the explicit acknowledgement that the existing payroll-based revenue base cannot survive a transition to capital-intensive production. The auto-triggering safety net would scale unemployment benefits, wage insurance, and cash assistance upward as displacement indicators worsen, then phase the supports out as conditions stabilise. The four-day work week is presented not as a mandate but as a framework for employers and unions to use efficiency dividends to compress hours without compressing pay.
There are also sections on cyber and biological risks, which Altman has cited as the two most immediate threats from advanced systems, and on the need for a national industrial strategy to keep frontier model development inside the United States. These sit slightly oddly next to the labour and welfare proposals, although they share a common architecture. They are framed as urgent, as inevitable, and as requiring significant public investment in a direction that happens to align with OpenAI's commercial interests.
That alignment is not necessarily a mark against the substance of any individual proposal. A public wealth fund is a serious idea with a long intellectual history, from Norway's sovereign wealth model to the Alaska Permanent Fund to the academic work of economists like Anthony Atkinson. A four-day work week has been trialled in the United Kingdom, Iceland, and Spain with broadly positive results on productivity and worker wellbeing. Robot taxes have been debated since Bill Gates floated the idea in a 2017 interview with Quartz. Auto-triggering fiscal supports were a central feature of pandemic-era proposals from economists across the political spectrum. None of this is invented from nothing, and the document is careful to nod toward the lineage.
What is new is the source. These ideas, when they have appeared in the policy literature before, have come from think tanks, academics, trade unions, and the political left. They have not, as a rule, come from the firms whose business models would be most directly taxed by them. The sight of OpenAI publishing a blueprint that asks for higher capital gains taxes on people like Altman himself is genuinely unusual. Fortune drew the obvious comparison to JPMorgan Chase chief executive Jamie Dimon, who has periodically called for higher taxes on the wealthy as part of a broader argument about social stability. The intellectual honesty in both cases is real. So is the strategic logic.
The Strategic Logic of Pre-emptive Reform
There is a long tradition in political economy of capital-intensive industries authoring the rules that govern them. The railroads did it with the Interstate Commerce Commission. The major broadcasters did it with the Federal Communications Commission. Wall Street did it with vast tracts of the Dodd-Frank legislation. The pattern is well documented in the regulatory capture literature, most influentially by the late economist George Stigler in the 1970s, and the rationale is straightforward. When disruption is coming for an industry, or when the industry is causing disruption that threatens to provoke a public backlash, it is far better to be inside the room where the response is being drafted than to be the subject of someone else's draft.
OpenAI's blueprint fits this pattern with unusual precision. The labour disruption that Altman is now publicly acknowledging is not a hypothetical. It is already showing up in entry-level white-collar hiring data, in the contraction of contract translation work, in the restructuring of customer service operations, in the visible distress of junior coders and graphic designers and copywriters whose work has been automated faster than the labour market can absorb the displacement. By 2026 the political pressure for some form of response was already building. Unions had begun organising around AI displacement clauses in collective agreements. State legislatures had introduced bills targeting automated decision systems in hiring, lending, and benefits adjudication. The European Union had passed and then partially walked back, through the Digital Omnibus, several sections of the AI Act under industry pressure. The political ground was moving, and the question for any frontier AI lab was no longer whether there would be a regulatory response but what shape it would take.
In that context, getting in front of the conversation with a comprehensive blueprint is exactly what a sophisticated political operator would do. The document does several things at once. It signals seriousness, which inoculates against accusations of indifference. It frames the problem in terms that the company can live with, particularly the assumption that the underlying technology will continue to be developed and deployed at the current pace by the current players. It offers concessions on tax and welfare that are real but bounded, and that can be negotiated downward as the legislative process unfolds. It positions Altman personally as a statesman rather than a technologist, which has been a consistent feature of his public posture since the Senate testimony of May 2023. And it shifts the burden of proof onto critics who must now explain why the company's preferred solutions are insufficient, rather than arguing from scratch about whether any solutions are needed at all.
The critics noticed. Within hours of the blueprint's release, several prominent voices in AI policy were arguing that the document was a sophisticated exercise in what one called regulatory nihilism. The phrase, picked up by Fortune in its coverage, captures a particular concern. By proposing a vast and ambitious package of reforms that would require years of political work to enact, OpenAI was effectively pushing the response off into the indefinite future while continuing to deploy systems whose effects would compound in the meantime. The blueprint's own language about being a starting point for discussion was, in this reading, a way of ensuring that the discussion never quite reached a conclusion.
There is a more charitable interpretation, and it deserves to be taken seriously. Altman and his colleagues may genuinely believe that the labour transition ahead is severe enough to require something like the New Deal, and that the political system as currently constituted is unlikely to produce such a response without significant prompting from the companies closest to the technology. On this reading, the blueprint is an attempt to use the company's platform and credibility to move a conversation that would otherwise drift. That this also happens to align with OpenAI's commercial interests is a feature, not a bug, because the alignment is what makes the proposal credible to other actors in the room. A blueprint authored by a hostile party could be dismissed. A blueprint authored by the company being asked to pay the new taxes is harder to ignore.
Both interpretations can be true at the same time. The history of progressive reform is full of cases where commercial self-interest and public interest converged on the same policy, and where the resulting legislation was better than either could have produced alone. The New Deal itself was negotiated with significant input from sympathetic capitalists who saw stabilisation as essential to their long-term interests. The question is not whether private interest is involved in public policy, because it always is, but whether the structure of the conversation allows other interests to enter on equal terms.
Who Is Not in the Room
This is where the analogy to the historical New Deal begins to strain. Roosevelt's coalition was assembled from organised labour, urban political machines, agrarian populists, civil rights activists, social workers, and reform-minded intellectuals as well as sympathetic business figures. The Wagner Act, which guaranteed the right to organise, was fought through Congress over the explicit objections of most of American industry. The Social Security Act was drafted by a committee that included the labour secretary Frances Perkins, the first woman to hold a cabinet position, and her staff of social insurance experts, many of whom had spent their careers studying European welfare systems. The terms were set by the public side of the negotiation and the private side accepted them because the alternative, in the depths of the Depression, was worse.
The OpenAI blueprint enters a very different room. There is no equivalent labour movement at the table, because the workers most affected by AI displacement are scattered across freelance markets and white-collar professions that have historically been weakly organised. There is no equivalent agrarian populism, although there are stirrings of an anti-AI politics in rural and small-town America driven by data centre siting disputes and energy costs. There is no Frances Perkins, no figure inside the federal government with both the expertise and the political authority to draft an alternative blueprint from the public side. The Biden-era executive order on AI was rescinded in January 2025. The current administration's approach has been characterised by a mix of industrial policy support for domestic frontier labs and a general scepticism of regulation. State-level initiatives like California's SB 53 have faced what critics have described as intimidation campaigns from industry, including, by some accounts, from OpenAI itself.
Into that vacuum, the blueprint arrives with the structural advantage of being the only fully developed document in the room. Other actors will respond, and the response will shape the eventual outcome, but they will be responding to a frame that OpenAI has already set. The choice of which proposals to discuss, which mechanisms to specify, which thresholds to use for the auto-triggering safety net, which assets to include in the public wealth fund, all of these have been pre-decided in ways that will be very difficult to undo as the conversation moves forward. This is the agenda-setting power that political scientists have studied for decades, and it is one of the most consequential forms of influence in any policy debate. The party that writes the first draft almost always wins more than the party that responds to it.
The democratic process to which OpenAI defers is not, in this context, a neutral arbiter. It is a political system in which lobbying spending by AI firms has roughly tripled since 2023, in which several former OpenAI employees now hold senior positions at the National Institute of Standards and Technology and the AI Safety Institute, in which the trade press is heavily dependent on access to frontier labs for the scoops that drive its business model, and in which the public's attention is fragmented across a hundred competing crises. In such a system, the actor with the most resources, the clearest message, and the earliest draft will tend to win, regardless of the merits of the underlying proposals. The blueprint's appeal to democratic deliberation is sincere in tone and structurally favourable to its author in effect.
The Substance of the Proposals
It is worth pausing on the proposals themselves, because the tendency to focus on the politics of who is speaking can obscure the question of whether what is being said is any good. Taken individually, the elements of the blueprint range from reasonable to genuinely impressive.
The public wealth fund is the most interesting. The Norwegian Government Pension Fund Global, often cited as the model, was built from oil revenues and now owns roughly 1.5 per cent of every listed company in the world, generating dividends that fund a significant portion of Norwegian public spending. The Alaska Permanent Fund pays an annual dividend to every Alaskan resident from the state's oil and mineral revenues. Both have endured across multiple political cycles and across changes of government. A US version seeded by AI companies would face significant constitutional and structural questions about taxing authority, about how the fund's investments would be governed, about whether the dividends would be paid as cash or held in trust, and about how the fund would avoid becoming a vehicle for political patronage. None of these questions is unanswerable, and the existence of working models elsewhere demonstrates that the basic concept is feasible. The blueprint is vague on the specifics, which is both a weakness and a strength. The vagueness leaves room for negotiation, and it also leaves room for the proposal to be hollowed out in implementation.
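The scale sensitivity matters more than the blueprint lets on, and some rough arithmetic shows why. The figures below are assumptions chosen purely for illustration, not numbers from the blueprint: a hypothetical fund value, a Norway-style sustainable draw rate, and an approximate US population.

```python
# Back-of-envelope sketch of a per-capita dividend from a US public
# wealth fund. Every figure here is an assumption for illustration;
# the blueprint specifies none of them.

fund_value = 1_000_000_000_000   # assumed $1 trillion fund
payout_rate = 0.04               # assumed sustainable annual draw
population = 340_000_000         # approximate US population

# Annual payout spread evenly across the population.
annual_dividend = fund_value * payout_rate / population
print(f"${annual_dividend:.2f} per person per year")
```

Even a trillion-dollar fund at a 4 per cent draw yields a dividend in the low hundreds of dollars per person per year, which is why the unanswered question of how much seed capital the AI companies would actually contribute is not a detail but the whole game.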
The automation tax is more contested. Economists are divided on whether taxing capital substitution for labour is an efficient way to fund welfare or whether it distorts investment in counterproductive ways. A 2017 report by the European Parliament's legal affairs committee floated a robot tax that the full Parliament then rejected as administratively complex and economically uncertain. The South Korean government has effectively implemented a soft version by reducing tax incentives for automation investment. The blueprint's framing in terms of taxes related to automated labour is loose enough to encompass several possible designs, from a direct levy on revenue produced by automated systems to a broader shift in the tax base toward capital gains. The latter is the more economically defensible approach and the one that several mainstream economists, including the late Atkinson and, more recently, Daron Acemoglu and Pascual Restrepo, have argued for in the context of AI displacement.
The auto-triggering safety net is the proposal closest to existing welfare state design. Several countries already have automatic stabilisers that scale unemployment benefits with macroeconomic conditions. The novelty in the blueprint is the proposal to use AI displacement metrics, rather than general unemployment, as the trigger. This raises a thorny measurement problem. There is no agreed-upon way to attribute job losses to AI specifically, as opposed to broader economic conditions, offshoring, demographic change, or business cycle effects. The Bureau of Labor Statistics has been working on improved measures, and academic work by economists at the Brookings Institution and the International Labour Organization has proposed several methodologies, but none is yet robust enough to serve as a legal trigger for benefit increases. The blueprint glosses over this difficulty.
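The mechanical core of such a trigger is simple to sketch, even though the hard part is the index itself. The following is an illustrative sketch only: the blueprint specifies no thresholds, no tiers, and no formula, so the function name, the 0 to 100 composite index, and every number below are hypothetical assumptions.

```python
# Hypothetical sketch of an auto-triggering benefit schedule. The
# displacement index is assumed to be a 0-100 composite of labour
# market indicators; all tiers and multipliers are invented here.

def benefit_multiplier(displacement_index: float) -> float:
    """Scale baseline unemployment benefits as displacement worsens."""
    tiers = [
        (10.0, 1.0),   # normal conditions: baseline benefits only
        (20.0, 1.25),  # elevated displacement: benefits up 25 per cent
        (35.0, 1.5),   # severe displacement: benefits up 50 per cent
    ]
    for threshold, multiplier in tiers:
        if displacement_index < threshold:
            return multiplier
    return 2.0  # crisis tier: benefits double until the index recedes

# Because the multiplier is a pure function of the index, supports step
# back down automatically as conditions stabilise, which is the
# phase-out half of the mechanism.
print(benefit_multiplier(8.0))
print(benefit_multiplier(28.0))
```

The sketch also makes the measurement problem concrete: everything hinges on `displacement_index` being a defensible number, and no such number yet exists.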
The four-day work week is the most popular proposal in opinion polling and the most difficult to implement in practice. The 4 Day Week Global trials run in the United Kingdom in 2022 and 2023 reported productivity gains and worker satisfaction improvements, and similar pilots in Iceland from 2015 to 2019 produced comparable results. The challenge is that compressing hours without compressing pay requires either productivity gains large enough to absorb the cost or employer willingness to accept lower margins. The blueprint's framing in terms of efficiency dividends is a bet that AI productivity gains will be large enough to make the arithmetic work. Whether they are, and whether the gains will be shared with workers rather than captured by capital, is precisely the question that the rest of the blueprint is trying to address. There is a circularity here that the document does not quite acknowledge.
Taken together, the substance is serious. A version of this blueprint produced by a left-leaning think tank would be celebrated as a comprehensive progressive vision. The fact that it is being produced by OpenAI does not make the substance worse. It does, however, change what the substance means.
The Meaning of a Privately Authored Social Contract
A social contract, in the tradition that runs from Hobbes through Locke and Rousseau to John Rawls, is not primarily a set of policies. It is a story about legitimacy. It explains why the people governed by a particular set of institutions accept those institutions as binding upon them. The classical answer is that they accept the institutions because they would have agreed to them under fair conditions of deliberation, behind what Rawls called the veil of ignorance, where no one knew in advance which position they would occupy in the resulting society. The legitimacy of the contract depends on the fairness of the process by which it was negotiated.
A blueprint authored by a private company and offered for public ratification is a different kind of object. It may contain perfectly sensible policies. It may even be more progressive than what the political system would produce on its own. But it cannot, by its nature, satisfy the legitimacy criterion that the social contract tradition requires, because the process by which it was produced was not one of fair deliberation among equals. It was one in which a single actor, with enormous resources and a direct stake in the outcome, sat down and wrote what it thought the response should be, and then invited everyone else to react.
This matters even if the resulting policies are good. The legitimacy of welfare state institutions in the twentieth century rested in significant part on the fact that they were won through political struggle by the people who would benefit from them. The Wagner Act was legitimate because workers fought for it. The National Health Service in the United Kingdom was legitimate because it was the product of a Labour government elected on a manifesto that promised it. Social Security was legitimate because it was passed by a Congress responding to mass unemployment and political mobilisation. When the beneficiaries are the authors, the institutions feel like theirs. When they are the recipients of someone else's plan, even a generous one, the relationship is different. It is closer to charity than to right.
There is also a more practical concern. A social contract written by a private company can be revised by that company at will. It is not embedded in democratic institutions in a way that constrains future behaviour. If OpenAI's commercial interests change, or if the political climate shifts, the blueprint can be quietly walked back, the proposed taxes can be diluted, the safety nets can be conditioned on requirements that the company finds acceptable. The history of corporate social responsibility commitments is full of such revisions. The Business Roundtable's 2019 statement on the purpose of the corporation, which committed signatory chief executives to consider stakeholders beyond shareholders, has been studied extensively in the years since, and a 2022 paper by law professors Lucian Bebchuk and Roberto Tallarita at Harvard found little evidence that the signatories had actually changed their behaviour. Voluntary commitments from powerful actors tend to remain voluntary in practice, even when they are framed as binding in principle.
The OpenAI blueprint is not, formally speaking, a commitment at all. It is a set of recommendations addressed to policymakers. But the framing is such that the company gets credit for the proposals regardless of whether they are enacted. If they are enacted, OpenAI can claim authorship. If they are not enacted, OpenAI can claim that it tried, and that the failure lies with the political system. Either way, the company has shifted the moral terrain in its favour without taking on any actual obligation. The asymmetry is structural and difficult to reverse.
What a Public-Side Response Would Look Like
It is easy to criticise the blueprint and harder to say what a more legitimate process would produce. But the outlines are not impossible to sketch. A public-side response would begin with the question of who should be at the table and would expand the conversation accordingly. It would include trade unions, particularly the new generation of unions organising in tech, retail, and platform-mediated work. It would include civil society organisations that have been working on welfare state reform for decades. It would include academic economists across the ideological spectrum, not just those whose work is congenial to the AI industry. It would include representatives of the workers whose labour is being displaced, in forums designed to give them meaningful voice rather than ceremonial input. It would include international perspectives, given that the labour disruption is global and the policy responses in Europe and Asia are already further developed than in the United States.
It would also start from a different question. Rather than asking how to manage the transition that the AI companies are creating, it would ask what kind of transition the public actually wants, and at what pace, and with what safeguards. The answers might converge on some of the same proposals that the OpenAI blueprint contains. Or they might not. They might include more restrictive measures, such as mandatory disclosure of AI use in employment decisions, or moratoria on the deployment of certain systems in sensitive sectors, or stronger collective bargaining rights for workers in AI-exposed industries. They might include proposals that the blueprint does not contain, such as public ownership of frontier model training infrastructure, or mandatory licensing of foundation models on terms set by public authorities, or international treaties on the labour effects of AI deployment.
The point is not that any particular alternative is necessarily better. The point is that the deliberative process matters, and that a process in which the affected parties have genuine power to shape the outcome produces different results than one in which they are presented with a finished document and asked to react. Democratic legitimacy is not a property of policies. It is a property of the process by which policies are made.
The OpenAI blueprint, for all its sophistication and all its substantive merits, is the product of a process that does not meet that standard. It is closer to a corporate prospectus than to a constitutional moment. The use of New Deal language is not accidental. It is an attempt to borrow the legitimacy of a historical settlement that was won by very different means, and to apply it to a present settlement that is being authored on very different terms.
The Asymmetry That Will Not Resolve Itself
None of this is to say that OpenAI should not have published the blueprint, or that Altman is wrong to argue for the proposals it contains, or that the substance is not worth taking seriously. The document is a meaningful contribution to a conversation that needed to happen, and the company deserves some credit for being willing to put taxation of itself on the agenda. The criticism is not about intent. It is about structure.
The structural problem is that the actors who have the most information about what AI systems can do, the most capacity to model their effects, and the most resources to shape the policy response are the same actors whose commercial success depends on a particular shape of that response. There is no way to remove this conflict of interest without either nationalising the industry, which is not on the political horizon in any major economy, or building public capacity to match the private capacity, which would require sustained investment in regulatory expertise, academic research, and civil society infrastructure of a kind that has not been seen in the United States since the 1970s. Neither option is immediately available, which means that the conversation will continue to be shaped, for the foreseeable future, by documents like the OpenAI blueprint.
What can be done in the meantime is to be honest about what is happening. The blueprint is not a neutral contribution to a deliberative process. It is a strategic intervention by a powerful actor with a direct stake in the outcome. Treating it with the seriousness its substance deserves does not require pretending that the politics are anything other than what they are. A social contract negotiated by a private company is meaningfully different from one negotiated by the public it affects, not because the private actor is necessarily acting in bad faith, but because the conditions of fair deliberation are not met when one party writes the first draft and the others are asked to react.
The question, then, is not whether to engage with the blueprint. It is whether to engage with it as a final document or as a provocation. Treated as a final document, it threatens to lock in a particular framing of the AI labour transition that will be very difficult to revise later. Treated as a provocation, it could be the starting point for a much broader conversation in which the affected parties get a real seat at the table and the policies that emerge carry the legitimacy that comes from genuine democratic authorship. Which of these two things it becomes will depend less on the content of the blueprint itself than on whether other actors have the capacity and the will to mount a serious response.
So far, the signs are mixed. Trade unions have begun to organise around AI displacement, but they are starting from a weak position in the white-collar sectors most affected. Academic economists are producing important work, but it is fragmented and underfunded relative to industry-sponsored research. State legislatures are experimenting, but they are vulnerable to pre-emption by federal law. Civil society organisations are engaged, but their resources are tiny compared to the lobbying capacity of the major AI firms. The European Union has the regulatory capacity, but the Digital Omnibus has shown that even that capacity can be rolled back under sufficient industry pressure.
The blueprint, in this context, looks less like a New Deal and more like a new equilibrium. It is the moment at which the AI industry, having produced a labour disruption that it could not deny, moved to author the terms of the response. Whether that response becomes a genuine social contract or a managed concession will depend on whether the rest of the political system can rouse itself to insist on something more. The democratic process to which OpenAI defers is the only mechanism that can produce a different outcome, and it is precisely the mechanism that has been weakened by decades of corporate consolidation, declining union membership, regulatory capture, and the fragmentation of public attention. The blueprint is an artefact of that weakness as much as it is a response to the technology it describes.
History will record what happens next. The current moment may be remembered as the beginning of a new social settlement, comparable in scale to the one Altman invokes. Or it may be remembered as the moment when the language of the New Deal was borrowed by the very actors that the original New Deal was designed to constrain, and used to legitimate a settlement that the public had no real hand in writing. The difference between these two outcomes is not a matter of policy substance. It is a matter of who is in the room, who holds the pen, and whether the process by which the contract is negotiated is one that the people governed by it can recognise as their own.
For now, the pen is in Altman's hand. The room is the one that OpenAI has built. And the contract on the table is the one the company has written. The democratic process is being invited to refine, challenge, or choose among the options provided. Whether it will do anything more than that is the question that the next several years will answer.
References & Sources
- Altman, S. and OpenAI. “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” OpenAI, 6 April 2026. https://openai.com/index/industrial-policy-for-the-intelligence-age/
- OpenAI. “Industrial Policy for the Intelligence Age” (full PDF). https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf
- Allen, M. “Behind the Curtain: Sam's Superintelligence New Deal.” Axios, 6 April 2026. https://www.axios.com/2026/04/06/behind-the-curtain-sams-superintelligence-new-deal
- The Hill. “OpenAI's Sam Altman Releases Blueprint for Taxing, Regulating Artificial Intelligence.” 6 April 2026. https://thehill.com/policy/technology/5817906-openai-ai-policy-recommendations/
- TechCrunch. “OpenAI's Vision for the AI Economy: Public Wealth Funds, Robot Taxes, and a Four-Day Workweek.” 6 April 2026. https://techcrunch.com/2026/04/06/openais-vision-for-the-ai-economy-public-wealth-funds-robot-taxes-and-a-four-day-work-week/
- Fortune. “Sam Altman Says AI Superintelligence Is So Big That We Need a 'New Deal.' Critics Say OpenAI's Policy Ideas Are a Cover for 'Regulatory Nihilism.'” 6 April 2026. https://fortune.com/2026/04/06/sam-altman-says-ai-superintelligence-is-so-big-that-we-need-a-new-deal-critics-say-openais-policy-ideas-are-a-cover-for-regulatory-nihilism/
- Fortune. “Sam Altman's Big Pitch to Fix the Big AI Mess Sounds Like Jamie Dimon's.” 6 April 2026. https://fortune.com/2026/04/06/sam-altmans-capital-gains-taxes-4-day-workweek/
- Newsweek. “Sam Altman Proposes Robot Tax as American Economy Transforms.” 6 April 2026. https://www.newsweek.com/sam-altman-proposes-robot-tax-as-american-economy-transforms-11788200
- Decrypt. “OpenAI Calls for Global Shift in Taxation, Labor Policy as AI Takes Over.” 6 April 2026. https://decrypt.co/363431/openai-global-shift-labor-taxation-ai-sam-altman
- The Next Web. “OpenAI Calls for Robot Taxes, a Public Wealth Fund, and a Four-Day Week.” 6 April 2026. https://thenextweb.com/news/openai-robot-taxes-wealth-fund-superintelligence-policy
- The Tech Portal. “OpenAI Proposes AI Driven Economic Change Including Robot Taxes, Public Wealth Funds and a Four Day Work Week.” 6 April 2026. https://thetechportal.com/2026/04/06/openai-proposes-ai-driven-economic-change-including-robot-taxes-public-wealth-funds-and-a-four-day-work-week
- eMarketer. “OpenAI Moves to Shape AI Policy Debate.” 6 April 2026. https://www.emarketer.com/content/openai-moves-shape-ai-policy-debate
- Stigler, G. J. “The Theory of Economic Regulation.” Bell Journal of Economics and Management Science, 1971.
- Bebchuk, L. A. and Tallarita, R. “The Illusory Promise of Stakeholder Governance.” Cornell Law Review, 2020, with follow-up empirical work published 2022.
- Acemoglu, D. and Restrepo, P. “Robots and Jobs: Evidence from US Labor Markets.” Journal of Political Economy, 2020.
- Atkinson, A. B. “Inequality: What Can Be Done?” Harvard University Press, 2015.
- 4 Day Week Global. UK Pilot Programme Results, 2023. https://www.4dayweek.com/
- Norwegian Government Pension Fund Global, Norges Bank Investment Management public reporting. https://www.nbim.no/
- Alaska Permanent Fund Corporation public reporting. https://apfc.org/
- European Parliament Committee on Legal Affairs. Report on Civil Law Rules on Robotics, 2017.
- Gates, B. Interview with Quartz on robot taxation, February 2017.

Tim Green, UK-based Systems Theorist and Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk