Power in the Intelligence Age
Governments and frontier AI labs are on a collision course
Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 30,000+ curious souls on a journey to build a better future 🚀🔮
Earlier this month, OpenAI published an intriguing statement. It was called Industrial Policy for the Intelligence Age.
In this report, the company lays out a vision for how we should reorganise our economies around the arrival of superintelligence. The authors float a number of ideas, including higher taxes on corporate profits, and a four-day work week. They also suggest a new Public Wealth Fund — seeded with capital from government and the big AI companies — that would invest in the AI growth story and distribute the returns directly to citizens. Essentially, then, a form of universal basic income.
Step back for a moment, and notice what is happening here. The world’s most prominent AI company is saying, essentially, ‘our technology will prove so transformative that we’ll need to rewire capitalism.’ In an interview around the launch of the report, CEO Sam Altman framed it as a ‘New Deal moment for America’.
And mostly, the world shrugged and moved on.
OpenAI aren’t the only ones walking into this kind of territory. Anthropic are also hiring economists and researchers whose entire job it will be to figure out how superintelligence will reshape the broader economy.
A few thoughts on all this.
First, the OpenAI report contains some proposals we should take seriously. I’ve been writing for a while now about what I call post-human economics. The core idea aligns closely with all this. It is that we’re heading towards a radically new kind of economy. One in which machines do much of the work that has, until now, been the province of human beings. And one in which the old metrics we use to understand economic life fall into incoherence. When humanoid robots can do much of the physical labour currently done by people, and when billions of AI agents transact with one another every day, what does GDP really measure? My answer, in short, is nothing meaningful.
It’s hard for us to see the outlines of what comes next. But one thing seems clear to me. We are going to need radically new social and economic forms to allow human beings to flourish inside what is coming. The institutions we have — the tax systems, the welfare states, the labour markets — were built for an economy of human inputs and conditions of material and labour scarcity. That world is ending. Not tomorrow, but soon enough that we need to start thinking about it.
This is why I find the most radical of OpenAI’s proposals — the Public Wealth Fund — genuinely interesting. Sure, it’s not a solution to everything that is coming, or anything close to that. But it gestures in a helpful direction. If machines owned by an infinitesimally small number of people are going to generate vast levels of wealth, then we’ll need mechanisms to share that abundance more widely. I just can’t see a way around that.
And there’s a huge opportunity here. It is that the machines, combined with some new form of redistribution, can liberate people to do forms of work that the market has never valued. Care work, above all; looking after children, and looking after the elderly, of whom we have many and will soon have many more. These have always been among the most important things human beings do. In the economy that is coming, we might finally pay attention to these kinds of work.
But my second, and more important, thought: we shouldn’t leave it to a few giant technology companies to tell us all how this goes. We need more research, led by independent people. If you want to read beyond OpenAI’s recent report, then check out The Digitalist Papers Volume 2. This series of essays was convened by Erik Brynjolfsson of Stanford University: the world’s leading expert on the collision of emerging digital technologies and the economy. As the name suggests, Brynjolfsson was inspired by The Federalist Papers. He is an Alexander Hamilton for the AI age.
Slowly, then, mainstream economics is catching up with the world it purports to study. But in the end, the questions we face now can’t be settled by frontier AI labs, or even by academic economists. They must be settled by us. These questions are not technocratic in their nature. They are political.
So the urgent challenge we face now is to fire up a broader democratic exercise around the questions posed by superintelligence. So far, mainstream politics has said little of any value on the subject. But the Trump administration’s acrimonious dealings with Anthropic — which saw them fall out over military use of Anthropic’s models — are an early signal of a far deeper entanglement to come. Frontier AI labs and government are going to wrestle with one another for control of an epoch-shaping technology. And there is going to be an almighty argument over the new social and economic realities we should build around it.
The AI labs have done nothing to avoid this power struggle. In fact, they’ve done everything to bring it upon themselves. You can’t build a technology that you claim will lead to mass unemployment — a technology that you explicitly compare to nuclear weapons — and expect the governments of the Global North to just sit back and see how it all plays out.
Politics is coming to the Intelligence Age. We must raise our voice; we should Not Throw Away Our Shot to shape what is coming.
It’s going to be a wild ride. I’ll be back next week,
David.
This was #26 in the series Postcards from the New World, from NWSH. The title artwork is The Pond (1950) by LS Lowry.