Imagine you wake up one day in the glorious techno-abundant future, powered by AI. You eagerly check your subscriber count on Substack, but to your dismay, it has fallen once again. There are now AI-generated blogs for every interest, and it’s so hard to find a niche that isn’t taken. Your obsessive focus on the history and culture of lava lamps has only won you three paid subscribers, and that includes your mom and your old college roommate.
Well, you think, ages ago I wrote for WIRED, maybe they would take a piece? You email them and instantly get a personal reply (must be an AI autoresponder). They would be happy to take submissions, and they pay $3 per word. Great, you think, that’s a good—wait. It actually says $3 per kiloword. Crap. No chance you can make a living that way.
Well, you’ve got to pay for groceries somehow. Maybe you can fall back on the gig economy for a while? You consider Uber or Lyft, but you look at what they’re offering drivers, and when you do the math, it’s pennies per hour—there are so many robotaxis on the streets now that anyone can get a ride in a minute or two; no one needs to incentivize more human drivers. TaskRabbit? Same problem; there are so many robotic handymen that you can’t get more than a few dollars per task. Instacart? They’re also offering extremely low pay—but, the desperate thought creeps into your mind, maybe you could at least nibble some grapes out of the cart when no one is looking…
Does something about this picture seem strange to you? Well, it would seem to be the future envisioned in a recent article by Matthew Barnett claiming that “AGI could drive human wages below subsistence level—the bare minimum needed to sustain human life.” This post has been making the rounds and reigniting an old debate about AI taking all the jobs, which is really just the latest installment in a very old debate about technology taking all the jobs.
Sub-subsistence wages
Before I get into the substance of the argument, I want you to just react to the image conjured up by the idea of AI driving wages “below subsistence.” How realistic does that seem to you?
My first reaction, before I read the post, was: I have no idea what model of the world you are working from. So, all humans are basically starving, because they are thrown out of work or can only find the crappiest-paying jobs? And that’s because AI has automated all work in the economy? So… lots of economic activity is going on, lots and lots of production, but AI is doing all of it and people can’t get paid. Who… is running the AI? Presumably some corporations? Where are they getting their profits from, if there are no customers left for anything anymore? Has the economy turned completely self-referential, running itself in fully automated fashion while all humans are taken out of the loop? An ouroboros economy, eating its own tail? And how would this come about? It seems that somewhere along the incremental path to this state you’d hit some equilibrium that would prevent the economy from going further.
Or maybe you think that the entire economy is controlled by Sam Altman and Jensen Huang, or at least a relatively small group of ultra-wealthy owners of the few remaining companies that master AI and outcompete everyone and everything. A world full of robots creating a paradise, but it’s a gated community and only the wealthy 0.01% get to live there, trillionaires every one, while the unwashed masses starve in the streets. I’m sure some people find this plausible or even likely, but I don’t (mostly because I just don’t think this is the way an economy works, but secondarily because I would expect world governments to intervene with some sort of massive welfare program).
In any case, what doesn’t make any sense is all of humanity starving to death from unemployment. Jobs and technology have a purpose: producing the goods and services we need to live and thrive. If your model of the world includes the possibility that we would create the most advanced technology the world has ever seen, and the result would be mass starvation, then I think your model is fundamentally flawed.
What the post actually said
Sometimes the headline or the social media posts can give a misleading impression of what an author is actually saying, so it’s important to read the post before having a strong take. Having read it, I feel compelled to say: it is a bad post.
I hate criticizing other authors like this. Usually I prefer to just present a counterargument. I follow Matthew online, which means I must have found him to have smart and interesting takes in the past, and the post was hosted at Epoch AI, an organization I respect. But this article has caused a stir, and seems almost designed to inflame a specific flavor of unhelpful and unrealistic doomerism, and it does so based on a combination of flawed logic and unclear writing. So this is a rare case in which I feel it necessary to criticize directly. (Matthew, if you’re reading this, I hope this can be constructive, and I invite your rebuttal.)
First, there is a crucial clarification which the post saves for the very end, almost as an afterthought, which is that its analysis only applies to human wages and not human welfare. Matthew says that he is actually optimistic about our being able to live comfortably, deriving our income not from wage labor but from investments, charity, or government welfare. This is mentioned almost offhand in the last two paragraphs, after repeated references to human wages “crashing” below “the bare minimum needed to sustain human life” or “the level required for human survival” or “a wage so low that they cannot afford food” or “the level at which humans could afford enough calories to feed themselves.” It even mentions a scenario in which food cannot be grown anywhere in the solar system because land is being used for computers running AI. To go through all of this, and then to say that you’re actually optimistic, is a bizarre way to write.
Setting that aside, here is the substance.
The post gives a mathematical argument, but you don’t really need to understand the equations to get the intuition. Essentially, the argument is: at some point AI will be good enough to substitute for a human in any job—let’s say this is our definition of AGI. There will be a very large number of AGI “workers.” They will flood the market with labor, to the point where adding more labor in the form of humans is superfluous. Sure, you could employ a human, but there are so many workers already, doing pretty much everything useful that can be done; adding one more isn’t going to produce much extra output, so the wage for that job is necessarily very low. It might make sense to spin up one more AGI worker to do the job, because that only costs a bit of electricity (plus interest and depreciation on capex), but it doesn’t make sense to pay a human even a subsistence wage to do it.
That’s an informal argument. The post actually makes a formal argument based on the equations of economic growth theory (the Cobb-Douglas production function, if you know what that is, and the theory that wages are determined by the marginal product of labor). Matthew models the introduction of AGI as a large increase in the quantity of labor in these equations, and the result is that wages crash.
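For readers who want the formal version, here is a minimal sketch of the machinery (assuming the textbook Cobb-Douglas setup; Matthew’s exact parameterization may differ in details):

```latex
% Output Y from technology A, capital K, and labor L:
Y = A K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1

% Competitive wages equal the marginal product of labor:
w = \frac{\partial Y}{\partial L} = (1 - \alpha)\, A \left(\frac{K}{L}\right)^{\alpha}

% Holding A and K fixed, w -> 0 as L -> infinity.
```

That is the whole crash argument in two lines: wages are pinned to the marginal product of labor, and if AGI adds an effectively unbounded supply of labor-equivalents to L, that marginal product is driven toward zero.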
The post addresses two counterarguments. One is based on the notion of “comparative advantage”: even if AI is better than humans at everything, there is a law of economics that says production is maximized when humans do what they are relatively least bad at, and AI does what it is relatively best at. Matthew points out that this says nothing about human wages in such a scenario. I think he is correct here.
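A toy illustration, with made-up numbers: suppose an AGI instance can produce 100 units of code or 100 units of marketing copy per hour, while a human can produce 1 unit of code or 10 units of copy. The human is a hundred times worse at code but only ten times worse at copy, so total output is maximized by assigning the human to copywriting. But the price of copy is capped by the cost of spinning up one more AGI instance to write it, so the human’s wage for that work can still be arbitrarily close to zero. Comparative advantage guarantees humans a role, not a wage.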
The second counterargument is that AI will not just introduce more artificial workers into the economy; it will also drive technological progress, which increases productivity and therefore wages. In the equations, there is a term that represents technology: wages fall as total labor increases, but they rise as technology advances. Matthew’s rebuttal is that “there are limits to technological innovation,” that progress must therefore stop at some point, and that when it inevitably does, wages will crash.
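In the notation of the sketch above, the technology term enters linearly: w = (1 − α)A(K/L)^α, so rising A can in principle offset rising L indefinitely. Matthew’s rebuttal amounts to the claim that A is bounded above; if A can never exceed some ceiling while L grows without bound (and K is held fixed), then w still tends to zero in the limit.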
My reaction
My first reaction was to wonder: why model AI as a big increase in labor, instead of an increase in capital? In the equations, capital represents all the machines and infrastructure we use to do work; labor is the humans. AI is a new kind of tool that we use to get work done, amplifying human productivity. We invest money in assets (GPUs and software and weights on neural nets), those assets help us do work—that’s the essence of capital. If capital increases instead of labor, then wages rise—that’s what has been happening throughout the entire industrial era.
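To make the modeling choice concrete, here is a toy numerical sketch (pure illustration: Cobb-Douglas again, with invented parameters, calibrated to nothing):

```python
# Toy Cobb-Douglas comparison: the same AI buildout booked as labor vs. capital.
# All numbers are illustrative; nothing here is calibrated to a real economy.

ALPHA = 0.35  # capital share of output (a conventional ballpark figure)
A = 1.0       # technology level, held fixed for this comparison

def wage(K: float, L: float) -> float:
    """Marginal product of labor: w = (1 - alpha) * A * (K / L) ** alpha."""
    return (1 - ALPHA) * A * (K / L) ** ALPHA

K0, L0 = 100.0, 100.0
print(f"baseline wage:           {wage(K0, L0):.2f}")        # 0.65
print(f"AI as 100x more labor:   {wage(K0, 100 * L0):.2f}")  # ~0.13, crash
print(f"AI as 100x more capital: {wage(100 * K0, L0):.2f}")  # ~3.26, boom
```

The identical hundredfold increase in inputs cuts the wage to roughly a fifth when you book it as labor, and raises it roughly fivefold when you book it as capital. Much of the disagreement is really about which line item AI belongs on.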
The argument here is that rather than acting like a tool that we use to do work, AI will act more like a substitute for people, a new form of labor. But in the Industrial Revolution, mechanization also substituted for labor in many ways. The power loom substituted for weavers, and the spinning machine for spinners. A train hauling cargo employed far fewer people than the equivalent number of wagons. Electric streetlights put lamplighters out of work, and containerization made longshoremen obsolete.
So why didn’t wages decrease through the industrial age? Why do our economic models have separate terms for labor and capital, and why does adding capital increase wages whereas adding labor decreases them?
For one, capital has so far only partially substituted for labor. In many cases, people are still needed to operate the new machines: factory workers, truck drivers, etc. For another, automation creates lower prices and/or higher quality, which increases demand: far more cargo was sent by train than by wagon, and far more again was sent after containerization. Finally, new technologies create entirely new markets and industries, which create new jobs and new kinds of jobs. Two million people work for Walmart alone, but big-box retail didn’t exist a century ago. Everyone making automobiles or plastic or video games is in an industry that didn’t exist in 1900.
The argument for why “this time it’s different” is that AI might fully substitute for labor. So it’s not like a power loom, which still needs a human operator; it’s more like automatic telephone switching systems, which simply eliminated human telephone operator jobs. If AI or other new technology creates new jobs in new sectors, well, AI will also take those jobs, before humans even get a crack at them. And all of this will happen far faster than it did in the past, so people won’t get a chance to adapt. If your job gets eliminated by AI, you won’t even have time to reskill for a new job before AI takes that one too.
What I expect to happen
In engineering, everything happens iteratively; in economics, everything happens at the margin. Progress is spiky, or lumpy. The future arrives unevenly distributed.
AI will not fully substitute for all labor all at once:
Some jobs will simply be easier to automate with AI than others
In particular, some jobs have lower tolerance for mistakes, and it will take longer to achieve the standard of reliability they require
Some jobs have a physical labor component, and advanced robotics will arrive later than virtual “remote worker” AIs
Some jobs have a premium on human interaction
Some jobs are protected by licensing requirements, labor unions, etc.
For instance, I expect that we will have human doctors for quite some time. This job checks three of the above boxes: a licensed profession, with a low tolerance for mistakes, and a premium on human interaction. This is true even if AI becomes the first line of medical advice and diagnosis (replacing today’s nurse hotlines), and even if what doctors are doing is mostly asking AI for advice and then vetting—or rubber-stamping—the answers.
On the other end of the spectrum, I expect that call centers doing customer service will soon go away. AI will do this better, more reliably, and cheaper; and customer service is a cost center that consumers aren’t willing to pay for and businesses always minimize. (This will be disruptive for the call center workers, but it will be a boon for consumers—which is to say, everyone. Customer service will be smarter and more effective, it will make far fewer mistakes, it will be unfailingly polite, you won’t have to wait on hold, and you won’t have to be on the premium pricing tier to get it.)
The other thing that will happen, at the margin, is that people will work less and enjoy more leisure. If there is less total work worth doing, that doesn’t always take the form of some people being involuntarily thrown out of work. It can take the form of fewer working hours in the day, fewer working days in the week, more holidays and vacations. It can take the form of starting work later in life (more education, more “gap years”) or ending it earlier (a lower retirement age). It can take the form of more people choosing to work part time, or not at all—more stay-at-home parents to raise more kids, perhaps. All of this would simply be the continuation of trends that have been going on for a century or more.
This transition will happen faster than industrialization did: physical automation was rolling out for well over a century, whereas I expect intelligence automation to take mere decades. This means we’ll have to adapt faster. But AI will also give us the tools to adapt faster: it will accelerate any reskilling that happens, help connect talent with opportunities, and generally enhance job mobility. Even more importantly, it will accelerate the creation of new ventures and new industries.
A crucial way AI will do this is by greatly leveraging human vision and judgment—by making it cheaper, faster, and more reliable to bring ideas into reality. If you think it would be amazing to see Sherlock Holmes set in medieval Japan, or Beowulf done as a Hamilton-style hip hop musical, AI will help you create it. If you think someone should really write a history of the catalytic converter in prose worthy of The New Yorker, AI will draft it. If you think there’s a market for a new social media app where all posts are in iambic pentameter, AI will design and code the beta. If you want a kitchen gadget that combines a corkscrew with a lemon zester, AI will create the CAD files, and you can send them to a lights-out factory to deliver a prototype.
So, on the incremental path to the future, a major trend will be that humans step up a level, into management. A software engineer becomes a tech lead of a virtual team. A writer becomes an editor of a staff of virtual journalists. A researcher becomes the head of a lab of virtual scientists. Lawyers, accountants, and other professionals spend their time overseeing, directing, and correcting work rather than doing the first draft.
What happens when the AI is good enough to be the tech lead, the editor, the lab head? A few steps up the management hierarchy is the CEO. AI will empower many more people to start businesses.
You may think that most people aren’t suited to being CEOs, but the job of CEO will become much more accessible, because it will require less skill. You won’t have to recruit candidates or evaluate them; you won’t have to motivate or inspire them; you won’t have to train junior employees; if you correct a mistake they will never make it again; you’ll never catch them slacking off; you’ll never have to work around the vacations or sick days that they won’t be taking; you’ll never have to deal with low morale or someone who is sore they didn’t get a promotion; you’ll never have to mediate disputes among them or defuse office politics; you’ll never have to give them performance reviews or negotiate raises; and you’ll never have to replace them, because they’ll never quit. They’ll just work competently, diligently, and conscientiously, doing whatever you ask. They’ll be every manager’s dream employee. They won’t have a schedule; they’ll work on your schedule, and you can start or stop them at will: run them 24/7 if you want, or call on them once a year—and pay for only what you use, with no commitment or advance notice. Running a team of virtual agents will make managing humans look like herding cats.
AI employees will also be cheap, which means that the capital requirements of many new businesses will be much lower, and with tons of surplus wealth being tossed off by the increasingly automated economy, I expect starting up will become much easier. Many businesses will be started that seem non-viable today, addressing niche markets that can’t support a human team, but can totally support an AI team. An even longer tail of projects will be possible that don’t even rise to the level of businesses: projects that today cost millions, such as movies or apps, will be done by individuals on the side using their spare time and cash.
All of the above is consistent with a model in which AI is capital, not labor, and in which its effect is to multiply labor productivity, increase demand for all kinds of goods and services, create new jobs and industries, and raise the level of technology—all of which should dramatically increase wages. More importantly, it will dramatically increase ownership, which means income will be derived more and more from equity instead of wages.
And even more importantly, it will be a world of dramatically expanded human agency: any idea you have can be made real, with far fewer barriers in the way. In this world, the qualities that will be at a premium are taste, judgment, vision, and courage.
What I expect to happen after that
What happens when the AI is even good enough to be the CEO?
There is a level of management above the CEO: governance. The board of directors. I expect that even if and when humans no longer have to work, we will still be owners, and our role will be to formulate our goals, communicate them, and evaluate if they’re being achieved. Humanity will be the board of directors for the economy and the world.
In such a world—a fully automated economy—I expect that a minority of people will still work, but only those who want to, only those for whom work is rewarding and who find it brings meaning to life (and, like a successful entrepreneur on their second act, their work won’t have to actually earn an income on any timescale). Others will do whatever is most meaningful to them: pursue knowledge and satisfy their curiosity; express their creative vision in art or music; travel and explore; spend time with family and loved ones; play games or sports. (There will always be a role for human players, because the purpose of games and sports is not to achieve a practical outcome but to experience and to witness human ability.)
Are you afraid that humans will get bored in this world? Perhaps you are imagining that it will be calm and static, a Garden of Eden? On the contrary: it will be a far more exciting, dynamic, and fast-paced world than anything humans have known. Nothing will be the same from year to year, let alone decade to decade.
How will humans earn a living? I’m not sure the question will matter or make sense anymore. I don’t think you can plug numbers into an equation that was developed in the 1950s to determine the “wages” of “labor” in a world where those concepts might be obsolete. The capital-labor model didn’t really apply in the agricultural age, when productivity was limited by land and the ability of capital to raise productivity was bounded. It’s not obvious that it would apply in the intelligence age either. Land got taken out of the equation in the industrial age; we moved from a land-labor economy to a labor-capital economy. The intelligence age might be best modeled as a capital-only economy, or a capital-intelligence economy.
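For what it’s worth, growth theory already has a capital-only corner: in the so-called AK model of endogenous growth, output is simply Y = AK, and labor does not appear in the production function at all, so the question “what is the wage?” has no term to attach to. I’m not claiming that’s the right model, just that the 1950s workhorse isn’t the only one on offer.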
Instead of deriving our income through labor, maybe we will derive it through ownership (no matter how society chooses to answer the questions of equity and fairness that arise). Or maybe we will derive it through some other concept or mechanism that is unclear now. Or maybe the question will simply dissolve and feel archaic in retrospect. That future has too many unknown unknowns to do more than speculate.
In any case, the question will make even less sense in the ultimate future in which we have literally exhausted the possibilities allowed by the laws of physics, and have consumed all the matter and energy in our light cone. This is why I find Matthew’s counter-rebuttal about the limits of technological innovation absurd. When we reach those limits, we will have all our needs met instantly and effortlessly, we will be functionally immortal, and we will have colonized the galaxy. To worry about “wages crashing below subsistence levels” in such a world is nonsensical, unless you’re using that as a very strange way to say that people won’t have to work anymore and wouldn’t be able to contribute much to economic production if they wanted to.
All of this is a purely economic analysis, grounded in some basic assumptions, like that AI doesn’t take over the world, or that we don’t decide AIs are legal persons with rights who can negotiate for their own wages. But short of that, I expect that AI will do what all fundamental enabling general-purpose technologies have done throughout all of human history: raise our standard of living and accelerate progress.
Credit to Richard Ngo and Garry Tan whose comments were in my mind as I wrote this.
PS: Just before I hit publish on this, I found out that Sherlock in Japan and hip-hop Beowulf have already been done (so much for my attempt to be original!). The iambic pentameter social network and the corkscrew lemon zester are evidently ideas so bad that no one has built them yet—or so non-obviously good that we’ll have to wait for AGI to try them out.