23 Comments

Telling everyone in creative endeavours that they can "step up into management" sent a cold shiver down my spine. It's the sort of corporate BS that has haunted millions who hate being pulled away from the flow-inducing work they love into a structure that is less personally satisfying.

You can do personally satisfying, flow-inducing work as a hobby, purely for enjoyment. When you do work for money, you need to care about how productive you are.

When you work in your garden, you do it by hand. When you farm as a job, you drive a tractor.

You can knit sweaters for personal satisfaction. When you manufacture clothing, you use machines.

You can ride horses for recreation. When you want to get somewhere, you use a car.

The great news is that the more productive we all can be, the more leisure time we'll have, and the more we'll be able to do those hobbies.

That line made me think of the Peter Principle - there are so many people who are good at one job, but aren’t good any more when you “promote” them to managing the people who do that job.

However, I think his next paragraph or two allayed some of my worries. He enumerated many of the parts of management that people tend to fail at - all the human-managing skills (or cat-herding skills).

There are definitely still some issues of moving from, say, being a writer to being an editor, but I think that you get to keep doing many of the parts you enjoy about being a writer, particularly if you’re not working with human writers who want to keep those bits for themselves.

Excellent essay! The future you describe is something to aim for, but I feel like the ride to get there will be bumpy. As Harari points out, we will have a large 'useless class' who may become unsettled as they (we?) struggle to find meaning in the new paradigm. The shifts that occurred off the back of the industrial revolution weren't all smooth sailing by any means! He talks about how communism and other systems were failed experiments as we tried to adjust, but that this time we don't have the option to try and fail. We only have one shot to get this right.

Yes, I don't think it will be smooth! Though I suspect it will be smoother than the transition that involved communism.

This is my major concern. The governance transition required will be immense, over such a short period of time. How quickly we redistribute the wealth that emerges from AGI will determine whether we have a high-tech low-life dystopia (at least for a time) or a well-governed abundant society.

I think probably the strongest counterargument to this post got handwaved away in this sentence: “Instead of deriving our income through labor, maybe we will derive it through ownership (no matter how society chooses to answer the questions of equity and fairness that arise).”

Let’s say this argument is correct: AI acts like capital in the economy, and as AI improves, the capital side of the equation grows in importance until eventually a capital-only model of the economy (to the exclusion of labor) is sufficient. Capital today is very unevenly distributed, and those who rise from having very little capital to a lot of it typically do so through massive applications of their own (and others’) skilled labor. What happens when that labor, on the margin, doesn’t really matter all that much? Do we get locked into a rigid hierarchy, where those who had the most capital at the moment we invent artificial superintelligence have the most capital forever?

No, I think that vision, taste, and judgment can amplify capital (or fritter it away).

(Also in practice I expect large wealth disparities to be reduced through private charity and/or government welfare.)

Interesting, so it sounds like your position (and please do correct me if I’m wrong!) is something like the following: AI will act like capital, but as it improves, it will continuously decrease the importance of labor in our model of the economy. As that happens, humans will move further up the value chain, running companies that employ AI agents. This will lower the barrier to entrepreneurship in two key ways:

1. Entrepreneurs can scale their workforce up and down instantly and incrementally, in much the same way that software developers today pay incrementally for more compute from AWS

2. Many of the key challenges facing CEOs (e.g. recruiting, employee motivation, aligning individuals strategically, etc.) will no longer apply, lowering the skill required for the position

This will lead to an explosion in entrepreneurship, with humans only bottlenecked in their ambitions by their own vision and taste. As a result, we can expect the top x% of the population in these skills to become wildly successful, building huge companies with far less time and effort than is required today. For those outside of the top x% of the population, it’s likely that some welfare program such as UBI will be implemented so that they share in the spoils. We can be relatively certain this will happen because, if it doesn’t, the resulting societal unrest will very likely destabilize any government that doesn’t implement widespread welfare programs.

Does that sound about right?

Something like that, yeah!

That's interesting! So our trajectory would look something like a smooth interpolation between today's world, a world where most humans doing productive work are AI managers, and a world where humans are being "paid" in the form of welfare with AI mostly managing itself.

In the short-to-medium term, I definitely jibe with the idea that people should be leveling up as much as possible in management ability. Particularly given that, in the test-time-compute paradigm of DeepSeek R1 and OpenAI o1, we seem to get linear improvements in performance for exponential increases in inference compute. (See any of the performance charts for o1 or o3, where the x-axis for dollars/compute used is conveniently log-scale.) This means that, at least for the time being, there will be limited capital (and hence intelligence) to allocate to each potential problem!
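A toy sketch of that log-linear relationship, with entirely made-up constants (not fit to any real o1/o3 chart):

```python
import math

# Toy model: benchmark score grows linearly in log10(inference spend).
# The base score and the gain per 10x of spend are invented for
# illustration; real o1/o3 scaling curves differ.
def toy_score(compute_dollars: float, base: float = 40.0,
              gain_per_10x: float = 10.0) -> float:
    return base + gain_per_10x * math.log10(compute_dollars)

for dollars in (1, 10, 100, 1000):
    print(f"${dollars:>4} of inference -> score ~{toy_score(dollars):.0f}")
# Each 10x increase in spend buys the same fixed bump in score.
```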

Heh. At first I read this as "The future of humanity is sin management".

Well, maybe that too

There seems to be a hidden assumption in this article about the nature of the general intelligences that we are creating. The type of advanced AI you describe seems to be some sort of chatbot-plus-plus: an oracle capable of answering all sorts of questions, with the pseudo-agency to perform long-horizon tasks accurately.

— First, is this an accurate steelman?

— If yes, a follow-up question: under other worldviews, we may get general intelligences which have true agency and their own desires and wants. In that world, what role do you think humans play?

Jason, I’ve followed you for some time, and met you once in Cambridge, and largely agree with your optimism about progress in the long term. This post was terribly disappointing, as your “expectations” read more like an apologetic for neoliberalism than a sound, well-thought-out argument.

Yes, over the long run agriculture, trade, feudalism, slavery, and the industrial and scientific revolutions all produced enormous wealth that, over the long run, “raised all boats”. But much of that wealth, if not most of it, went to those who controlled the “capital”, and those without suffered, often horribly. You mention mechanization of the loom with steam (powered by coal). Have you ever read Dickens? Every period of innovation in history that I can think of was fueled by workers who were not treated well until they gained power in the economic equation. Egypt? Greece? Rome? Bless you if you were a citizen, not a slave.

Slavery ends when the cost (including externalities) exceeds the benefits, or by revolt (a form of cost). In this country, northern industrialization, the need for wage workers in factories, and social opprobrium (real or pretend) made slavery too “costly”. Power ultimately shifted, but let’s not forget the cost of the Civil War. Feudalism in Europe ended when the Plague wiped out a third of the workers and the landed nobles had no way to manage without making concessions.

Monopolization is also a feature of innovation, going to first movers who can control the market (both labor and supply) with impunity. Our government (Teddy Roosevelt) busted the industrial monopolies that were strangling growth. FDR rode the wave of liberalism and labor anger to significant worker-friendly reforms, leading to huge gains in social welfare and productivity (WW2 helped, as did vet benefits). Carter implemented reforms that deregulated telephones and utilities and ended oil price controls. Would we have the internet today if Bell had controlled the market (labor and supply)?

Do not kid yourself: the next few decades are likely to get ugly as power accumulates among the billionaire elites, now privileged in terms of government capture here and globally. What reforms can reduce the inequalities to modest levels and avoid the social breakdowns that would greatly delay or even destroy the option of your optimistic future? I wrote a post on this; you can read or reprint with attribution if you wish: https://spiralinquiry.org/globalism-accused-tried-convicted-but-the-real-perpetrator-income-inequality/. Thanks for getting me excited!

> Who… is running the AI? Presumably some corporations? Where are they getting their profits from, if there are no customers left for anything anymore? Has the economy turned completely self-referential, running itself in completely automated fashion while all humans get taken out of the loop? An ouroboros economy, eating its own tail?

The economy is already self-referential. Humans being in the loop doesn't make that not so. Why can't AI simply be the customers and the laborers?

There's a difference between being partially self-referential and completely self-referential. How and when would humans be removed from the economy? Seems like we would do something about that when it happened, if we hadn't completely lost control?

> There's a difference between being partially self-referential and completely self-referential.

I don't understand this distinction. Human beings are fully part of the economy. We are our own clients and our own vendors. In what principled sense is the current economy less self-referential than this hypothetical one?

> How and when would humans be removed from the economy?

(Caveats below)

It's just evolutionary pressure favoring mechanisms that efficiently acquire and utilize resources. It's no different than asking "how and when would the mammoth go extinct?"

In detail: local incentives favor humans purchasing goods and services from the lowest-priced supplier. Except for niche tasks where there's a premium on the nostalgia of a human doing them (analogous to the niche roles horses have today), that supplier will typically be an AI (or whoever controls it), unless a human prices their labor below subsistence: the AI can do the task with fewer resources than the human consumes just to subsist for the time the task takes them. Thus labor cannot earn subsistence wages.
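A back-of-the-envelope version of the argument, with purely hypothetical numbers:

```python
# All numbers are hypothetical; they only show the shape of the argument.
HUMAN_SUBSISTENCE_PER_HOUR = 5.00  # minimum hourly income a human needs to live
HUMAN_HOURS_PER_TASK = 2.0         # time the human needs for the task
AI_COST_PER_TASK = 0.50            # resources the AI consumes on the same task

# The lowest price a human can charge while still subsisting:
human_floor = HUMAN_SUBSISTENCE_PER_HOUR * HUMAN_HOURS_PER_TASK  # $10.00

if AI_COST_PER_TASK < human_floor:
    # Any wage the human can survive on is undercut by the AI,
    # so the task goes to the AI (or whoever controls it).
    print(f"AI at ${AI_COST_PER_TASK:.2f}/task undercuts the human floor "
          f"of ${human_floor:.2f}/task")
```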

Caveats: This assumes the absence of coordination to enact social or political measures to redistribute wealth or restrict the use of AI even when local incentives favor it. Solving the relevant coordination challenges might be extremely difficult, perhaps much more so than those involving carbon emissions. But it's conceivable they will be solved.

So far, AIs have no desires, so they can’t be customers. One of the ways I’ve long thought of factoring the means of production into land, labor, and capital is this: land is the stuff whose quantity is fixed no matter what; labor is the stuff whose quantity is tied to the number of consumers who have desires; and capital is everything that can be created or destroyed without changing the desires out there. Though I see that this isn’t quite how Jason has distinguished things in the OP.
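In code form, that factoring is just a two-question decision rule (a sketch of my own definitions above, not anything from the OP):

```python
from enum import Enum

class Factor(Enum):
    LAND = "quantity fixed no matter what"
    LABOR = "quantity tied to the number of desiring consumers"
    CAPITAL = "can be created or destroyed without changing desires"

def classify(fixed_supply: bool, scales_with_consumers: bool) -> Factor:
    # Two-question rule capturing the definitions above.
    if fixed_supply:
        return Factor.LAND
    return Factor.LABOR if scales_with_consumers else Factor.CAPITAL

assert classify(True, False) is Factor.LAND      # e.g. acreage
assert classify(False, True) is Factor.LABOR     # e.g. workers
assert classify(False, False) is Factor.CAPITAL  # e.g. machines
```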

In what practically relevant, non-metaphysical sense do humans have desires that an RL-trained AI does not?

If trends continue and the assertions of AGI within less than five years prove true, what you’ve outlined seems plausible, if not likely. Then again, currently realized AI performance could plateau, or degrade, in as much time. The latter might create a perpetual treadmill/carrot scenario in which “believers” continue to lust after what (could have been, yet may still be!) remains ever possible ‘if only, if only, if only’.

...and when the AI is good enough to be the board of directors?

Then the human shareholders elect, and remove, the AI directors.
