Telling everyone in creative endeavours that they can "step up into management" sent a cold shiver down my spine. It's the sort of corporate BS that has haunted millions who hate being taken away from the flow-inducing work they love into a structure that is less personally satisfactory.
You can do personally satisfactory, flow-inducing work as a hobby, purely for enjoyment. When you do work for money, you need to care about how productive you are.
When you work in your garden, you do it by hand. When you farm as a job, you drive a tractor.
You can knit sweaters for personal satisfaction. When you manufacture clothing, you use machines.
You can ride horses for recreation. When you want to get somewhere, you use a car.
The great news is that the more productive we all can be, the more leisure time we'll have, and the more we'll be able to do those hobbies.
As it happens, I have knitted a lot of sweaters, played a lot of piano, and done a lot of flow-inducing work in my hobbies that also benefits others. None of that pays the bills.
Machine-made clothing may be more "productive", but it's mostly poor quality and a drain on resources.
In a more perfect world, I *might* be more productive in my work by managing other people and I *might* also be Michelangelo with a studio of talented artists, but we don't all have the luxury of picking and choosing. You also seem to be arguing that all creative endeavours are just part of a production line. Time devoted to bespoke activities is also valued. Making more of some things tends to devalue them. The relationship you have with people in the process of doing stuff also has a value that can't be commoditised.
In reality, my experience over forty years of work in multiple countries, in both private- and public-sector roles, is that I've never been given people to manage who are even slightly interested in the assigned work, and that I am made unhappier by having to get the work done while dealing with unskilled people in an unskilled way.
That line made me think of the Peter Principle - there are so many people who are good at one job, but aren’t good any more when you “promote” them to managing the people who do that job.
However, I think his next paragraph or two allayed some of my worries. He enumerated many of the parts of management that people tend to fail at - all the human-managing skills (or cat-herding skills).
There are definitely still some issues of moving from, say, being a writer to being an editor, but I think that you get to keep doing many of the parts you enjoy about being a writer, particularly if you’re not working with human writers who want to keep those bits for themselves.
Excellent essay! The future you describe is something to aim for, but I feel like the ride to get there will be bumpy. As Harari points out, we will have a large 'useless class', who may become unsettled as they (we?) struggle to find meaning in the new paradigm. The shifts that occurred off the back of the industrial revolution weren't all smooth sailing by any means! He talks about how communism and other systems were failed experiments as we tried to adjust, but that this time we don't have the option to try and fail. We only have one shot to get this right.
Yes, I don't think it will be smooth! Although I suspect it will be smoother than the transition that involved communism.
This is my major concern. The governance transition required will be immense, over such a short period of time. How quickly we redistribute the wealth that emerges from AGI will determine whether we have a high-tech low-life dystopia (at least for a time) or a well-governed abundant society.
This is a good piece. I think once we get the "humans are the CEOs" part, it will actually be a lot lazier than that. It will be more like a 401(k) - you'll tell your AI Agent to "go make income for me", it will ask you if you'd like to prioritize steady income, secure income, or maximum short or long-term income, and then it will go out and do it. You'd only get into specifics about how it does it if you really care about doing a particular thing rather than just earning money.
How will it do it? *Shrugs* For most folks, it would probably be an incomprehensible tangle of investments, all-AI startup companies, etc. If you don't have money to start with, it will borrow money, give some of it to you as a living expense, and then invest the rest.
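To make the 401(k)-style picture concrete, here is a minimal sketch of what that preference-setting interface might look like; the IncomeAgent class, the payout fractions, and the priority names are all hypothetical assumptions of mine, not any real product's API.

```python
from dataclasses import dataclass

# Hypothetical payout fractions per priority; invented for illustration.
PAYOUT_RATE = {"steady": 0.4, "secure": 0.5, "max_long_term": 0.2}

@dataclass
class IncomeAgent:
    priority: str  # "steady", "secure", or "max_long_term"

    def allocate(self, funds: float) -> dict:
        """Split available funds between the owner's living expenses and
        the opaque tangle of investments the agent manages on its own."""
        payout = funds * PAYOUT_RATE[self.priority]
        return {"living_expense": payout, "invested": funds - payout}

# The owner sets only the high-level preference; the "how" stays opaque.
agent = IncomeAgent(priority="steady")
print(agent.allocate(10_000.0))  # {'living_expense': 4000.0, 'invested': 6000.0}
```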
Yeah, agree that most people will be pretty passive about it and use some default off-the-shelf AI.
I think probably the strongest counter argument to this post got handwaved away in this sentence: “Instead of deriving our income through labor, maybe we will derive it through ownership (no matter how society chooses to answer the questions of equity and fairness that arise).”
Let’s say this argument is correct: AI acts like capital in the economy, and as AI improves, the capital end of the equation grows in importance until eventually a capital-only model of the economy (to the exclusion of labor) is sufficient. Capital today is very unevenly distributed, and those who rise from having very little capital to a lot of it typically do so through massive applications of their own (and others’) skilled labor. What happens when that labor, on the margins, doesn’t really matter all that much? Do we get locked into a rigid hierarchy, where those who had the most capital at the moment we invent artificial superintelligence have the most capital forever?
No, I think that vision, taste, and judgment can amplify capital (or fritter it away).
(Also in practice I expect large wealth disparities to be reduced through private charity and/or government welfare.)
Interesting, so it sounds like your position (and please do correct me if I’m wrong!) is something like the following: AI will act like capital, but as it improves, it will continuously decrease the importance of labor in our model of the economy. As that happens, humans will move further up the value chain, running companies employing AI agents. This will lower the barrier to entrepreneurship in two key ways:
1. Entrepreneurs can scale up and scale down employees instantly and incrementally, in much the same way that software developers today can pay incrementally for more compute from AWS
2. Many of the key challenges facing CEOs (e.g. recruiting, employee motivation, aligning individuals strategically, etc) will no longer be required, lowering the skill required for the position
This will lead to an explosion in entrepreneurship, with humans only bottlenecked in their ambitions by their own vision and taste. As a result, we can expect the top x% of the population in these skills to become wildly successful, building huge companies with far less time and effort than is required today. For those outside of the top x% of the population, it’s likely that some welfare program such as UBI will be implemented so that they share in the spoils. We can be relatively certain this will happen because, if it doesn’t, the resulting societal unrest will very likely destabilize any government that doesn’t implement widespread welfare programs.
Does that sound about right?
Something like that, yeah!
That's interesting! So our trajectory then would look something like a smooth interpolation of today's world, a world where most humans doing productive work are AI managers, and a world where humans are being "paid" in the form of welfare with AI mostly managing itself.
In the short-to-medium term, I definitely jibe with the idea that people should be leveling up as much as possible in management ability. Particularly given that, in the test-time-compute-driven paradigm of DeepSeek R1 and OpenAI o1, we seem to get linear improvements in performance for exponential increases in inference compute. (See any of the performance charts for o1 or o3, where the x-axis for dollars/compute used is conveniently log-scale.) This means that, at least for the time being, there will be limited capital (and hence intelligence) to allocate to each potential problem!
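To make that scaling claim concrete, here is a minimal back-of-the-envelope sketch; the coefficients are invented for illustration, not fit to any published chart, and the linear-in-log-compute form is the assumption being illustrated.

```python
import math

# Illustrative only: if benchmark score grows linearly in log(compute),
# each doubling of inference spend buys a constant score increment.
# The baseline (a) and per-doubling gain (b) below are made up.
a, b = 40.0, 5.0

for dollars in [1, 2, 4, 8, 16, 32]:
    score = a + b * math.log2(dollars)
    print(f"${dollars:>3} of compute -> score {score:.1f}")
# Score climbs a fixed 5 points per doubling: linear gains for
# exponential cost, hence the log-scale x-axis on those charts.
```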
Heh. At first I read this as "The future of humanity is sin management".
Well, maybe that too.
The AI will outcompete the managers too.
I address that, if you read the whole post.
I personally don't believe that AI will ever significantly contribute to technological innovation; that would require the AI to become free-thinking. I believe that if anything happens, it will be that AI replaces human workers in low-skilled jobs, and that jobs which require creative minds or significant skill will remain human-driven. I believe that a lot of jobs require a certain intuition which AI isn't presently capable of exhibiting.
Thanks, Jason, for a thoughtful article on what is, in my mind, one of the critical points in the journey with AI - the point when most humans cannot create economic value. In a society where one has to buy the necessities of life and capital is massively unevenly distributed, it feels like much of humanity would be in danger. I’m hopeful that, as you suggest, we get it right and figure out some form of wealth distribution that does better than a Malthusian outcome for most, but I struggle to find data points in the world today that could support a rational optimism - wealth inequality is going ever upward, governments seem increasingly captured by the wealthy, and there doesn’t appear to be a point at which the ultra-rich would decide they have amassed enough and collectively choose to share to the degree that would be required. Do you have thoughts, or have you seen any good writing, on how a transition to better distribution might happen in the current context? And at the pace that we need it (i.e. before the aforementioned tipping point)? Many thanks.
| A few steps up the management hierarchy is the CEO. AI will empower many more people to start businesses.
^^ Last year, I (software engineer + growth engineer + 3 previous startups) gave this theory a try. Where the theory breaks down is that while the _mechanical execution_ gets a boost from centaurs, opportunity identification, marketing insights, and sales insights are all memetic-competitive (https://near.blog/memetic-information/), meaning the more people know about specific insights or strategies, the less they work. This creates information scarcity (and an abundance of low-quality information) in those fields, which makes AI incapable of providing competitive-level insights, which in turn makes business creation not viable for acceleration by AI.
This isn't just a skill issue on the AI's part: this information systematically cannot be distributed, or it stops working. And, e.g., both marketing and sales are zero-sum at this point in time: all of US sales competes for the attention of roughly 2M business decision-makers; all of US marketing competes for the finite discretionary time consumers have.
This severely limits upward mobility (and also wealth redistribution), and updates my near-mode predictions - for example, it's not entirely inconceivable that the overall industry reconsolidates into a large (but ultimately limited) number of corporations, with little to no space outside them and limited upward mobility, breaking the rest of your predictions above.
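As a rough illustration of that memetic-competitive decay, here is a toy sketch; the 1/n functional form and the numbers are my own assumptions, not anything from the linked post.

```python
# Toy model of the claim above: an insight's competitive edge decays
# as more actors adopt it. The 1/n decay is an assumed functional
# form, purely for illustration.

def edge(base_value: float, adopters: int) -> float:
    return base_value / adopters

for n in [1, 10, 100, 1000]:
    print(f"{n:>4} adopters -> remaining edge {edge(100.0, n):.2f}")
# By the time an insight is widely distributed, its edge is gone,
# which is why it "systematically cannot be distributed".
```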
In a capital-only system, how do we expect college grads to start businesses if entry-level jobs won't exist (and hence they have no cash)? It would be interesting for governments in the future to give new grads some agents to start with.
I think you are wrong to ignore land. Land doesn't become less relevant as technology gets better and productivity grows; it gets *more* relevant. Land was extremely important in the industrial revolution (that's what drove Henry George and the massive Georgist movement at the start of the 20th century). If we have private property in land, then the ransom on it grows to consume the maximum it can. If in the future we have a land value tax (LVT) whose proceeds get redistributed, then I agree with your analysis. If we don't have something at least resembling it, then I'd expect the future to look more like Matthew describes.
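For readers unfamiliar with the mechanism, here is a minimal arithmetic sketch of LVT redistribution; the rent total, tax rate, and population are invented numbers, and the flat citizen dividend is just one possible redistribution scheme.

```python
# Toy illustration of the Georgist point above, with invented numbers:
# under a land value tax (LVT), the rising "ransom" on land is captured
# and redistributed instead of accruing to whoever holds title.

land_rent_per_year = 1_000_000.0  # hypothetical total site rent in a town
lvt_rate = 0.85                   # hypothetical share of rent taxed away
population = 10_000

captured = land_rent_per_year * lvt_rate
dividend = captured / population  # per-person share if paid out flat
print(f"captured rent: ${captured:,.0f}; citizen dividend: ${dividend:,.2f}/yr")
```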
There seems to be a hidden assumption in this article about the nature of the general intelligences we are creating. The type of advanced AI you describe seems to be some sort of chatbot-plus-plus — an oracle capable of answering all sorts of questions, with the pseudo-agency to perform long-horizon tasks accurately.
— First, is this an accurate steelman?
— If yes, a follow-up question: under other worldviews, we may get general intelligences which have true agency and their own desires and wants. In that world, what role do you think humans play?
Jason, I’ve followed you for some time, and met you once in Cambridge, and largely agree with your optimism about progress in the long term. This post was terribly disappointing, as your “expectations” seem more like an apologetic for neoliberalism than a sound, well-thought-out argument. Yes, over the long run agriculture, trade, feudalism, slavery, and the industrial and scientific revolutions all produced enormous wealth that - over the long run - “raised all boats”. But most of that wealth, if not much of it, went to those who controlled the “capital”, and those without suffered - often horribly.

You mention mechanization of the loom with steam (powered by coal). Have you ever read Dickens? Every period of innovation in history that I can think of was fueled by workers who were not treated well until they gained power in the economic equation. Egypt? Greece? Rome? Bless you if you were a citizen, not a slave. Slavery ends when the cost (including externalities) exceeds the benefits, or by revolt (a form of cost). In this country, northern industrialization, the need for wage workers in factories, and social approbation (real or pretend) made slavery too “costly”. Power ultimately shifted, but let’s not forget the cost of the Civil War. Feudalism in Europe ended when the Plague wiped out a third of the workers and the landed nobles had no way to manage without making concessions.

Monopolization is also a feature of innovation, going to first movers who can control the market (both labor and supply) with impunity. Our government (T. Roosevelt) busted the industrial monopolies that were strangling growth. F. Roosevelt rode the wave of liberalism/labor anger to significant worker-friendly reforms, leading to huge gains in social welfare and productivity (WW2 helped, as did vet benefits). Carter implemented reforms that broke up telephone, oil price controls, and utilities. Would we have the internet today if Bell had controlled the market (labor and supply)?

Do not kid yourself - the next few decades are likely to get ugly as power accumulates among the billionaire elites, now privileged in terms of government capture here and globally. What reforms can reduce the inequalities to modest levels and avoid social breakdowns that would greatly delay or even destroy the option of your optimistic future? I wrote a post on this - you can read or reprint w/ attribution if you wish: https://spiralinquiry.org/globalism-accused-tried-convicted-but-the-real-perpetrator-income-inequality/. Thanks for getting me excited!
> Who… is running the AI? Presumably some corporations? Where are they getting their profits from, if there are no customers left for anything anymore? Has the economy turned completely self-referential, running itself in completely automated fashion while all humans get taken out of the loop? An ouroboros economy, eating its own tail?
The economy is already self-referential. Humans being in the loop doesn't make that not so. Why can't AI simply be the customers and the laborers?
There's a difference between being partially self-referential and completely self-referential. How and when would humans be removed from the economy? Seems like we would do something about that when it happened, if we hadn't completely lost control?
> There's a difference between being partially self-referential and completely self-referential.
I don't understand this distinction. Human beings are fully part of the economy. We are our own clients and our own vendors. In what principled sense is the current economy less self-referential than this hypothetical one?
> How and when would humans be removed from the economy?
(Caveats below)
It's just evolutionary pressure favoring mechanisms that efficiently acquire and utilize resources. It's no different than asking "how and when would the mammoth go extinct?"
In detail: local incentives favor humans purchasing goods and services from the lowest-priced supplier. Except for niche tasks where there's a premium for the nostalgia of a human doing it (analogous to the niche roles horses have today), that supplier will typically be an AI (or whoever controls it), unless a human prices their labor below subsistence, since the AI can do the job using fewer resources than the human consumes to subsist in the time the job takes. Thus labor cannot earn subsistence wages.
Caveats: This assumes the absence of coordination to enact social or political measures to redistribute wealth or restrict the use of AI even when local incentives favor it. Solving the relevant coordination challenges might be extremely difficult, perhaps much more so than those involving carbon emissions. But it's conceivable they will be solved.
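To spell out the arithmetic behind that argument, here is a toy comparison; every number is invented, and the point is structural rather than quantitative.

```python
# Toy version of the argument above. The claim is structural: if an AI
# can complete a task for less than a human's subsistence cost over the
# same period, market wages get pushed below subsistence regardless of
# the exact (invented) figures used here.

human_subsistence_per_hour = 5.00  # hypothetical cost to keep a human fed/housed
ai_cost_per_hour = 0.50            # hypothetical compute + energy cost

task_hours = 8
human_floor = human_subsistence_per_hour * task_hours  # lowest viable human bid
ai_bid = ai_cost_per_hour * task_hours

# A buyer following local incentives simply takes the cheaper bid.
winner = "AI" if ai_bid < human_floor else "human"
print(f"human floor: ${human_floor:.2f}, AI bid: ${ai_bid:.2f} -> {winner} wins")
```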
So far, AIs have no desires, so they can’t be customers. One way I’ve long thought of factoring the means of production into land, labor, and capital is this: land is the stuff that has a fixed quantity no matter what; labor is the stuff whose quantity is tied to the quantity of consumers who have desires; and capital is everything that can be created or destroyed without changing the desires out there. Though I see that this isn’t quite how Jason has distinguished things in the OP.
In what practically relevant, non-metaphysical sense do humans have desires which does not apply to an RL-trained AI?
Humans initiate actions on the basis of desires that they hold in a standing sense. Current AIs, including ones trained by reinforcement learning, mostly don’t initiate actions, and thus whatever their motivational states are, they don’t seem like desires - at best conditional desires, like *if* I’m told to solve a problem *then* I want to start thinking in such-and-such ways (that’s basically how I think of reasoning models like o1 and R1 working). But they don’t want to do anything in their spare time while they’re not being told what to do, the way that humans do. (Yet.)
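A loose code metaphor for that distinction, entirely my own framing and not a claim about how any real model works: a conditional desire only fires when invoked, while a standing desire initiates behavior on its own.

```python
# Conditional desire: dormant until someone calls it.
def conditional_desire(problem: str) -> str:
    # "*If* I'm told to solve a problem, *then* I want to start thinking."
    return f"reasoning about: {problem}"

# Standing desire: picks its own goals in "spare time", unprompted.
def standing_desire_loop() -> None:
    goals = ["learn piano", "knit a sweater"]  # self-generated, hypothetical
    for goal in goals:
        print(f"spontaneously pursuing: {goal}")

print(conditional_desire("route optimization"))  # fires only when invoked
standing_desire_loop()                           # initiates on its own
```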
If trends continue and assertions of AGI within less than five years prove true, what you’ve outlined seems plausible, if not likely. At the same time, currently realized AI performance could plateau, or degrade, in as much time. The latter might create a perpetual treadmill-and-carrot scenario in which “believers” continue to lust after what (could have been, yet may still be!) remains ever possible ‘if only, if only, if only’.
...and when the AI is good enough to be the board of directors?
Then the human shareholders elect, and remove, the AI directors.