21 Comments
Eugine Nier:

My theory is that generating innovation requires extremely non-arbitrary social arrangements. Unfortunately, those very innovations tend to be disruptive, thus liable to push society out of the region ideal for innovation.

Nathan Smith:

The economic theories surrounding economic growth, technological change, and ideas are important to understand, and kudos for putting this explainer out there.

But let me challenge you to take it further, in a certain direction: *map the space of technological possibilities.*

Economists have a concept called a "production function," which describes outputs as a function of inputs. But overwhelmingly, if not exclusively, production functions are framed in abstract terms, like "GDP" being a function of "capital" and "labor."

Real supply chains, meanwhile, link production functions that are more concrete and specific. You need such and such an amount of coal and turbines to make such and such an amount of electricity. That sort of thing.

You can think of it as a network of nodes and directional links.
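
To make that concrete, here's a minimal Python sketch of such a map; every good, coefficient, and recipe below is a made-up illustration rather than real data:

```python
# Each "recipe" turns specific input quantities into one unit of a specific
# output: nodes are goods, directed links are input requirements.
recipes = {
    "electricity_MWh": {"coal_t": 0.45, "turbine_hours": 0.02},
    "turbine_hours":   {"steel_t": 0.001},
    "steel_t":         {"iron_ore_t": 1.6, "coal_t": 0.8},
}

def upstream_requirements(good, qty, totals=None):
    """Walk the network and accumulate the primary inputs needed for qty of good."""
    if totals is None:
        totals = {}
    inputs = recipes.get(good)
    if inputs is None:  # no recipe: treat as a primary input (a leaf node)
        totals[good] = totals.get(good, 0.0) + qty
        return totals
    for inp, per_unit in inputs.items():
        upstream_requirements(inp, per_unit * qty, totals)
    return totals

# Rough input bill for 1,000 MWh of electricity under these made-up numbers.
print(upstream_requirements("electricity_MWh", 1_000))
```

Invention, in this picture, is literally adding new recipes (nodes and links) to the map.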

Invention expands the map. New nodes are added. That can be very useful. But exploration of the technology space should not be taken for granted. There are inventions which are obsolete, no longer useful. There can be inventions for which the economy is not ready.

Invention may be less important as a determinant of progress than specialization and division of labor, and/or capital accumulation.

Where I think you're in an almost uniquely brilliant position to contribute is at the intersection of economics and engineering. Work on mapping what is known. I think you'll find that while more research may be needed in some places, there are a lot of useful but unimplemented, or insufficiently implemented, technologies already known, and investment and better organization are more important at this stage than new discoveries.

Of course, if we were exploiting the known technology space more thoroughly, that would encourage discovery and invention as well.

Vakus Drake:

One flaw I see in your analysis is that you are only considering growth over timescales that are still extremely short compared to that of human civilization overall. If you consider things centuries or millennia ahead, it becomes obvious that there are numerous physical limits that will not let you sustain even a small exponential growth rate long enough to colonize more than a minuscule fraction of the galaxy.

The issue is that the rate at which you can gather resources grows at most polynomially. Even if you expanded at near lightspeed in every direction, the resources inside your sphere would only grow with the cube of time, and exponential growth always eventually outstrips any polynomial:

If you do the math for an 8 billion population with a 2% growth rate, then in 20k years every Planck volume inside a sphere that has been expanding at lightspeed the whole time needs to hold roughly twelve quadrillion people. With a 1% growth rate it's about 1.4e-35 cubic meters per person, or about 3.35e+69 Planck volumes per person. Meanwhile a hydrogen atom is tens of thousands of times bigger than that, at roughly 1.5e+74 Planck volumes.
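
For anyone who wants to check the arithmetic, here's a rough Python version of the comparison (physical constants are approximate, and the sphere is simply how far a lightspeed expansion could reach in 20,000 years):

```python
import math

# Rough check of the figures above (constants approximate).
PLANCK_LENGTH = 1.616e-35         # m
PLANCK_VOLUME = PLANCK_LENGTH**3  # ~4.2e-105 m^3
LIGHT_YEAR = 9.461e15             # m

years = 20_000
pop0 = 8e9

# A sphere expanding at lightspeed only grows cubically in time...
radius_m = years * LIGHT_YEAR
sphere_m3 = (4 / 3) * math.pi * radius_m**3
planck_volumes = sphere_m3 / PLANCK_VOLUME   # ~7e165

# ...while fixed-percentage population growth is exponential.
pop_2pct = pop0 * 1.02**years                # ~8e181
pop_1pct = pop0 * 1.01**years                # ~2e96

print(pop_2pct / planck_volumes)  # ~1e16 people per Planck volume at 2%
print(sphere_m3 / pop_1pct)       # ~1e-35 cubic meters per person at 1%
```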

So I've come to the conclusion that without a singleton AI or some authoritarian world government (which would need to emerge prior to major space colonization), you will *eventually* have the inner parts of the future human expansion sphere turn into a post-biological Malthusian hell.

The overpopulation issues I describe won't kick in for centuries at least, meaning that by the time people realize the danger, civilization will be too spread out to feasibly restrict every group's reproduction. I say post-biological because once resource scarcity becomes dire, digital minds will be able to support themselves for a lot longer, since they can in principle be vastly more resource-efficient.

So you have a strong impetus to try to stay on the frontier of expansion so you can avoid future resource scarcity (as long as your particular expansion fleet has some internal regulation restricting its growth, a problem for which many mediocre solutions and one good one exist). Then you want to leave the space behind you totally mined out and filled with automated defenses, so nobody finds it worthwhile to try to follow you, which prevents competition for the resources you're trying to claim. If you keep going long enough, you will eventually be over the Hubble horizon from your competitors, at which point you never have to deal with outside competition again.

As for your point about birth rates reversing, I would point out a data point you should have included: if you look specifically at wealthy women, their birth rates are notably higher than the rest of their society, and also in line with how many children women say they'd *like* to have given the opportunity. That suggests underpopulation won't be a short- to medium-term problem if everyone reaches the quality of life of currently rich people.

Swami:

Insightful comment.

I would offer two complementary responses.

First, worrying about growth rates centuries or millennia in advance is probably not very worthwhile. Too many unknown unknowns. We should focus on the part of the forward trail that is within our view and potential understanding.

Second and more importantly, as JC implies in his discussion of AI researchers, it just is not the case that growth requires more resources. It can require more resources, and honestly usually does. However, it is possible to get more efficient and achieve growth at the same time.

A virtual reality hell and a virtual reality utopia, for example, could require exactly the same resources. But one is the ultimate regression, and the other the ultimate progress.

In other words, qualitative improvements do not necessarily require quantitative increases in energy or resources.

Vakus Drake:

Had to send this on mobile, so pardon the formatting.

There's a reason I focused specifically on population: however few resources a digital mind requires, the minimum is set by the Landauer limit, and you can't make computers below the Planck scale. My analysis basically assumes infinite energy, and shows you still run out of physical space in which to put things and people, regardless of their substrate.
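
To put rough numbers on those two floors, here's a quick sketch (constants approximate, temperatures just examples):

```python
import math

# The two hard floors mentioned above, with approximate constants.
K_B = 1.380649e-23         # Boltzmann constant, J/K
PLANCK_LENGTH = 1.616e-35  # m

def landauer_joules_per_bit(temp_kelvin):
    """Minimum energy to erase one bit at a given temperature (Landauer limit)."""
    return K_B * temp_kelvin * math.log(2)

print(landauer_joules_per_bit(300))  # ~2.9e-21 J at room temperature
print(landauer_joules_per_bit(2.7))  # ~2.6e-23 J at today's CMB temperature

# However efficient the hardware gets, components can't be packed below
# roughly the Planck scale, so volume itself remains a hard ceiling.
print(PLANCK_LENGTH**3)              # ~4.2e-105 m^3 per Planck volume
```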

As for timing:

Firstly, unless anti-aging research hits a wall pretty soon, these are things that may actually affect you. And you have a strong incentive to want to be on the frontier of expansion, which means setting off *before* things get bad. Also, things like super-fast (and fast-reproducing) digital minds or people being sufficiently wasteful could massively shorten the time before a Malthusian trap.

Secondly, if these problems are to be addressed at all, it must be done well before they kick in, since enforcement of any countermeasures becomes impossible unless something like an enforcer AI is sent on every colony ship from the outset of expansion.

For instance, if these things aren't considered when making AI, you might lock yourselves out of being able to do anything about the problem other than getting out of Dodge early.

Swami:

Thanks, but I am still confused. The essay is not arguing for population growth. It is arguing for a growth in ideas — over the reasonably foreseeable future.

Perhaps it would be useful for me and other people following the conversation if you clarified what physical limits affect the issue over the next few decades?

Vakus Drake:

Basically you can physically only use resources so efficiently. So you do eventually have to reach a "peak resource" situation you genuinely can't escape from. Even if you had unlimited energy, physical space itself would eventually become a bottleneck.

While this is definitely a long-term issue, there are a number of things that could pull the problem forward to within a few decades. For instance, digital minds running at 1000x speed may also reproduce at 1000x speed. Additionally, self-replicating machines aren't limited by human birth rates, so if people hoard or waste enough resources via automation, that could dramatically speed things along.

Additionally, even if these issues don't arise in the next few decades, we still likely need to plan for them soon, simply because we might otherwise pass the critical period in which anything can be done to address them.

Swami:

Yes, we need to pursue progress and abundance today with a thought to the longer term future as it develops.

Nathan Smith:

I made a presentation about it some years ago. I think you would find it very beneficial to take a look.

Let me know if you'd like me to write a guest post about it. I'd like to connect with your audience.

https://drive.google.com/file/d/1pc1s3rPb1y1bOdmjVUTUxnmcMnS09R0y/view?usp=drivesdk

Vakus Drake:

I'd be interested in that and I read your presentation.

We'll just need to decide how long we want to make this and how to approach the topic, because there are several integrally connected subjects here, which can only be fully appreciated in light of each other.

Here's a 1,748-word rough draft that briefly goes into all of the different parts of the subject; let me know what you think:

https://docs.google.com/document/d/1hriepzt7W7k7Eq33pHr7Txol5Rzzj0Vs3PWl6v_KiLM/edit?usp=sharing

Nathan Smith:

That's a very thought-provoking article. I could offer a take on those topics along the lines of the presentation I linked above if that's of interest. Is that what you had in mind?

I think the essay that you linked would benefit by being more explicitly and carefully grounded in traditional utility theory as economists have long developed it. But that wouldn't support some of the directions that you're trying out. For example, some of what you write feels like transhumanism, but utility theory essentially assumes that human nature is a given. It requires the utility function to be exogenously fixed in order to have something to maximize.

I think utility theory is a valuable mental discipline, but a very incomplete one. My point of departure would be different, though.

Vakus Drake:

OK, I just spent a fair number of hours thinking about how to model these things economically. To do it justice I think I'd need to model things in a few different ways, so give me your thoughts on this group of models:

I should note I'm mostly modelling things for post-biological civilizations, for two reasons. For one, it simplifies things a lot, since utility ends up being largely reducible to compute. Secondly, there are strong convergent reasons for any civilization to move towards being post-biological in the long term, because of just how dramatically more efficient it is (meaning factions that go that route will expand faster, until the original biological people are rounding errors). So I would expect any civilization to converge, by default, closer and closer to the model I present over time.

First, in any article I'd want to start with a jumping-off point that is easier to model within conventional economics:

The Federation is a good example of something close to a perfectly efficient, meritocratic, purely status-based economy, since the remaining material scarcity (in the form of, say, dilithium crystals) doesn't really impact its citizens much. This should still be easy enough to apply traditional economic modelling to, since people are competing for finite status and have different comparative advantages. In this context you need to replace money with a sort of abstract measure of labor value, but honestly I don't know that that changes literally anything.

The next step is to consider economics once humans are no longer the driving force in the economy, which makes humans most similar to capital whose utility in turn generates utility for the aligned AI. You have to model this in two parts.

First, if we assume the default scenario where competition isn't reined in by civilization-wide regulation, then economics really looks a lot more like physics. This can be split into macro and micro lenses:

From a macroeconomic standpoint, civilization can be modelled as an outward-travelling wave, since everyone is incentivized to expand to avoid waste-heat pollution as an externality and to claim as many resources as possible. This wave would travel at some fraction of lightspeed and have a thickness determined by how efficiently self-replicating machines can produce more of themselves. Of course, on a smaller scale this smooth wave is made up of many individual fleets hopping between small bodies; at a certain tech level you needn't build generation ships, because space is full of stopovers in the form of dwarf and rogue planets in the outer system and between the stars.

From a microeconomic standpoint, if you are maximizing utility, then expanding as fast as possible towards unclaimed resources does exactly that.

You want to build the largest possible stockpile of resources before the later stages of the universe, both because you will eventually run out of new resources to collect, and because the longer you wait to burn through them, the more efficiently you can do so. Since the Landauer limit scales with temperature, the closer the background gets to absolute zero, the more computing you can do with the same energy, which for a post-biological civilization translates into more total utility from the same resources.

You'd also potentially be incentivized to ensure machines totally mine out the path behind you and set up automated defenses, such that it wouldn't be worth it for anyone to follow you and compete for resources. The amount of resources most efficiently spent on booby traps would be determined by how valuable the resources you're claiming are; similarly, this would also affect the optimal speed of expansion. So variation in resource distribution would lead to small-scale divergences from this model, though on large enough scales resources become homogeneous enough that these variations ought to cancel out.

Within this lens, self-replicating machines can be modelled as a sort of weirdly solipsistic economy, where machines take resources and turn them into capital (the computers/people within them) as well as stockpiling them for later use. This capital can then be used to produce utility in the form of the experiences had by those within the computers' simulations. What's interesting here is that utility ends up having a reverse time-discounting curve, because you can get more utility from the same resources in the future, when it's colder.
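
As a toy version of that reverse discounting curve, assuming utility scales with computation and each bit erasure costs k_B·T·ln(2) (the future temperature below is purely illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bit_erasures_per_joule(temp_kelvin):
    """Landauer bound: computation per unit energy scales as 1/T."""
    return 1.0 / (K_B * temp_kelvin * math.log(2))

for label, temp in [("today's CMB, ~2.7 K", 2.7),
                    ("illustrative far future, ~0.027 K", 0.027)]:
    print(f"{label}: {bit_erasures_per_joule(temp):.2e} bit erasures per joule")

# A 100x colder background buys ~100x more computation from the same stockpile
# of energy, so a compute-maximizer prefers spending resources later, not sooner.
```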

Interestingly, this means that from a big-picture macro standpoint, an unaligned AI trying to calculate as many digits of pi as possible would look much like a decentralized swarm of AI machines and ships trying to maximize utility for the humans they're caretakers for, since both are essentially trying to maximize total compute over the lifetime of the universe.

From the standpoint of human-like minds you need to model some things differently. For one, a post-biological civilization's inhabitants really only have two different forms of goods they care about:

Status/meaning is one form of scarcity, since almost everyone wants to be above average, which is a statistical impossibility. You can fix this, however, by recognizing that people not only have a demand for status but also supply it to others through what they recognize as worthwhile.

That means you can raise the supply of status until it totally satisfies demand by creating new people who still produce status for others but don't draw upon the supply of zero-sum status for themselves. With a slight addition this also solves the issue of demand for reproduction eventually outstripping supply: you can ensure these new people don't want to have kids with each other, so the original people's demand for reproduction can be totally satisfied without creating exponential growth that would eventually reverse this.

Novelty is another form of scarce good, but it is so cheap to provide that it takes a while to become relevant. It bears mentioning, however, because it is still finite. So at a certain point, to avoid exhausting novelty as a resource, you need to do something such as enhance your own mind slightly. That way you suddenly have access to a bunch more novelty you couldn't appreciate at your previous level of intelligence (the same way plenty of adults' entertainment would be boring or incomprehensible to a child). This means you should expect the amount of resources used per simulated mind to slowly increase over time. There are alternatives to this, of course, but they entail biting bullets I don't expect many people would find palatable. I also take it for granted that we, or any aliens also produced by evolution, will have a sense of self-preservation, such that they aren't choosing to die if better alternatives exist. The consequence is that you want your resource gathering to greatly outstrip your population growth rate, so you can build up an ever larger surplus for future personal growth and for use when the universe is much colder.

Nathan Smith:

Why should we think that the concept of a utility function would apply to a post-biological civilization?

Human utility functions come partly from biology, and maybe partly from spirit or soul. Neither cross-applies to AI.

Can we theorize about AI while regarding it as a tool?

Vakus Drake:

What's funny is that I've heard people coming from a wildly different perspective than yours argue that utility functions are actually just a useful fiction used to oversimplify human behavior, and that only an AI could have a "real" utility function.

Human behavior is kind of a hot mess driven by natural selection, which wasn't optimizing directly for philosophical consistency. That's why people will usually answer philosophical thought experiments in ways they couldn't justify if they had to think more about it.

The best example off the top of my head would be the difference between people's response to the regular trolley problem and their response to the fat-man-on-a-bridge variety of trolley problem. Another good example would be how people often act as though they believe in an absurd principle that has been dubbed "The Copenhagen Interpretation of Ethics" (https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics), where they will sometimes conclude that behaving altruistically is actually morally worse than not getting involved at all, even though their involvement made things better than they would otherwise be by any possible metric. The heuristics behind human behavior often need *a lot* of introspection to guide them into resembling a coherent utility function, especially when it comes to circumstances too weird for our evolutionary instincts to be well prepared for.

In contrast, AI is generally expected to be somewhat monomaniacal in pursuing a utility function, so I doubt you'd need something like behavioral economics to describe the ways it fails to act like a rational actor. Being able to rank world states according to how well they satisfy a utility function doesn't, in principle, require any internal experience.

A classic paperclipper with no internal experience would still have a utility function it used to rank potential world states based on how many paperclips they would contain, so that it can pick the option that, according to its utility function, maximizes paperclips.
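
A minimal sketch of that idea, with made-up candidate states and a paperclip-counting score:

```python
# A utility function here is nothing more than a scoring rule over world states;
# ranking by it requires no inner experience. States and counts are made up.
candidate_states = [
    {"name": "build factory", "paperclips": 1_000_000},
    {"name": "do nothing",    "paperclips": 100},
    {"name": "buy wire",      "paperclips": 50_000},
]

def paperclip_utility(state):
    return state["paperclips"]

best = max(candidate_states, key=paperclip_utility)
print(best["name"])  # the agent simply picks the highest-scoring option
```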

I had the AI making a lot of decisions simply because that's going to be more efficient, though at this level of abstraction I doubt it would make any difference if the AI were acting merely as an advisor and proxy for its humans (provided people aren't so irrational that they go against the honest advice of an AI that is both smarter and knows them better than they know themselves). At a certain level of intelligence, human involvement in the loop becomes essentially symbolic (provided the human is a rational economic agent).
