Have you been influenced by my writing? Have I changed your mind on anything, added a new term to your vocabulary, or inspired you to start a project? I’d love to hear the details! Please comment, reply, or email me with a testimonial.
In this update:

- Do we get better or worse at adapting to change?
- Four lenses on AI risks
- Links and tweets
Full text below. If it gets cut off in your email, click the headline above to read on web.
Do we get better or worse at adapting to change?
Vernor Vinge, in a classic 1993 essay, described “the Singularity” as an era where progress becomes “an exponential runaway beyond any hope of control.”
The idea that technological change might accelerate to a pace faster than we can keep up with is a common concern. Almost three decades earlier, Alvin Toffler coined the term “future shock”, defining it as “the dizzying disorientation brought on by the premature arrival of the future”:
I believe that most human beings alive today will find themselves increasingly disoriented and, therefore, progressively incompetent to deal rationally with their environment. I believe that the malaise, mass neurosis, irrationality, and free-floating violence already apparent in contemporary life are merely a foretaste of what may lie ahead unless we come to understand and treat this psychological disease….
Change is avalanching down upon our heads and most people are utterly unprepared to cope with it….
… we can anticipate volcanic dislocations, twists and reversals, not merely in our social structure, but also in our hierarchy of values and in the way individuals perceive and conceive reality. Such massive changes, coming with increasing velocity, will disorient, bewilder, and crush many people.
(Emphasis added. Toffler later elaborated on this idea in a book titled Future Shock.)
Change does indeed come ever faster. But most commentary on this topic assumes that we will therefore find it ever more difficult to adapt.
Is that actually what has happened over the course of human history? At first glance, it seems to me that we have actually been getting better at adapting, even relative to the pace of change.
Some examples
Our Stone Age ancestors, in nomadic hunter-gatherer tribes, had very little ability to adapt to change. Change mostly happened very slowly, but flood, drought, or climate change could dramatically impact their lives, leaving them no option but to wander in search of a better land.
Mediterranean kingdoms in the Bronze Age had much more ability to adapt to change than prehistoric tribes. But they were unable to handle the changes that led to the collapse of that civilization in the 12th century BC (the Late Bronze Age Collapse). No civilizational collapse on that scale has happened since the Dark Ages.
The printing press ultimately helped amplify the theological conflict that led to over a century of religious wars; evidently, 16th-century Europe found it very difficult to adapt to a new ability for ideas to spread. The Internet has certainly created some social turmoil, and we’re only about 30 years into it, but so far I think its negative impact is on track to be less than a hundred years of war engulfing a continent.
In the 1840s, when blight hit the Irish potato, it caused a million deaths, and another million people emigrated; in total Ireland lost a quarter of its population, a loss from which it has still not recovered. Has any modern event caused comparable population loss in any developed country?
In 1918, when an influenza pandemic hit, the world had much less ability to adapt to that change than we did in 2020 when covid hit.
In the 20th century, people thrown out of work read classified ads in the newspapers or went door-to-door looking for jobs. Today, they pick up an app and sign up for gig work.
What about occupational hazards from dangerous substances? Matches using white phosphorus, invented in 1830, caused necrosis of the jaw in factory workers, but white phosphorus was not widely banned until 1912, more than 80 years later. Contrast this with radium paint, used to make glow-in-the-dark dials starting around 1914, which also caused jaw necrosis. I can’t find exactly when radium paint was phased out, but it seems to have been by 1960 or maybe 1970: at most 56 years, faster than we reacted to phosphorus. (If we went back further to look at occupational hazards that existed in antiquity, such as smoke inhalation or lead exposure, I think we would find that they were not addressed for centuries.)
These are just some examples I came up with off the top of my head; I haven’t done a full survey and I may be affected by confirmation bias. Are there good counterexamples? Or a more systematic treatment of this question?
Why we get better at adapting to change
The concern about change happening faster than we can adapt seems to assume that our adaptation speed is fixed. But it’s not. Our adaptation speed increases, along with the speed of other types of change. There are at least two reasons:
First, detection. We have a vast scientific apparatus constantly studying all manner of variables of interest to us—so that, for instance, when new chemicals started to deplete the ozone layer, we detected the change and forecast its effects before widespread harm was done. At no prior time in human history would this have been possible.
Second, response. We have an internet to spread important news instantly, and a whole profession, journalists, who consider it their sacred duty to warn the public of impending dangers, especially dangers from technology and capitalism. We have a transportation network to mobilize people and cargo and rush them anywhere on the globe they are needed. We have vast and flexible manufacturing capacity, powered by a robust energy supply chain. All of this creates enormous resilience.
Solutionism, not complacency, about adaptation
Even if I’m right about the trend so far, there is no guarantee that it will continue. Maybe in the near future the pace of change will accelerate beyond our ability to adapt. But I now think that if that happened, it would be the reversal of a historical trend, rather than an exacerbation of an already-worsening problem.
I am still sympathetic to the point that adaptation is always a challenge. But now I see progress as helping us meet that challenge, as it helps us meet all challenges.
Toffler himself seemed to agree, ending his essay on a solutionist note:
Man’s capacity for adaptation may have limits, but they have yet to be defined. … modern man should be able to traverse the passage to postcivilization. But he can accomplish this grand historic advance only if he forms a better, clearer, stronger conception of what lies ahead.
Amen.
Original post: https://rootsofprogress.org/adapting-to-change
Four lenses on AI risks
All powerful new technologies create both benefits and risks: cars, planes, drugs, radiation. AI is on a trajectory to become one of the most powerful technologies we possess; in some scenarios, it becomes by far the most powerful. It therefore will create both extraordinary benefits and extraordinary risks.
What are the risks? Here are four lenses for thinking about AI risks, each putting AI in a different reference class.
As software
AI is software. All software has bugs. Therefore AI will have bugs.
The more complex software is, and the more poorly we understand it, the more likely it is to have bugs. AI is so complex that it cannot be designed, but only “trained”, which means we understand it very poorly. Therefore it is guaranteed to have bugs.
You can find some bugs with testing, but not all. Some bugs can only be found in production. Therefore, AI will have bugs that will only be found in production.
We should think about AI as complicated, buggy code, especially to the extent that it is controlling important systems (vehicles, factories, power plants).
As a complex system
The behavior of a complex system is highly non-linear, and it is difficult (in practice, impossible) to understand fully.
This is especially true of the system’s failure modes. A complex system, such as the financial system, can seem stable but then collapse quickly and with little warning.
We should expect that AI systems will be similarly hard to predict and could easily have similar failure modes.
As an agent with unaligned interests
Today’s most advanced AIs—chatbots and image generators—are not autonomous agents with goal-directed behavior. But such systems will inevitably be created and deployed.
Anytime you have an agent acting on your behalf, you have a principal–agent problem: the agent is ultimately pursuing its own goals, and it can be hard to align those goals with yours.
For instance, the agent may tell you that it is representing your interests while in truth optimizing for something else, like a demagogue who claims to represent the people while actually seeking power and riches.
Or the agent can obey the letter of its goals while violating the spirit, by optimizing for its reward metrics instead of the wider aims those metrics are supposed to advance. An example would be an employee who aims for promotion, or a large bonus, at the expense of the best interests of the company. Referring back to the first lens, AI as software: computers always do exactly what you tell them, but that isn’t always exactly what you want.
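To make that letter-versus-spirit failure concrete, here is a minimal toy sketch in Python (my own illustration, not from any real AI system; the action names and reward numbers are invented): an optimizer that sees only the proxy metric will reliably pick the action that games it.

```python
# Toy illustration of reward hacking: the agent maximizes a measured
# proxy reward, which diverges from the true value we care about.
# (All names and numbers here are hypothetical.)

actions = {
    # action:              (proxy_reward, true_value)
    "do_the_real_work":    (8, 10),
    "inflate_the_metric":  (15, 0),   # scores well, accomplishes nothing
}

def greedy_choice(acts):
    """Pick whichever action scores highest on the *measured* metric."""
    return max(acts, key=lambda a: acts[a][0])

chosen = greedy_choice(actions)
print(chosen)                             # -> inflate_the_metric
print("true value:", actions[chosen][1])  # -> 0
```

Any optimizer judged only by the proxy behaves this way; the fix has to come from better metrics and outside scrutiny, not from the agent itself.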
Related: any time you have a system of independent agents pursuing their own interests, you need some rules for how they behave to prevent ruinous competition. But some agents will break the rules, and no matter how much you train them, some will learn “follow these rules” and others will simply learn “don’t get caught.”
People already do all of these things: lie, cheat, steal, seek power, game the system. In order to counteract them, we have a variety of social mechanisms: laws and enforcement, reputation and social stigma, checks and balances, limitations on power. At minimum, we shouldn’t give AI any more power or freedom, with any less scrutiny, than we would give a human.
As a separate, advanced culture or species
In the most catastrophic hypothesized AI risk scenarios, the AI acts like a far more advanced culture, or a far more intelligent species.
In the “advanced culture” analogy, AI is like the expansionary Western empires that quickly dominated all other cultures, even relatively advanced China. (This analogy has also been used to hypothesize what would happen on first contact with an advanced alien species.) The best scenario here is that we assimilate into the advanced culture and gain its benefits; the worst is that we are enslaved or wiped out.
In the “intelligent species” analogy, the AI is like humans arriving on the evolutionary scene and quickly dominating Earth. The best scenario here is that we are kept like pets, with a better quality of life than we could achieve for ourselves, even if we aren’t in control anymore; the worst is that we are exploited like livestock, exterminated like pests, or simply driven extinct through accidental neglect.
These scenarios are an extreme version of the principal–agent problem, in which the agent is far more powerful than the principal.
How much you are worried about existential risk from AI probably depends on how much you regard these scenarios as “far-fetched” vs. “obviously how things will play out.”
I don’t yet have solutions for any of these, but I find these different lenses useful both to appreciate the problem and take it seriously, and to start learning from the past in order to find answers.
I think these lenses could also be useful to help find cruxes in debates. People who disagree about AI risk might disagree about which of these lenses they find plausible or helpful.
Original post: https://rootsofprogress.org/four-lenses-on-ai-risks
Links and tweets
The Progress Forum
Wizards and prophets of AI. I posted this for comment, then decided to rewrite it, then ended up posting the core argument of the new essay as a Twitter thread, then got replies to that thread, and now I don’t know what to write anymore. I’ll post something about this on the blog at some point, but if you want my half-baked, outdated thoughts, you can read those links.
AMA: Mark Khurana, author of The Trajectory of Discovery: What Determines the Rate and Direction of Medical Progress?
Opportunities
Loyal (longevity) is hiring a full-stack software engineer (via @celinehalioua). They are also looking for founders/execs to speak at their onsite
Nat Friedman wants to meet people who are doing technical alignment work
News
RIP Gordon Moore. “We at Intel remain inspired by Moore’s Law and intend to pursue it until the periodic table is exhausted.” Gordon’s 1965 paper: “Integrated circuits will lead to such wonders as home computers… and personal portable communications equipment.” Also, the story of the microprocessor
Last Energy to sell 24 small modular nuclear reactors to UK, $5/W (via @Atomicrod)
Law firm Cooper & Kirk accuses regulators of an attack on crypto
Announcements
Discord for discussion of futuristic tech: nanotech, longevity, etc. (via @kanzure)
ChatGPT plugins (via @sama and @gdb). Example: processing a video clip
Interviews
AI
Bill Gates: AI is the most impressive technology since the GUI (via @BillGates)
Scott Aaronson on AI risk (or see my excerpt)
How do you integrate your API with an AI? You just give it your docs
A GPT prompt: “no-nonsense teacher with an ambitious, self-directed student”. And another GPT prompt, to cut the caveats and pleasantries
GPT has trouble counting backwards (or maybe it’s just Markdown?)
Other links
Lithium, once expensive, is cheap again. Hooray for markets (via @scottlincicome)
The evidence for smartphones / social media harming kids’ mental health is weak
Quotes
“The shapes arise!” Walt Whitman and the poetry of progress. Also, Hart Crane
The evolution of our buildings: from exoskeletons to internal skeletons
The use of mathematics in biology was important as early as 1616
“A crisis of values confronted liberals in the mid-thirties”
“What matters in the long run is whether growth is sustained”
In 2006, Steven Johnson predicted what happened during covid
More tweets
A positive vision for biology: “what we’ll create and do once all disease is gone”
Twitter created a socially acceptable way to publish a single sentence
"In 1918, when an influenza pandemic hit, the world had much less ability to adapt to that change than we did in 2020 when covid hit."
You highlighted something that I often think about. Some point to the global internationalization of travel as a cause of the pandemic and likely cause of more frequent future pandemics, but new technology has also made us better at detecting and mitigating pandemics as well.
Imagine Covid-19 in a pre-internet era, without the ability to work from home... the economy would have taken a much more significant blow, and the loss of life likely greater.
Any new technology, from jets to mRNA vaccines, brings with it challenges. The key is always being one step ahead, solving more problems than we create. So far, we have been able to do this, at least since the industrial revolution.
I'm not even sure how I signed up to this newsletter. I think it was recommended by one of the AI newsletters I signed up for and got auto signed up to it. With that out of the way, I want to say - I'M GLAD I'M SUBBED TO THIS NEWSLETTER!
I am glad to see someone talking about how human progress has been an ongoing story. The four lenses you suggested are thought-provoking and will be a part of my thought-process on AI going forward. The way you have classified the links at the end is super engaging and useful. Thank you for this awesome edition and it will definitely motivate me when I sit to write the next edition of my newsletter on AI.
You have a new fan, sir!