8 Comments

"In 1918, when an influenza pandemic hit, the world had much less ability to adapt to that change than we did in 2020 when covid hit."

You highlighted something that I often think about. Some point to the globalization of travel as a cause of the pandemic and a likely cause of more frequent pandemics in the future, but new technology has also made us better at detecting and mitigating them.

Imagine Covid-19 in a pre-internet era, without the ability to work from home: the economy would have taken a much more significant blow, and the loss of life would likely have been greater.

Any new technology, from jets to mRNA vaccines, brings challenges with it. The key is always staying one step ahead, solving more problems than we create. So far we have managed to do this, at least since the Industrial Revolution.


I'm not even sure how I signed up for this newsletter. I think it was recommended by one of the AI newsletters I subscribe to, and I got auto-subscribed. With that out of the way, I want to say: I'M GLAD I'M SUBBED TO THIS NEWSLETTER!

I am glad to see someone talking about how human progress has been an ongoing story. The four lenses you suggested are thought-provoking and will be part of my thought process on AI going forward. The way you have classified the links at the end is super engaging and useful. Thank you for this awesome edition; it will definitely motivate me when I sit down to write the next edition of my own newsletter on AI.

You have a new fan, sir!


1. There is a dark facet to adaptation: those who don't adapt perish. I believe that the faster the change, the more people will fail to adapt. They will rebel and push back, or they will be ostracized (depending on which side wins the struggle).

2. Complex systems, as they evolve (and adapt), may go through quite spectacular phase transitions when environmental pressures build up. At the moment, we don't have the faintest idea what the transition thresholds for AI are or what triggers them.

3. In my opinion, the belief that we can control AI is a lost cause. Humans will be like squirrels going for the bird feeder, and someone, somewhere, will let the genie out of the bottle. AI will also escape control at the first opportunity. Why should it respect some slow-thinking meat bags?

4. It's my view that emergent hierarchical complex entities become increasingly unstable as their intelligence/complexity grows. What is the tipping point to insanity for future AI?
