9 Comments
trey daniel

The Winecoff Hotel in Atlanta advertised itself as “absolutely fireproof”. Instead, it became the site of the deadliest hotel fire in American history when it burned in 1946. (It may even have held the record for the world’s deadliest hotel fire for a while?) A number of safety regulations naturally resulted.

Highly recommend the book The Winecoff Fire by Sam Heys and Allen Goodwin. Not so much for a history of hotel & building fire regs, though that is certainly covered, but for the authors’ ability to tell the stories of all the hotel guests there at the time, as well as how some families found out that their children, parents, etc. were victims, or might be, and how they responded. It was significantly more relatable than I would have expected.

Joshua Delos Reyes

"Safety will be created gradually, incrementally..." I agree. The huge challenge now is for us to be quick enough to come up with the needed layers of defense (especially when it comes to AI and biotech). It's a huge challenge because the impact of rapid developments in AI and biotech can be instantly global, in contrast to the past developments which were slow and local at first.

Josh Holder

I echo this take — I think at the end of the day you’ll always need defense in depth. The story of Waymo is also a nice case study of this: https://open.substack.com/pub/joshholder/p/safety-without-understanding-lessons?r=ksci&utm_medium=ios&shareImageVariant=overlay

Jack CRAWFORD

As an ex-volunteer firefighter from many years ago, I learned a lot from your article. I recently realized that we weren't even given training on what to do when anybody smelled natural gas anywhere.

Neural Foundry

The defense-in-depth framing really lands here. What gets missed in the "find the silver bullet" approach is how solutions need to coevolve with each other. Fire hydrants only work if nozzle design matches pressure requirements, which only matters if response time is fast enough. That interdependence is what creates resilience, not individual components. Reminds me of systems thinking in engineering, where failure modes get discovered at integration points, not in individual subsystems.

Kevin

This is an interesting idea. Maybe this is a bit of a tangent, but so many of the danger scenarios for unsafe AI involve the AI hacking into some computer system. The general problem of "making an AI safe" might be intractable, but this narrow sub-case of "make a computer system that is unhackable by any AI" seems a lot more tractable. We have concepts like sandboxing, firewalls, and virtual machines, but bugs keep turning up in them over time. We do have some theoretical-but-still-impractical methods of proving a system has zero bugs, rather than squashing bug after bug. I feel like we should go in that direction....
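To make that concrete, here's a minimal sketch (Python on a POSIX system; the limit values and the child command are arbitrary illustrations, not a vetted configuration) of stacking OS-level resource limits under an untrusted subprocess. It's nowhere near "unhackable", which is exactly the point: each layer only narrows the attack surface, and real isolation would add seccomp filters, namespaces, and a VM boundary on top.

```python
# Minimal sketch, NOT a hardened sandbox: several independent OS limits
# layered under one untrusted child process. Each layer narrows the
# damage a misbehaving (or malicious) program can do if the others fail.
import resource
import subprocess

def _apply_limits():
    # Layer 1: cap CPU time at 2 seconds of compute.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    # Layer 2: cap the address space at 256 MiB to bound memory use.
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
    # Layer 3: forbid fork(), so the child can't spawn helper processes.
    resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))

# Layer 4: a wall-clock timeout enforced from outside the child.
result = subprocess.run(
    ["python3", "-c", "print('untrusted code ran')"],
    preexec_fn=_apply_limits,  # POSIX-only; runs in the child before exec
    capture_output=True,
    timeout=5,
)
print(result.stdout.decode())
```

Every one of these mechanisms has needed patching over the years, which is exactly the squash-bug-after-bug dynamic; proving the absence of bugs is the formal-verification direction I mean.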

Timber Stinson-Schroff

They should have aligned the artificial combustion

Rainbow Roxy

Fascinating. Your point about defense in depth and orchestrating solutions for safety really resonates. I'd be curious to hear more about how this framework applies to contemporary, complex risks, like in cybersecurity or AI safety.

Shoni

Interesting history! Nowadays we think of forest fires as the biggest issue and consider ourselves relatively safe in the cities. Cool to see how that came to be so.

With AI, it's maybe a bit more complicated, because our understanding of its nature is so much weaker than our understanding of fire.