
As always, I find a lot to agree with here. However, I have an issue with the core framing. I don't know who these people are who want no safety at all.

You say: "I understand why many who are in favor of technology, growth, and progress see safety as the enemy." Even those of us who are most strongly pro-progress with AI do not see safety as the enemy. As you note, technological progress has allowed us to steadily increase safety overall. There may be some who "dismiss the risks" but mostly it's about seeing risks more realistically and in context with the benefits.

Those of us objecting to much of the recent pressure to centralize control of AI are concerned primarily about two things: 1. Too much safety (and at the wrong times). 2. The wrong kind of safety -- or safety measures.

Most of us on the "don't panic" side are not "downplaying the need for safety" -- if by that you mean portraying it as far lower than it is -- we are trying to get worriers, panickers, and doomers to see that they are probably playing up the risks too much. Especially right now, when LLMs are not superhuman AIs and have almost no control. We can improve our safety controls as things develop -- we cannot foresee ahead of time the details of effective and reasonable safety measures. If we stop everything until we have guaranteed "alignment", we will never proceed. The push is for too much safety too soon. Massive benefits from AI are not being properly weighed against risks.

What many of us are objecting to is not all safety measures, it's the wrong kind of safety measures, such as those being drafted in Europe. You noted the extremely baneful effect of regulation on nuclear power. We really, really should avoid the same heavy-handed, badly informed regulation that has all but killed nuclear power.

There is more to comment on, but I'll keep that for a post of my own.


I'm quite pro-technology, but given some parts of the internet, the aggressive erosion of women's rights during COVID, wealth inequality, populism, state control, etc.,

I'm highly cautious when AI is produced in environments that are largely male-dominated, money-driven, and prone to groupthink. I'm optimistic but highly, highly skeptical, because I intensely value my privacy. If it's a tool, when has man used a tool not to dominate? Or willingly given up power? That right to privacy seems to have diminished, if not disappeared, over the last two decades.


The "solutionism" you propose sounds a lot like old-school central planning or technocracy, and it will fail for the same reasons those failed.


The problem with attempting to do safety proactively is that one can't tell ahead of time what the actual safety risks are.


I am not seeing the debates framed as optimism vs. pessimism. I am seeing them as AI doom/hype vs. people talking about the actual harms in training and implementation right now. Focusing on future potential harms takes attention away from actual past and current harms: theft of IP, consolidation of power, amplification of bias, impacts on labor, environmental harms, etc. Any "solutionism" needs to actually address the current situation that big tech labs have put us in.
