
As always, I find a lot to agree with here. However, I have an issue with the core framing. I don't know who these people are who want no safety at all.

You say: "I understand why many who are in favor of technology, growth, and progress see safety as the enemy." Even those of us who are most strongly pro-progress with AI do not see safety as the enemy. As you note, technological progress has allowed us to steadily increase safety overall. There may be some who "dismiss the risks," but mostly it's about seeing the risks more realistically and in the context of the benefits.

Those of us objecting to much of the recent pressure to centralize control of AI are concerned primarily about two things: 1. Too much safety (and at the wrong times). 2. The wrong kind of safety -- or the wrong safety measures.

Most of us on the "don't panic" side are not "downplaying the need for safety" -- if by that you mean portraying it as far lower than it is. Rather, we are trying to get worriers, panickers, and doomers to see that they are probably playing up the risks too much -- especially right now, when LLMs are not superhuman AIs and have almost no control. We can improve our safety controls as things develop; we cannot foresee ahead of time the details of effective and reasonable safety measures. If we stop everything until we have guaranteed "alignment", we will never proceed. The push is for too much safety too soon, and the massive benefits from AI are not being properly weighed against the risks.

What many of us are objecting to is not all safety measures but the wrong kind of safety measures, such as those being drafted in Europe. You noted the extremely baneful effect of regulation on nuclear power. We really, really should avoid the same heavy-handed, badly informed regulation that has all but killed nuclear power.

There is more to comment on, but I'll keep that for a post of my own.


Good point. Maybe I should have said that many see *concern* about safety as the enemy.

And I agree that it would be bad to centralize control of AI, or to overly regulate it, or to stop (or “pause”) development.


I'm quite pro-technology. But consider some parts of the internet, the aggressive decline of women's rights during COVID, wealth inequality, populism, state control, and so on.

I'm highly cautious when AI is produced in environments that are largely male-dominated, money-driven, and prone to groupthink. I'm optimistic but highly skeptical, because I intensely value my privacy. If it's a tool, when has man used a tool not to dominate? Or given up power? That right seems to have diminished, if not disappeared, over the last two decades.

Comment deleted (Jun 10, 2023)

I'm quite sure I never implied state control was a favorable option. You have populists in all sectors, including tech, with some very disturbing views of the world, like open censorship. My assessment is that people with power, in any capacity, tend not to give it up, and certainly don't use it only for the benefit of others. Your choice of the word "weapon" is quite telling. I'm not assessing AI as a weapon, though I have often been called naive.

The state exists, and should exist, in some limited capacity. Given that, IF the individuals we elect have the capacity to regulate, they should. It should be a collaborative process that produces the best outcomes.


The "solutionism" you propose sounds a lot like old school central planning or technocracy, and will fail for the same reason those failed.


Why do you assume the solutions must be centralized? I didn't say that and didn't mean to imply it.


It's implicit in your use of the collectivist plural pronoun.

> The best path forward, both for humanity and for the political battle, is to acknowledge the risks, help to identify them, and come up with a plan to solve them. How do **we** develop safe AI? And how do **we** develop AI safely?

I'm not sure who you intended by "we" -- for all I know you have a tapeworm -- but in practice it would always boil down to, at best, several "elite solutionists" and, more likely, a bunch of government bureaucrats.


I see. That's not how I meant it. I just meant that in a generic sense, as in: how can safe AI be developed? How can AI be developed safely? I didn't intend, and don't expect, that it will be done in centralized fashion.


Ok, that's the collectivist passive voice, which is just as bad.

> I didn't intend, and don't expect, that it will be done in centralized fashion.

Well, all the AI safetyists believe in a centralized approach. Eliezer has even suggested nukes be used against countries that refuse to take action against "rogue developers".


The problem with attempting to do safety proactively is that one can't tell ahead of time what the actual safety risks are.


I am not seeing the debates framed as optimism vs. pessimism. I am seeing them framed as AI doom/hype vs. people talking about the actual harms in training and implementation happening right now. Focusing on potential future harms takes attention away from actual past and current harms: theft of IP, consolidation of power, amplification of bias, impacts on labor, environmental harms, etc. Any “solutionism” needs to actually address the current situation that big tech labs have put us in.

Comment deleted (Jun 10, 2023)

I tried to embed a graphic, but it didn't work; there's a nice shorthand, with a bit more detail on what I'm talking about, at the link below.

The IP issue is easy to dismiss if you aren't an artist. Saying "well, the company just stole a lot of stuff, so we just have to live with it" is a pretty irresponsible approach -- and remember, the conversation about hypothetical future harms is being led by the same people who committed that theft. Saying "these problems existed before, and the regulation isn't good enough to reduce harms anyway" is similarly irresponsible in my opinion -- especially since the bias is hidden in the dataset, which isn't visible to users or regulators. Putting companies that haven't been attentive to current harms in charge of mitigating future harms doesn't seem like good planning, and yet who is at the table, nationally and internationally, to mitigate imagined future harms? The CEOs of OpenAI, Anthropic, etc. So. That's some of why some people are pretty worked up about the focus on *hypothetical* futures (for capabilities that don't yet exist) vs. past and current harms.

https://www.universityaffairs.ca/opinion/in-my-opinion/chatgpt-we-need-to-talk-about-llms/

Comment deleted (Jun 10, 2023)

Hmm, I don't consider myself to be on the “safetyist” side! I'm trying to say that there is a false dichotomy between safetyism and anti-safetyism.

For instance, a lot of the work that was done on nuclear safety was very good! Containment structures and other safety engineering are part of the reason there were no casualties from Three Mile Island. What was bad was adopting a review-and-approval model for nuclear that had no incentive to balance costs and benefits.

For AI, I want to avoid a regulatory agency that would do what the AEC/NRC did to nuclear, but I want effective safety engineering around the systems we build.
