The scary part isn’t AI outsmarting us — it’s AI outpacing us before we’ve figured out the job description.
We’re promised a seat on the board of the future, but let’s hope it’s not just a ceremonial title. Progress is great, as long as we’re not just rubber-stamping whatever the algorithms hand us.
📌 The future of work: less busywork, more “Are you still in charge?” moments.
⬖ Reviewing the org chart at Frequency of Reason: bit.ly/4jTVv69
This is a good series and it is good that you acknowledge both implicitly and occasionally explicitly ("Stepping up to this challenge is worth it") that humans are the constraint and they need to "step up." But that's the hard part, as I've been saying from my corner.
Never said it would be easy!
> AIs can provide checks and balances on other AIs. If one AI system writes code, another can run a suite of tests.
The "good AI's to catch bad AI's" approach.
If your AI's are 100% trustworthy, you don't need it.
If your AI's are 100% evil, it can't save you.
For some intermediate values of semi-trustworthy, usually helpful AI, then the good AI's can actually catch the bad AI's.
The biggest factor is the technical details of how the AIs are built. Sure, society rearranging itself to keep AIs in check is somewhat useful on the margin. But only on the margin.
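The intermediate-trustworthiness point can be made concrete with a toy model (the probabilities here are illustrative assumptions, not anything from the article): a bad output slips through only if the writer AI produces it *and* an independent reviewer AI fails to flag it.

```python
# Toy model of "good AIs catching bad AIs" with made-up probabilities.
# Crucial assumption: the reviewer's failures are independent of the
# writer's -- collusion or correlated blind spots would break this.

def residual_failure(p_bad: float, p_catch: float) -> float:
    """Chance a bad output slips past one independent review pass."""
    return p_bad * (1.0 - p_catch)

# Fully trustworthy writer: review adds nothing (nothing to catch).
print(residual_failure(0.0, 0.9))                # 0.0
# Fully untrustworthy reviewer (catches nothing): review can't save you.
print(residual_failure(0.3, 0.0))                # 0.3
# Semi-trustworthy, usually-helpful AIs: review helps a lot.
print(round(residual_failure(0.3, 0.9), 3))      # 0.03
```

The interesting regime is the middle row: checks only buy you something when failures are neither absent nor total, and only so long as the independence assumption holds.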
You seem to have an implicit premise that humans adapt to the situation, coming up with new ways of keeping the AI under control, while the AI doesn't, at least not much. There is no serious AI effort to subvert the human control systems. No AI has hacked the global AI incident hotline. The safety logs that teams of human experts read haven't been fabricated. The AI incident fast-response force isn't being kept busy with a string of staged "incidents" while the real plan unfolds elsewhere.
> So if you are skeptical of “superintelligence,” think instead of cheap, abundant, reliable, scalable intelligence—a transformation analogous to what we went through with energy and physical work in the Industrial Revolution.
No. What I'm skeptical of is your assumption that AI will only be as big as the Industrial Revolution.
I think superintelligence is possible, indeed likely, and that you're underestimating it.
> To imagine the intelligence age, add to this 100 assistants performing cognitive labor for every human—managing our affairs, prototyping our ideas, prioritizing our correspondence, researching our questions.
That kinda makes sense, but it requires the AIs to be some combination of dumb and placid for the humans to stay in control.
> We should question and push back on AIs, making them justify their advice.
You're not getting how smart AI can be. Imagine a top string theorist trying to justify their choice of gauge (or some other very abstract piece of advanced maths) to the average 6-year-old. A superhuman AI can make decisions far too complicated for us to understand. There does not need to be an explanation that fits in a human brain.
But it's worse than that. If the AI knows every psychological trick in the book and is superhumanly manipulative and good at lying, it can probably convince almost anyone that the moon is made of green cheese.
An honest superintelligence will mostly be saying "it's too complicated, you wouldn't understand" (or giving an "explanation" so simplified as to be rubbish; see 90% of pop-sci quantum mechanics), whereas a dishonest AI can come up with all sorts of plausible lies.
Take your view. Take "AI will quickly become omnipotent". Average the two.
There absolutely is an AI algorithm which, if started today, would result in all humans becoming uploaded minds running on a Dyson sphere within a week.
Thanks for writing such a clarifying article. One question that intrigues me: if this feedback loop of acceleration continues, what happens to systems that were designed for an older infrastructure, like education, governance, even our sense of community and belonging? Do we keep scaling on top of those systems, or will something else emerge?