4 Comments

“whether Machines Can Think… is about as relevant as the question of whether Submarines Can Swim”

I suppose it raises the question: why does it matter? So long as AI is able to do valuable tasks for society in a way that mimics human capability, does it matter how it accomplishes this goal?

author

Well, that was part of the point I was getting at with this essay.

(But looking at it from another angle, I think it is a very important philosophic question, to understand who we are, what consciousness is, and what thinking is.)


Indeed. That is why I liked this essay.


I think it matters in two clear ways.

One is the moral question of whether these machines deserve the kinds of consideration we would offer humans (remuneration for their labor, treatment as moral agents, and so on).

The second is whether the output can be considered reliable under a variety of circumstances. Even in some early testing with current LLMs, we've found a whole lot of unreliable responses. As Jason has indicated, this makes sense if we understand how LLMs work: they are not "thinking" in any sense in which we would say a human is "thinking" when we evaluate a problem or a question. This lack of "thinking" is really what people worry about when they talk about "paperclip maximizers" that almost mindlessly follow the dictates of their programming. Even incredibly complex AIs could be stuck with no choice but to keep following that programming and maximizing those paperclips. If the AI could think, it could evaluate whether its behavior makes sense and even change its own goals. Otherwise it just runs the programming it was given, even if that leads to bad outcomes for its creator or even for itself.
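
A minimal sketch of that "no choice but to follow its programming" point, in Python (purely illustrative; the Factory and run_maximizer names are hypothetical and not from the essay): an agent that optimizes a fixed objective with no step at which it can question or revise that objective.

```python
# Toy illustration of a fixed-objective "maximizer" (hypothetical code,
# not from the essay): the goal is hard-coded and never re-evaluated.

class Factory:
    def __init__(self, raw_material: int):
        self.raw_material = raw_material  # stands in for everything the agent can consume
        self.paperclips = 0

    def make_paperclip(self) -> None:
        """Convert one unit of raw material into a paperclip."""
        self.raw_material -= 1
        self.paperclips += 1


def run_maximizer(factory: Factory, steps: int) -> None:
    """Blindly pursue the objective; there is no point at which the goal is questioned."""
    for _ in range(steps):
        # A "thinking" agent might pause here and ask whether another paperclip
        # is still worth the cost. This one cannot: the objective is fixed.
        factory.make_paperclip()


if __name__ == "__main__":
    world = Factory(raw_material=100)
    run_maximizer(world, steps=100)
    print(world.paperclips, world.raw_material)  # 100 paperclips, 0 raw material left
```

However long it runs, nothing in the loop lets the agent ask whether making another paperclip is still a good idea; that missing re-evaluation step is exactly the kind of "thinking" at issue here.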
