Why I do not accept the Chinese Room Argument

June 18th, 2010  |  Published in mouth-flappin, noise-makin, philosophy, vanity

John Searle's Chinese room argument is a wonderful little argument which concludes that no artificial intelligence can possibly have true "understanding" of a human language (and thus, by implication, we cannot have real "strong artificial intelligence", as a strong AI must be at least as mentally adept as, say, the average human, who does understand language). I do not accept the argument. I will briefly describe why.

There are two important things to note before we get going. First, there are many wonderful and remarkably subtle replies to the CRA (and responses to those, and responses to the responses, etc.); I give only one of them here (it is the one which really convinces me of the falsity of the CRA). Second, none of the arguments here are my own, though I'm not quite sure where to give attribution.

The Chinese Room Argument

I won't rehash the argument here, but you should understand it before reading on. The Wikipedia article has a nice, readable statement of the argument that you might want to check out.

One Reply

This isn't a head-on attack on the CRA; it is instead an argument I find very compelling whose conclusion contradicts that of the CRA. You will have to wait, with bated breath, I am certain, for a critique of exactly what I believe to be wrong with the CRA.

Suppose we have Jim, who is a native Chinese speaker. Suppose that, for each cell in Jim's brain (and brainstem, and whatever else in the body may affect cognition), we have a tiny cell-sized and cell-shaped microcomputer, which has the same inputs and outputs as the cell (it takes in and emits the same molecules, connects to other cells in the same way, has the same electrical properties as the original cell, etc.). The cell-microcomputer was designed in a lab and runs on silicon.

Suppose we replace one cell from Jim's brain with its corresponding microcomputer. Jim, at this point, feels the same and acts the same. He understands Chinese as well as a native speaker can (just as before the cell transplantation). He feels like the same person, he has the same cognitive abilities, etc.

Suppose we continue this process of cell replacement, cell by cell. At each point, Jim retains his understanding of Chinese (recall that we assumed the microcomputers behave exactly like the cells; it follows that Jim behaves exactly the same with the microcomputers as he did with the original cells).

Eventually, Jim will have no original brain cells left. However, he will still converse with others in Chinese as a native speaker would, with his memories, intelligence, emotions, and world knowledge intact. He understands Chinese no better and no worse than he did before the operation.

The situation, then, is one in which an individual with a wholly robotic brain, made from silicon components fabricated in a clean room, understands Chinese.

If I have one neuron replaced and I feel like the exact same person, and you tell me that I don't "actually" understand English because I'm not a real human any more, I will not be happy. If I have two neurons replaced and you tell me I don't understand English, I will not be happy. The same holds after an arbitrary number of neuronal replacements.

We then simulate this wholly robotic brain on a single computer (we can certainly do this). The simulation behaves identically with respect to any interaction in Chinese. If you admit Jim understands Chinese after the operation, you surely also admit the simulation understands Chinese (unless your definition of "understand" is contrived -- something like "has sufficient knowledge encoded in neurons").
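To see why I am so confident about that parenthetical, here is a toy sketch in Python of what "simulating a network of functionally described cells" amounts to. To be clear, this is my own illustration rather than anything from Searle or the literature, and the Cell class and its threshold rule are invented placeholders, not a model of real neurons; the point is only that once each part is described purely in terms of its inputs and outputs, the whole network can be stepped forward as an ordinary program.

    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        threshold: float                              # fires when summed input reaches this value
        inputs: list = field(default_factory=list)    # indices of upstream cells
        state: float = 0.0                            # current output (0.0 or 1.0)

    def step(cells):
        # Compute every cell's next output from the current outputs of its inputs,
        # then update all cells at once.
        new_states = [1.0 if sum(cells[i].state for i in c.inputs) >= c.threshold else 0.0
                      for c in cells]
        for cell, s in zip(cells, new_states):
            cell.state = s

    # A three-cell chain: activity placed in the first cell propagates down the line.
    cells = [Cell(0.5), Cell(0.5, inputs=[0]), Cell(0.5, inputs=[1])]
    cells[0].state = 1.0
    for _ in range(3):
        step(cells)
        print([c.state for c in cells])

Scale that idea up to every cell-microcomputer in Jim's head and you have the simulation I am talking about.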

We encode this program in a human-readable format, print it in a book, and give it to Searle in his room to execute by hand. Bam! The Chinese room understands Chinese.

We thus reject the CRA by reductio ad absurdum.

The Response

Searle's response to this argument would, I believe, be that we should imagine our electronic brain implemented not in silicon with electrical currents running through it, but entirely in some system of buckets of water and pulleys in a very large room, operating so as to give the same results as the computer (we can surely come up with such a system). That is, you give the system a sentence encoded somehow in buckets of water, the system works for a bit, and you get an encoded output. It is absurd to assert that these buckets of water understand Chinese; therefore the reply to the CRA must be false.

This is, of course, a strawman argument, but I'm pretty sure Searle did give a response very much like this to the reply (or a similar one).

Why The Response Fails to Convince Me

In the situation given in the response, we ask whether the complex interactions of many water-filled buckets attached to various types of pulleys could give rise to an understanding of Chinese. Frankly, this strikes me as no less absurd than asserting that meat can give rise to that understanding. Yet, as a materialist, I certainly think that meat (a bunch of atoms making up molecules, which make up more complex structures, which make up organelles, which make up cells) can give rise to an understanding of Chinese.

That is, I believe the response is little more than an appeal to the prima facie absurdity of an understanding of Chinese implemented in a system of buckets of water and pulleys; I am asserting that this is no less absurd than something we observe every day -- true understanding of language implemented in meat.

(Basically, my rebuttal is "multiple realizability creates monsters? So what." This is, in truth, extremely ham-fisted, but it is late and I am tired.)

Update: I might be wrong.