
Phenomenal Shifts, Antienlightenment, The Buddha of Asininity

September 5th, 2010  |  Published in idiocy, mouth-flappin, vanity

I had a "moment" the other day. It was a roughly instantaneous, completely internal moment I'll refer to as a phenomenal shift. I have had a number of such moments in my life. They're really weird. I shall attempt here to describe the events, in the hopes that perhaps (a) you, the reader, have had such moments, and will optionally relay to me the existence of such moments; or (b) we can verify, at long last, that I've lost my mind.

These phenomenal shifts are mysteriously Big Things for me -- though not immediately interestingly causally efficacious per se, they haunt my general worldview in weird ways afterwards, coloring the way I think about Being, the way I tacitly cleave the ontic joints of the universe, &c, &c.

I'm really curious about whether this is an integral part of the human experience or whether none of you have any idea what I'm talking about.

What are you talking about?

The first phenomenal shift I experienced took place when I was a little kid -- six years of age, give or take. I took a look at myself in the mirror and realized, at that point, that I existed. It is as though, prior to that single moment of realization, I had been an automaton, interacting with the environment but not thinking too much about it. Very much like an adorable little Roomba, perhaps. Subsequently, I realized that I was a thing in the world (whereas previously I merely was one). I remember vividly being really freaked out by this.

That sounds idiotic.

Yeah, Strawman Interlocutor, you're right. Jeez. Let's see.

As a little kid, I hadn't read Husserl or Fichte or much of anything, so I couldn't put realizations into any sort of conceptual framework. All I could do was stare in the mirror and marvel at the fact that I existed. And I did so. I would sneak up to the mirror and stare into it for extended periods of time. Day after day. Marveling at the fact that I existed, that anything at all existed.

So?

So the crucial part of these "phenomenal shifts" is the single moment when you intuitively realize something big about Being, even if you can't immediately formally state exactly what that Something is. That's it. You're just minding your business and bam, you get hit over the head. With a Truth Pillowcase full of Reality Doorknobs.

I'm able, two decades later, to go back and couch this first, strange phenomenal shift in terms of my dubious oversimplification of some concepts put forth by Dead European philosophers. This is convenient, but not what I want to talk about here. The key, strange experience here is a sudden internalization of a fundamental part of existence -- namely, "I exist", or something along those lines.

So you're talking about Buddhic Enlightenment?

No.

No, you are. Siddhartha Gautama sat underneath a pipal tree until he had a single moment of enlightenment, in which he

Shut up, Strawman Interlocutor, just shut up.

What I am describing is basically a dumbed-down version of what the Buddha had after hanging out under his Bodhi tree, except instead of a wonderful, windless flash of Universal truth, I just get confused.

It's just as sudden, though -- an instantaneous, incorporeal tidal wave of nonpropositional and unreasonably inscrutable information.

So you've had more of these things?

Why yes, Strawman Interlocutor, I have. I've had a few of them in my life. The realizations have been similar -- sudden, big, vague, remarkably affecting thoughts, not immediately expressible in language.

The one I had a few days ago occurred while I was brushing my teeth. I was thinking about a number of humans I've known very well and bam, it hits me. Something along the lines of "you are a human being who exists, and your actions and personality are known extremely well to these other existent human beings, with whom you have had real interactions".

I forgot that I was brushing my teeth and just kinda stood there for a while. Probably ten seconds passed before I was able to get it together and move again.

It sounds idiotic, stupid, and completely banal. That's kind of the point, I think.

So what, I don't care, so what. I don't care.

There is, of course, nothing odd about a thought or mental state not being immediately expressible in language. I mean to say nothing here concerning the Sapir-Whorf hypothesis, concerning Wittgenstein, concerning Jerry Fodor, really concerning anything profound.

Again, nothing I say here is meant to be "profound". What I am attempting to do is convey the properties of a class of powerfully moving mental events which I have experienced a number of times.

That was all far too lucid and comprehensible, Karl. Would you mind muddying the conceptual waters a bit?

Look, this is a very real mental phenomenon which has taken place in my flesh-and-blood brain a number of times, and it's wigged me out, and I'm wondering if it happens to other people (i.e. it's a part of the human experience) or if it just happens to me (i.e. it's a part of the guy-who-has-to-paint-his-face-like-a-clown-every-night-before-he-goes-to-sleep experience).

Why I do not accept the Chinese Room Argument

June 18th, 2010  |  Published in mouth-flappin, noise-makin, philosophy, vanity

John Searle's Chinese room argument is a wonderful little argument which concludes that no artificial intelligence can possibly have true "understanding" of a human language (and thus, by implication, we cannot have real "strong artificial intelligence", as a strong AI must be at least as mentally adept as, say, the average human, who does understand language). I do not accept the argument. I will briefly describe why.

There are two important things to note before we get going. First, there are many wonderful and remarkably subtle replies to the CRA (and responses to those, and responses to the responses, etc.); I give only one of them here (it is the one which really convinces me of the falsity of the CRA). Second, none of the arguments here are my own, though I'm not quite sure where to give attribution.

The Chinese Room Argument

I won't rehash the argument here. If you wish to read on, you should understand it, however. The Wikipedia article has a nice readable statement of the argument you might want to check out.

One Reply

This isn't a head-on attack on the CRA; it is instead an argument I find very compelling, whose conclusion contradicts that of the CRA. You will have to wait, with bated breath, I am certain, for a critique of exactly what I believe to be wrong about the CRA.

Suppose we have Jim, who is a native Chinese speaker. Suppose that, for each cell in Jim's brain (and brainstem, and whatever else in the body may affect cognition), we have a tiny cell-sized and cell-shaped microcomputer, which has the same input and output as the cell (takes and emits the same molecules, connects to other cells in the same way, has the same properties as the original cell with respect to electrical current, etc.). The cell-microcomputer was designed in a lab and runs on silicon.

Suppose we replace one cell from Jim's brain with its corresponding microcomputer. Jim, at this point, feels the same and acts the same. He understands Chinese as well as a native speaker can (just as before the cell transplantation). He feels like the same person, he has the same cognitive abilities, etc.

Suppose we continue this process of cell replacement, cell by cell. At each point, Jim retains his understanding of Chinese (recall that we assumed the microcomputers behave exactly like the original cells; it follows that Jim behaves exactly the same with the microcomputers as he did with the cells).

Eventually, Jim will have no original brain cells left. However, he will still converse with others in Chinese as a native speaker, with memories, intelligence, emotions, and world knowledge. He understands Chinese no better and no worse than he did before the operation.

The situation, then, is one in which an individual with a wholly robotic brain, made from silicon components fabricated in a clean room, understands Chinese.

If I have one neuron replaced and I feel like the exact same person and you tell me that I don't "actually" understand English because I'm not a real human any more, I will not be happy. If I have two neurons replaced and you tell me I don't understand English, I will not be happy. The same holds after an arbitrary number of neuronal replacements.

We then simulate this wholly robotic brain on a single computer (we can certainly do this). The simulation behaves identically with respect to any interaction in Chinese. If you admit Jim understands Chinese after the operation, you surely also admit the simulation understands Chinese (unless your definition of "understand" is contrived -- something like "has sufficient knowledge encoded in neurons").

We encode this program in a human-readable format, print it in a book, and give it to Searle in his room. Bam! The Chinese room understands Chinese.

We thus reject the CRA by reductio ad absurdum.

The Response

Searle's response to this argument would be, I believe, that we should imagine our electronic brain implemented not on silicon with electrical currents running through it, but entirely in some system of buckets of water and pulleys in a very large room, such that its operation gives us the same result as the computer (we can surely come up with such a system). That is, you give the system a sentence encoded somehow in buckets of water, the system works for a bit, and you get an encoded output. It is absurd to assert that these buckets of water understand Chinese; therefore the reply to the CRA must be false.

This is, of course, a strawman argument, but I'm pretty sure Searle did give a response very much like this to the reply (or a similar one).

Why The Response Fails to Convince Me

In the situation given in the response, we ask whether it is possible that the complex interactions of many buckets filled with water and attached to various types of pulleys can give rise to an understanding of Chinese. Frankly, this strikes me as no less absurd than asserting that meat can give rise to this understanding. And yet, as a materialist, I certainly think that meat (a bunch of atoms composing molecules, which compose more complex structures such as organelles, which compose cells) can give rise to an understanding of Chinese.

That is, the response is, I believe, little more than a statement of the prima facie absurdity of the understanding of Chinese implemented in a system of buckets of water and pulleys; I am asserting that this is no less absurd than something we observe every day -- true understanding of language implemented in meat.

(Basically, my rebuttal is "multiple realizability creates monsters? So what." This is, in truth, extremely ham-fisted, but it is late and I am tired.)

Update: I might be wrong.