Can computers talk? Right now, no. Natural Language Processing — the field of Artificial Intelligence and Linguistics that deals with computers using language (natural language, not C++ or BASIC) — has made strides in the last decade, but the best programs still frankly suck.
Will computers ever be able to talk? And I don’t mean Alex the Parrot talk. I mean speak, listen and understand just as well as humans. Ideally, we’d like something like a formal proof one way or another, the way Turing proved that it is impossible to write a computer program that will definitively determine whether any other program contains a certain kind of bug (an infinite loop; this is the famous halting problem). How about a program to emulate human language?
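For readers who haven’t seen that proof, here is a minimal sketch of the standard diagonalization argument in Python-flavored pseudocode. This is my own illustration, and the hypothetical `halts` oracle is precisely the thing being shown impossible:

```python
def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually stops,
    False if it loops forever. Assume, for contradiction, it exists."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    else:
        return        # oracle said "loops", so halt immediately

# Does paradox(paradox) halt? If halts(paradox, paradox) is True,
# then paradox loops forever; if False, it halts at once. Either way
# the oracle is wrong, so no such halts() can be written.
```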
One of the most famous thought experiments to deal with this question is the Chinese Room, created by John Searle back in 1980. The thought experiment is meant to be a refutation of the idea that a computer program, even in theory, could be intelligent. It goes like this:
Suppose you have a computer in a room. The computer is fed a question in Chinese, and it matches the question against a database in order to find a response. The computer program is very good, and its responses are indistinguishable from those of a human Chinese speaker. Can you say that this computer understands Chinese?
Searle says, “No.” To make it even clearer, suppose the computer is replaced by you and a look-up table. Occasionally, sentences in Chinese come in through a slot in the wall. You can’t read Chinese, but you have been given a rule book for manipulating the Chinese symbols into an output that you push out the “out” slot in the wall. You are so good at using these rules that your responses are as good as those of a native Chinese speaker. Is it reasonable to say that you know Chinese?
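To see how spare this set-up really is, here is a toy version of the room as a literal look-up table. This is my own sketch, and the particular phrases are just illustrative filler:

```python
# The whole "rule book" is a dict mapping input sentences to canned replies.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "Lovely today."
}

def chinese_room(message: str) -> str:
    # The operator matches symbols without understanding any of them.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```

The argument below is that nothing remotely this simple could ever keep up with a native speaker.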
The answer is, of course, that you don’t know Chinese. Searle believes that this demonstrates that computers cannot understand language and, scaling the argument up, cannot be conscious, have beliefs or do anything else interesting and mentalistic.
One common rebuttal to this argument is that the system that is the room (input slot, human, look-up table) knows Chinese, even though none of its parts do. This is attractive, since in some sense the same is true of our brains, the only systems we know of that do in fact understand language. The individual parts (neurons, neuron clusters, etc.) do not understand language, but the brain as a whole does.
It’s an attractive rebuttal, but I think there is a bigger problem with Searle’s argument. The thought experiment rests on the presupposition that the Chinese Room would produce good Chinese. Is that plausible?
If the human in the room had only a dictionary, clearly not: trying to translate based on dictionaries produces terrible language. Of course, Searle’s Chinese Room does not use a dictionary. The computer version of it uses a database. If this is a simple database with two columns, one for input and one for output, it would have to be infinitely large to perform as well as a human Chinese speaker: as Chomsky famously demonstrated long ago, the number of sentences in any language is infinite. (The computer program could be more complicated, it is true. At an AI conference I attended several years ago, template-based language systems were all the rage. These systems try to fit every input into one of many template sentences; responses, similarly, are assembled from templates. They work much better than earlier computerized efforts, but they are still very restricted.)
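For a flavor of those template systems, here is a bare-bones, ELIZA-style sketch. It is my own toy example, not any specific system from that conference:

```python
import re

# Input is matched against hand-written patterns; the reply is built
# by splicing the captured text into a response template.
TEMPLATES = [
    (re.compile(r"i am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i like (.+)", re.I), "What do you like about {0}?"),
    (re.compile(r"(.+)\?$"), "Why do you ask?"),
]

def respond(utterance: str) -> str:
    for pattern, template in TEMPLATES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # catch-all when no template fits

print(respond("I am worried about computers"))  # Why do you say you are worried about computers?
print(respond("Do computers think?"))           # Why do you ask?
```

Anything the pattern writers didn’t anticipate falls straight through to the catch-all, which is exactly the restriction I mean.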
The human version of the Chinese Room Searle gives us is a little different. In that one, the human has a set of rules to apply to the input to produce an output. In Minds, Brains and Science, which contains the version of this argument that I’m working from, he isn’t very explicit about how this would work, but I assume it is something like a grammar for Chinese. Even supposing that grammar rules could be applied without any knowledge of what the words mean, the fact is that after decades of research, linguists still haven’t worked out a complete grammatical description of any living language.
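To make that concrete, here is a tiny generator over a hand-written fragment of English grammar. It is my own illustration of rules-without-meaning; nothing like this appears in Searle:

```python
import random

# A few rewrite rules: each symbol expands to one of its alternatives.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "Adj", "N"], ["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "Adj": [["colorless"], ["green"]],
    "N":   [["idea"], ["room"], ["speaker"]],
    "V":   [["sleeps"], ["answers"], ["understands"]],
}

def generate(symbol: str = "S") -> str:
    if symbol not in GRAMMAR:  # a terminal: an actual word
        return symbol
    expansion = random.choice(GRAMMAR[symbol])
    return " ".join(generate(s) for s in expansion)

print(generate())  # e.g. "a colorless idea answers the room"
```

Every output is syntactically well-formed and most are meaningless, which is roughly what rule-following without understanding buys you.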
The Chinese Room would require a much, much more sophisticated system than what Searle grants. In fact, it requires something so complicated that nobody even knows what it would look like. The only machine currently capable of processing human language as well as a human is the human brain. Searle’s conceit was that we could have a “dumb” algorithm — essentially a look-up table — that processed language. We don’t have one. Maybe we never will. Maybe in order to process human language at the same level of sophistication as a human, the “system” must be intelligent, must actually understand what it’s talking about.
This brings us to the flip side of Searle’s thought experiment: Turing’s. Turing proposed to test the intelligence of computers this way: once a computer can compete effectively in parlor games (his example was the “imitation game”), it’s reasonable to assume it’s as intelligent as a human. The parlor game in question isn’t important; what’s important is the flexibility it requires. Modern versions of the Turing Test focus on the computer being able to carry on a normal human conversation — essentially, to do what the Chinese Room would be required to do. The Turing assumption is that the simplest possible method of producing human-like language requires cognitive machinery on par with a human.
If anybody wants to watch a dramatization of these arguments, I suggest the current re-imagining of Battlestar Galactica. The story follows a war between humans and intelligent robots. The robots clearly demonstrate emotions, intelligence, pain and suffering, but the humans are largely unwilling to believe any of it is real. “You have software, not feelings,” is the usual refrain. Some of the humans begin to realize that the robots are just as “real” to them as the other humans. The truth is that our only evidence that other humans really have feelings, emotions, consciousness, etc., is through their behavior.
Since we don’t yet have a mathematical proof one way or the other, I’ll have to leave it at that. In the meantime, having spent a lot of time struggling with languages myself, I find the Turing view much more plausible than Searle’s.