The Chinese Room
The Chinese Room argument, created by John Searle, is an argument against the possibility of true artificial intelligence. The argument centers on a thought experiment in which a man who knows only English is alone in a room, using English instructions to manipulate strings of Chinese symbols. From the outside, it appears as if someone in the room understands Chinese. The argument is meant to show that while properly programmed machines may appear to converse in natural language, they lack the ability to understand language, even in principle. Searle argues that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Searle's argument is a blunt challenge to advocates of artificial intelligence, and it also has broad implications for functionalist and computational theories of mind. As a result, there have been many critical replies to the argument. My aim is to show why Searle's argument fails: it does not sufficiently discredit the Turing Test as a standard of artificial intelligence.
Alan Turing, the father of modern computer science, inspired many debates on whether computers would ever be able to think for themselves. He created a test for rating the ability of machines to think. The test was simple: if a computer can perform in such a way that an expert interrogator cannot distinguish it from a human, then the computer can be said to think. Searle targets this test in arguing that true artificial intelligence will never be achieved. He devised a thought experiment that, he believed, would pass the Turing Test without any genuine understanding.
The Chinese Room argument has, admittedly, some intuitive force. Searle introduces the following scenario. Put a person who speaks only English in a room with only an instruction book, written in English, and a slot for communicating with someone on the outside. Through the slot are passed questions written in Chinese, which the man must answer. Chinese speakers outside then wait for the man to follow the instruction book and produce a response. We assume that the instruction book has codified all the rules needed to respond fluently through mere manipulation of Chinese symbols. The man follows the rules perfectly and supplies impeccable Chinese answers to the questions. The people outside the room conclude that the English-speaking man also speaks Chinese. The man symbolizes the computer, and the set of instructions represents the computer program. According to Searle, although the man in the room would pass the Turing Test, it cannot be said that he understands Chinese the way the people outside the room do. Searle concludes that "just manipulating the symbols is not by itself enough to guarantee cognition, perception, understanding, thinking, and so forth."
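To make the "mere symbol manipulation" point concrete, the rulebook can be pictured as a pure lookup table. The sketch below is my own illustration, not Searle's formulation, and the question-and-answer pairs are hypothetical: the program maps input strings to output strings by their shape alone, with no representation of what any symbol means.

```python
# A hypothetical rulebook: input symbol strings mapped to output symbol
# strings. Nothing here encodes meaning; the entries are arbitrary pairs.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "会，我说得很流利。",
}

def room(question: str) -> str:
    # The "man in the room": match the shape of the incoming string and
    # copy out the prescribed response. No step involves understanding.
    return RULEBOOK.get(question, "请再说一遍。")

print(room("你好吗？"))  # the outsiders receive a fluent-looking answer
```

Whether such a table could ever be rich enough for real conversation is exactly what the thought experiment asks us to grant for the sake of argument.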
My criticism of the argument concerns his use of the premise that the man does not understand the conversation. Searle cannot validly infer from this that the system as a whole does not understand. There is no way to show that the system as a whole understands; but equally, one cannot prove that the system as a whole does not understand, which is what Searle wants us to believe. Consider an analogy with a pilot flying a helicopter. A pilot, by the laws of nature, cannot fly on his own, but that does not imply that when he is in a helicopter, the helicopter is also unable to fly. Searle would hardly contend that the helicopter cannot fly, yet he would have us accept the same broken logic when it is applied to his Chinese Room argument. The pilot is, after all, the symbol manipulator of the cockpit controls, just as the man is the symbol manipulator of the room. This analogy is not meant to show that the room can think; there is no way to prove that claim. The point is that one cannot conclude that the room does not understand merely from the fact that the man inside the room does not understand.
Searle anticipated this Systems Reply when the Chinese Room argument was first introduced, but I do not see how his counterargument succeeds. He modifies the design of the Chinese Room so that the man need not be inside it at all. Instead of the set of instructions serving as a separate entity, he suggests that we imagine the man following the rules entirely from memorization. He believes that the man still