Can a machine display general intelligence?
Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This is the question that AI researchers are most interested in answering. It defines the scope of what machines will be able to do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it doesn't matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.[7]
The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth Conference of 1956:
Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.[3]
Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers, or because there is some special quality of the human mind that is necessary for thinking and yet can't be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.
The first step to answering the question is to clearly define “intelligence.”
Intelligence
Turing test
Main article: Turing test
Alan Turing, in a famous and seminal 1950 paper,[8] reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.[2] Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes, "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks."[9] Turing's test extends this polite convention to machines:
If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
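To make the experimental design concrete, here is a minimal sketch in Python of the chat-room version described above. The `Participant` interface and the judge functions are illustrative assumptions, not anything from Turing's paper: a judge exchanges questions with two unlabeled participants and then guesses which one is human.

```python
import random

class Participant:
    """A human at a keyboard or a chat program; the judge sees only text replies."""
    def __init__(self, reply_fn):
        self.reply_fn = reply_fn  # callable: question -> answer

    def reply(self, question):
        return self.reply_fn(question)

def imitation_game(human, machine, questions, judge_guess):
    """One round of the test: the judge sees the participants only as 'A' and 'B'."""
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:               # randomize positions so labels carry no clue
        labels = {"A": machine, "B": human}

    transcript = [(label, q, p.reply(q))
                  for q in questions
                  for label, p in labels.items()]

    guess = judge_guess(transcript)          # judge returns the label believed to be human
    return labels[guess] is human            # True if the judge identified the human

# The machine "passes" if, over many rounds and many judges,
# the judges' guesses are correct no more often than chance (about 50%).
```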
Human intelligence vs. intelligence in general
One criticism of the Turing test is that it is explicitly anthropomorphic. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people? Russell and Norvig write that “aeronautical engineering texts do not define the goal of their field as making machines that fly so exactly like pigeons that they can fool other pigeons.”[10] Recent AI research defines intelligence in terms of rational agents or intelligent agents. An “agent” is something which perceives and acts in an environment. A “performance measure” defines what counts as success for the agent.[11]
If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge, then it is intelligent.[12]
Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they don't also test for human traits that we may not want to consider intelligent, like the ability to be insulted or the temptation to lie. They have the disadvantage that they fail to make the commonsense differentiation between "things that think" and "things that don't". By this definition, even a thermostat has a rudimentary intelligence.
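As an illustration only (the class names and the particular performance measure below are assumptions, not drawn from the cited texts), here is a minimal agent interface in Python with a thermostat expressed in its terms:

```python
class Agent:
    """Minimal agent interface: perceive the environment, return an action."""
    def act(self, percept):
        raise NotImplementedError

class Thermostat(Agent):
    """Under the agent definition, even a thermostat counts: its percept is the
    room temperature and its actions are switching the heater on or off."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, temperature):
        return "heat_on" if temperature < self.setpoint else "heat_off"

def performance(temperatures, setpoint, tolerance=1.0):
    """One possible performance measure: the fraction of observed temperatures
    within `tolerance` degrees of the setpoint."""
    near = [t for t in temperatures if abs(t - setpoint) <= tolerance]
    return len(near) / len(temperatures)

# Example: the thermostat "succeeds" to the degree it keeps readings near 20 degrees.
print(performance([18.5, 19.8, 20.3, 21.5], setpoint=20.0))  # 0.5
```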
Arguments that a machine can display general intelligence
The brain can be simulated
Main article: artificial brain
Figure: An MRI scan of a normal adult human brain
Marvin Minsky writes that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device."[13] This argument, first introduced as early as 1943[14] and vividly described by Hans Moravec in 1988,[15] is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029.[16]
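For a sense of the scale involved, here is a rough back-of-the-envelope estimate in the spirit of Moravec's and Kurzweil's arguments; the figures are common order-of-magnitude values, not numbers taken from the cited sources:

```python
# Order-of-magnitude estimate of the compute needed to simulate a brain at the
# level of individual synaptic events. All figures are rough approximations.
neurons = 1e11              # ~100 billion neurons in a human brain
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
avg_firing_rate_hz = 100    # generous average firing rate

synaptic_events_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"~{synaptic_events_per_second:.0e} synaptic events per second")  # ~1e+17

# Whether ~1e17 operations per second is "enough" depends on how much detail
# each synaptic event requires, which is precisely what the argument leaves open.
```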
Few disagree that a brain simulation is possible in theory, even critics of AI such as Hubert Dreyfus and John Searle.[17] However, Searle points out that, in principle, anything can be simulated by a computer, and so any process at all can be considered "computation" if you're willing to stretch the definition to the breaking point. "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes.[18] Any argument that involves simply copying a brain is an argument that admits we know nothing about how intelligence works: "If we had to know how the brain worked to do AI, we wouldn't bother with AI."[19]
Human thinking is symbol processing
Main article: physical symbol system
In 1963, Allen Newell and Herbert Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:
A physical symbol system has the necessary and sufficient means of general intelligent action.[4]
This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).[20] Another version of this position was described by philosopher Hubert Dreyfus, who called it “the psychological assumption”:
The mind can be viewed as a device operating on bits of information according to formal rules.[21]
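A toy illustration of this "formal rules operating on symbols" picture may help; the following is a hypothetical miniature production system, not a reconstruction of Newell and Simon's programs. Symbols are plain tokens, and the system repeatedly applies if-then rules to a working memory of such tokens.

```python
# A miniature production system: each rule maps a set of symbol tokens
# (conditions) to a new symbol token (conclusion); "thinking" here is just
# repeated rule application over a working memory of symbols.
rules = [
    (frozenset({"socrates_is_human", "humans_are_mortal"}), "socrates_is_mortal"),
]

def run(facts, rules):
    """Apply rules until no new symbols can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(run({"socrates_is_human", "humans_are_mortal"}, rules))
# {'socrates_is_human', 'humans_are_mortal', 'socrates_is_mortal'}  (order may vary)
```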
A distinction is usually made between the kind of high-level symbols that directly correspond with objects in the world, such as "dog" and "tail", and the more complex "symbols" that are present in a machine like a neural network.