Artificial Intelligence
Of the many technological advances of the modern era, none is as intricate as artificial intelligence. The notion that a non-human, synthetic system could advance to the point of imitating human behavior is enough to make people shudder. Artificial intelligence has existed as a field since the 1950s, and it continues to develop rapidly. Since its establishment, AI has kept pace with technology and now assists us in almost all of our daily activities. If we were to lose our AI capabilities, the impact on our way of living would be profound.
Artificial intelligence is defined as the capability of a machine to demonstrate cognitive thought and reasoning. Scientists and theorists continue to debate whether computers will ever truly be able to reason for themselves (Patterson 7). The commonly accepted view is that computers will become far more capable in the future. AI has developed swiftly in the last ten years, largely because of advances in computer design. The field was formally founded in 1956, when a group of scientists held the first conference on the topic (Patterson 6). The first attempts at AI were neural networks modeled after the networks of neurons in the human brain. Success was minimal at best because the computer technology needed to perform such large computations did not yet exist.
Artificial intelligence is also defined as "a branch of computer science dealing with the simulation of intelligent behavior in computers" (Merriam-Webster). It is a division of computer science that develops programs to allow machines to perform functions normally requiring human intelligence. It is the study and design of intelligent agents: systems that perceive their environments and take actions to maximize their chance of success. Artificial intelligence began as a field whose objective was to imitate human-level intelligence in a machine. Initial hopes dimmed as the scope and difficulty of that goal came to be appreciated. Steady progress was made over the following 25 years in demonstrating isolated facets of intelligence. Recent work has tended to concentrate on commercial applications of "intelligent assistants" for human workers.
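To make the agent definition above concrete, here is a minimal sketch in Python of the perceive-and-act loop an intelligent agent runs. The thermostat scenario, the set_point parameter, and all other names in it are invented for illustration and are not drawn from any source cited here.

    # A minimal sketch of an intelligent agent: a system that perceives its
    # environment and takes actions to move toward its goal.
    # The thermostat domain and its parameters are hypothetical examples.

    class ThermostatAgent:
        def __init__(self, set_point: float):
            self.set_point = set_point  # temperature the agent tries to maintain

        def perceive(self, temperature: float) -> float:
            # In a real system this would read a sensor; here the percept
            # is passed in directly.
            return temperature

        def act(self, percept: float) -> str:
            # Choose the action expected to move the environment toward the goal.
            if percept < self.set_point - 1.0:
                return "heat_on"
            elif percept > self.set_point + 1.0:
                return "heat_off"
            return "idle"

    # One step of the perceive-decide-act loop:
    agent = ThermostatAgent(set_point=20.0)
    print(agent.act(agent.perceive(17.5)))  # -> "heat_on"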
Throughout human history, people have used technology to model themselves. There is evidence of this from ancient Egypt, Greece, and China. Each new technology has, in its turn, been exploited to build intelligent agents or models of the mind. Telephone switching systems, digital computers, and analog computers have all been proposed both as technological metaphors for intelligence and as mechanisms for modeling the human mind.
Roughly four hundred years ago, people began to write about the nature of thought and reason. Hobbes, described by Haugeland (1985) as the "Grandfather of AI," espoused the position that thinking was symbolic reasoning, "like talking out loud or working out an answer with pen and paper."
The idea of symbolic operations became more concrete with the development of computers. In the first part of the twentieth century, a considerable amount of work was done on understanding computation. Several models were proposed, including the Turing machine, conceived by Alan Turing: a theoretical machine that writes symbols on an infinitely long tape. It was shown that these different formalisms are equivalent, in that any function computable by one is computable by the others.
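As an illustration of the symbolic computation described above, the following is a minimal Turing machine simulator in Python. The simulator itself and the example program (which flips a string of bits) are invented here for illustration; they are not drawn from Turing's paper or any cited source.

    # A minimal sketch of a Turing machine: a theoretical device that reads
    # and writes symbols on an unbounded tape, one cell at a time.
    # The bit-flipping program below is a hypothetical example.

    def run_turing_machine(program, tape, state="start", head=0, blank="_"):
        tape = list(tape)
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            # Each transition maps (state, symbol) -> (new state, write, move)
            state, write, move = program[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Example program: flip every bit, halting at the first blank cell.
    flip_bits = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }

    print(run_turing_machine(flip_bits, "1011_"))  # -> "0100_"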
Once computers were built, some of their earliest applications were AI programs. In 1952, Samuel wrote a checkers program, and he later extended it into a program that learns to play checkers. Simon and Newell built the Logic Theorist, a program that discovers proofs in propositional logic. Earlier, in 1943, McCulloch and Pitts had shown how a simple "formal neuron" could be the basis for a Turing-complete machine. The first learning procedures for these neural networks were described by Minsky.
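The McCulloch-Pitts "formal neuron" is simple enough to sketch directly. The version below is a common textbook rendering, a threshold unit over binary inputs, rather than the authors' original notation; the weights and thresholds used to build the logic gates are standard illustrative choices.

    # A minimal sketch of a McCulloch-Pitts formal neuron: the unit fires (1)
    # when the weighted sum of its binary inputs reaches a threshold.
    # The gate parameters below are standard textbook illustrations.

    def formal_neuron(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Logic gates built from single neurons; networks of such gates can,
    # in principle, compute anything a Turing machine can.
    def AND(a, b):
        return formal_neuron([a, b], [1, 1], threshold=2)

    def OR(a, b):
        return formal_neuron([a, b], [1, 1], threshold=1)

    def NOT(a):
        return formal_neuron([a], [-1], threshold=0)

    print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0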
In 1956, John McCarthy, regarded as the father of AI, organized a conference to draw on the talents and knowledge of others interested in machine intelligence for a month of brainstorming. He invited them to Dartmouth College in New Hampshire for "The Dartmouth Summer Research Project on Artificial Intelligence." It is because of McCarthy that the field came to be known as artificial intelligence. Although not an enormous success, the conference brought together the founders of AI and laid the groundwork for the field's future. McCarthy later produced another major breakthrough: in 1958 he announced the LISP language, which is still used to this day. LISP stands for LISt Processing, and it became the language of choice among most AI developers.
Throughout the 1970s and 1980s, there was a vast body of work on expert systems. The purpose was to capture the knowledge of an expert in a domain so a computer could carry out tasks that would normally require that expert.
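A minimal sketch of the expert-system idea follows: domain knowledge is written down as if-then rules, and a simple forward-chaining engine applies them to known facts until nothing new can be derived. The animal-identification rules and facts here are hypothetical toy examples, not taken from any actual expert system.

    # A minimal sketch of a rule-based expert system using forward chaining:
    # if all of a rule's conditions are known facts, its conclusion is added
    # as a new fact, until no rule can derive anything further.
    # The rules below are hypothetical toy knowledge for illustration.

    rules = [
        ({"has_feathers"}, "is_bird"),
        ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # derive a new fact from the rule
                    changed = True
        return facts

    print(forward_chain({"has_feathers", "cannot_fly", "swims"}, rules))
    # -> includes "is_bird" and "is_penguin"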