History of Computing
General principles
Etymology (Where the word is from)
The exponential progress of computer development
Classification of computers
Classification by intended use
Classification by implementation technology
Classification by design features
4.3.1 Digital versus analog
4.3.2 Binary versus decimal
4.3.3 Programmability
4.3.4 Storage
Classification by capability
4.4.1 General-purpose computers
4.4.2 Special-purpose computers
4.4.3 Single-purpose computers
Classification by type of operation
Computer applications
The Internet
How computers work
Instructions
Memory
Processing (Processor)
Control (Control Unit)
Input and output
Architecture
Programs
6.7.1 Operating system
A computer is a device or machine for making calculations or controlling operations that are expressible in numerical or logical terms. Computers are made from components that perform simple, well-defined functions. The complex interactions of these components give computers the ability to process information. If correctly configured, a computer can be made to represent some aspect of a problem or part of a system. If a computer configured in this way is given input data, it can automatically solve the problem or predict the behavior of the system.
General principles
Computers can work through the movement of mechanical parts, electrons, photons, quantum particles or any other well-understood physical phenomenon. Although computers have been built out of many different technologies, nearly all popular types of computers have electronic components.
Computers may directly model the problem being solved, in the sense that the problem is mapped as closely as possible onto the physical phenomena being exploited. For example, electron flows might be used to model the flow of water in a dam. Such analog computers were common in the 1960s but are now practically extinct.
In most computers today, the problem is first translated into mathematical terms by rendering all relevant information into the binary (base-two) numeral system. Next, every operation on that information is reduced to simple Boolean algebra.
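As a concrete illustration of that first translation step, here is a minimal sketch in Python (my own example, not part of the original essay) that renders a decimal number into base two and recovers it:

```python
# Encode the decimal number 13 in base two and decode it again.
n = 13
bits = []
while n > 0:
    bits.append(n % 2)   # each remainder is the next binary digit
    n //= 2
bits.reverse()           # most significant bit first -> [1, 1, 0, 1]

# Recover the decimal value from the binary digits.
value = 0
for b in bits:
    value = value * 2 + b
print(bits, value)       # [1, 1, 0, 1] 13
```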
Electronic circuits are then used to represent Boolean operations. Since almost all of mathematics can be reduced to Boolean operations, a sufficiently fast electronic computer is capable of solving almost any mathematical problem (and the majority of information-processing problems that can be translated into mathematical ones). This basic idea, which made modern digital computers possible, was formally identified and explored by Claude E. Shannon.
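To make that reduction concrete, the sketch below builds a one-bit full adder out of nothing but AND, OR and XOR, the same operations an electronic circuit supplies. The function names are illustrative choices, not a standard API:

```python
def full_adder(a, b, carry_in):
    """Add three bits using only the Boolean operations AND, OR and XOR."""
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 3 + 5, with bits written least significant first:
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1], i.e. binary 1000 = 8
```

Chaining such one-bit adders together is essentially how hardware adds whole numbers.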
Computers cannot solve all mathematical problems. Alan Turing identified which problems could and could not be solved by computers, and in doing so founded theoretical computer science.
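Turing's core argument can be sketched briefly. Suppose a hypothetical function halts(program, data) existed that always correctly predicted whether a program finishes; the self-referential program below would then defeat it, so no such function can exist. (The names are illustrative; the oracle is assumed for the sake of contradiction, not implementable.)

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually stops.
    Assumed for the sake of contradiction -- it cannot actually be written."""
    raise NotImplementedError

def contrary(program):
    # Loop forever exactly when the oracle predicts program(program) halts.
    if halts(program, program):
        while True:
            pass

# contrary(contrary) contradicts the oracle either way:
# if halts(contrary, contrary) is True, contrary loops forever;
# if it is False, contrary returns at once. Hence no general halts() exists.
```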
When the computer has finished calculating the problem, the result must be presented to the user as output through output devices such as light bulbs, LEDs, monitors, projectors and printers.
Novice users, especially children, often have difficulty understanding the important idea that the computer is only a machine, and cannot “think” or “understand” the words it displays. The computer simply performs a mechanical lookup on preprogrammed tables of lines and colors, which the output device then translates into arbitrary patterns of light. It is the human brain that recognizes those patterns as letters and numbers and attaches meaning to them. All that existing computers do is manipulate electrical signals that are treated as ones and zeroes; there is no known way to emulate human comprehension or self-awareness.
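That "mechanical lookup on preprogrammed tables" can be illustrated with a toy sketch (an invented bitmap table of my own, not a real font format): the machine copies pixel patterns to the screen, and the reader supplies the meaning.

```python
# A toy bitmap "font": each character maps to five rows of pixel patterns.
FONT = {
    "H": ["X.X", "X.X", "XXX", "X.X", "X.X"],
    "I": ["XXX", ".X.", ".X.", ".X.", "XXX"],
}

def render(text):
    # Stitch the looked-up patterns together row by row; the machine
    # has no notion that the result spells anything.
    for row in range(5):
        print("  ".join(FONT[ch][row] for ch in text))

render("HI")
```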
Etymology (Where the word is from)
The word was originally used to describe a person who performed calculations, and this usage is still valid. The OED2 lists 1897 as the first year in which the word was used to refer to a mechanical calculating device. By 1946 several qualifiers had been introduced by the OED2 to differentiate between the different types of machine, including analogue, digital and electronic. However, from the context of the citations it is clear these terms were in use before 1946.
The exponential progress of computer development
Computing devices have doubled in capacity every 18 to 24 months since 1900. Gordon E. Moore, co-founder of Intel, first described this property of computer development in 1965. His observation has become known as Moore's Law, although it is of course not actually a law but a significant trend. Hand-in-hand with this increase in capacity per unit cost has been an equally dramatic process of miniaturization. The first electronic computers, such as the ENIAC (announced in 1946), were huge devices that weighed tons, occupied entire rooms and required many operators to function; they could typically run for only a few hours at a time without errors. They were so expensive that only governments and large research organizations could afford them, and they were considered so exotic that only a handful were expected ever to be required to satisfy global demand. By contrast, modern computers are more powerful, less expensive, smaller and available in many more areas. This exponential progress makes the classification of computers problematic, since modern computers are vastly more powerful than earlier devices.
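As a rough worked example of what an 18-to-24-month doubling period implies (my own back-of-the-envelope arithmetic, not a figure from the essay):

```python
# Growth factor after t years with a doubling period of p months:
#   factor = 2 ** (12 * t / p)
for p in (18, 24):                   # doubling period in months
    factor = 2 ** (12 * 10 / p)      # growth over one decade
    print(f"doubling every {p} months -> x{factor:,.0f} in 10 years")
# doubling every 18 months -> x102 in 10 years
# doubling every 24 months -> x32 in 10 years
```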
Classification of computers
The following sections describe different approaches to classifying computers.
Classification