Intelligence Is Impossible to Define or Measure?

In this essay I will trace the development of the concept of intelligence and the various ways of measuring it. I will discuss how, from the early twentieth century onwards, intelligence came to assume such importance, and how knowledge and understanding of the concept grew over the course of the century. I will briefly describe the origins of the concept of intelligence and also mention the most recent developments in the subject, such as multiple intelligences and artificial intelligence (AI).
Various definitions of intelligence have been produced and psychologists have so far been unable to agree on a common definition. This indicates the complexity of the subject and the diverse ways of looking at it. Some of the definitions used during the twentieth century will be mentioned.
The continuing controversy as to whether intelligence is shaped mainly by heredity or by environment will also be discussed. As concepts such as intelligence are always value laden, the political and ideological consequences of intelligence testing will also be briefly explored in relation to educational and racial issues.
The issue of intelligence and intelligence testing first arose out of problems in the education system. Teachers found that some children made slower progress in their studies, and explained this in terms of deficient capacity. The school administrators found this explanation too simple: they believed poor academic performance could equally be due to insufficient teaching. It therefore became important to establish whether the cause lay in the individual or in the instruction. According to C.J. Adcock (1965, p.181), "It was this educational problem which led to the first effective tests of intelligence".
In 1904, in France, A. Binet and Th. Simon were asked to create an intelligence test addressing the problems of children who could not learn. Their objective was to devise a way of assessing intellectual performance, and they constructed a test to assess children who were not doing as well as their peers. Binet and Simon were the first to devise such an intelligence test, intended to assess a child's ability and to identify children then labelled "defective". This was to provide an answer to teachers who complained that some children were ineducable. A child's mental age was determined by the level of error in their responses, for example performance on test items below the age-related norm.
Galton also produced a number of tests in connection with his interest in human heredity and is regarded as one of the most important pioneers in the development of mental testing. James McKeen Cattell was likewise the author of several tests, and was the first actually to use the term "mental test". However, it was the work of Binet and Simon that got the development of mental testing properly under way. Binet's approach was to ask a variety of questions that could be answered at different stages of development, from which it was possible to obtain a measure of someone's mental age. Comparing the intelligence of children of different ages, however, proved problematic: something more than mental age alone was needed. For instance, an older child might have a greater mental age than a younger one yet be less intelligent, in that his or her mental age fell below the average for his or her age group, while the younger child, despite a lower mental age, might score above the average for his or her own age group.
Therefore, given the need to compare children of different ages, Stern, a German psychologist, suggested the Intelligence Quotient, known as IQ, which is the mental age divided by the chronological age, multiplied by one hundred. If the two ages are the same then the IQ is one hundred and the child is deemed to be of average intelligence. If, however, the mental age is higher, then the IQ exceeds one hundred.
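Stern's ratio can be sketched in a few lines of code. The function name and the sample ages below are illustrative, not taken from any historical source; only the formula itself (mental age divided by chronological age, times one hundred) comes from the text above.

```python
def stern_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age / chronological age * 100."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of a typical 12-year-old scores above average:
print(stern_iq(12, 10))  # 120.0
# A mental age equal to chronological age gives the average score of 100:
print(stern_iq(8, 8))    # 100.0
```

The ratio makes scores from children of different ages directly comparable, which is exactly the problem mental age alone could not solve.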
Nowadays, however, any good intelligence test must have three characteristics. 1) The test must be reliable: it must consistently give similar results, which can be shown by the test-retest method. 2) It must be valid: it must measure what it claims to measure. It may be pointed out here that there are three different ways of assessing validity: content validity, empirical validity and construct validity. 3) It must be standardised: the norms must be representative of the population in question, so that individual scores can be compared against the standardised scores. A normal distribution is normally produced from the large sample of the population tested, and individual subjects are located within it.