Interest in intelligence dates back thousands of years, but it wasn’t until psychologist Alfred Binet was commissioned to identify students who needed educational assistance that the first IQ test was born.
Alfred Binet and the First IQ Test
During the early 1900s, the French government asked psychologist Alfred Binet to help decide which students were most likely to experience difficulty in school. The government had passed laws requiring all French children to attend school, so it was important to find a way to identify children who would need specialized assistance.
Faced with this task, Binet and his colleague Theodore Simon began developing questions that focused on abilities not explicitly taught in school, such as attention, memory, and problem-solving. Using these questions, Binet determined which ones served as the best predictors of school success. He quickly noticed that some children could answer advanced questions that only older children were typically able to answer, while other children of the same age could answer only questions typical of younger children. Based on this observation, Binet proposed the concept of a mental age: a measure of intelligence based on the average abilities of children in a given age group.
This first intelligence test, referred to today as the Binet-Simon Scale, became the basis for the intelligence tests still in use today. However, Binet himself did not believe that his psychometric instruments could be used to measure a single, permanent and inborn level of intelligence. Binet stressed the limitations of the test, suggesting that intelligence is far too broad a concept to quantify with a single number. Instead, he insisted that intelligence is influenced by a number of factors, changes over time and can only be compared among children with similar backgrounds.
The Stanford-Binet Intelligence Test
After the development of the Binet-Simon Scale, the test was soon brought to the United States where it generated considerable interest. Stanford University psychologist Lewis Terman took Binet’s original test and standardized it using a sample of American participants. This adapted test, first published in 1916, was called the Stanford-Binet Intelligence Scale and soon became the standard intelligence test used in the U.S.
The Stanford-Binet intelligence test used a single number, known as the intelligence quotient (or IQ), to represent an individual’s score on the test. This score was calculated by dividing the test taker’s mental age by their chronological age, then multiplying the result by 100. For example, a child with a mental age of 12 and a chronological age of 10 would have an IQ of 120 (12 / 10 × 100 = 120).
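The ratio formula above can be sketched in a few lines of Python (the function name is ours, purely for illustration):

```python
def ratio_iq(mental_age, chronological_age):
    """Ratio IQ as used by the original Stanford-Binet:
    mental age divided by chronological age, multiplied by 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(12, 10))  # → 120.0
```

Note that a child whose mental age matches their chronological age always scores exactly 100 under this formula, which is why 100 came to represent an "average" IQ.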
The Stanford-Binet remains a popular assessment tool today, despite going through a number of revisions over the years since its inception.
Intelligence Testing During World War I
At the outset of World War I, U.S. Army officials were faced with the monumental task of screening an enormous number of army recruits. In 1917, as president of the APA and chair of the Committee on the Psychological Examination of Recruits, psychologist Robert Yerkes developed two tests known as the Army Alpha and Beta tests. The Army Alpha was designed as a written test, while the Army Beta was a nonverbal, picture-based test given to recruits who were unable to read. The tests were administered to over two million soldiers in an effort to help the army determine which men were well suited to specific positions and leadership roles.
At the end of WWI, the tests remained in use in a wide variety of situations outside of the military with individuals of all ages, backgrounds and nationalities. For example, IQ tests were used to screen new immigrants as they entered the United States at Ellis Island. The results of these mental tests were inappropriately used to make sweeping and inaccurate generalizations about entire populations, which led some intelligence “experts” to urge Congress to enact immigration restrictions.
The Wechsler Intelligence Scales
The next development in the history of intelligence testing was the creation of a new measurement instrument by American psychologist David Wechsler. Much like Binet, Wechsler believed that intelligence involved a number of different mental abilities, describing intelligence as, “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” (1939). Dissatisfied with the limitations of the Stanford-Binet, he published his new intelligence test known as the Wechsler Adult Intelligence Scale (WAIS) in 1955.
Wechsler also developed two different tests specifically for use with children: the Wechsler Intelligence Scale for Children (WISC) and the Wechsler Preschool and Primary Scale of Intelligence (WPPSI). The adult version of the test has been revised since its original publication and is now known as the WAIS-III.
The WAIS-III contains 14 subtests on two scales and provides three scores: a composite IQ score, a verbal IQ score, and a performance IQ score. Subtest scores on the WAIS-III can be useful in identifying learning disabilities; for example, low scores in some areas combined with high scores in others may indicate a specific learning difficulty.
Rather than score the test based on chronological age and mental age, as was the case with the original Stanford-Binet, the WAIS is scored by comparing the test taker’s score to the scores of others in the same age group. The average score is fixed at 100, with two-thirds of scores lying in the normal range between 85 and 115. This scoring method has become the standard technique in intelligence testing and is also used in the modern revision of the Stanford-Binet test.
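The deviation-scoring idea described above can be illustrated with a minimal Python sketch. This is not the actual WAIS norming procedure (which uses standardized norm tables); it simply shows how a raw score is placed relative to an age group, with the group mean mapped to 100 and one standard deviation mapped to 15 points:

```python
from statistics import mean, stdev

def deviation_iq(raw_score, age_group_scores):
    """Deviation IQ: express a raw score relative to the test taker's
    own age group, rescaled so the group mean maps to 100 and one
    standard deviation maps to 15 IQ points."""
    mu = mean(age_group_scores)
    sigma = stdev(age_group_scores)
    z = (raw_score - mu) / sigma  # SDs above or below the group mean
    return 100 + 15 * z

# A raw score one standard deviation above its age-group mean becomes 115.
print(deviation_iq(60, [40, 50, 60]))  # → 115.0
```

A raw score equal to the group mean always maps to 100, regardless of the test taker's age, which is what makes scores comparable across age groups.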
What Is a Genius IQ Score?
When people talk about intelligence tests, they often discuss “genius scores”. What exactly constitutes a genius score on a measure of intelligence? In order to understand the score, it is important to first learn a little bit more about IQ testing in general.
Today’s intelligence tests are based largely on the original test devised in the early 1900s by French psychologist Alfred Binet. In order to identify students in need of extra assistance in school, the French government asked Binet to devise a test that could be used to discover which students most needed academic help.
Based on his research, Binet developed the concept of mental age. Certain questions he posed were easily answered by children of certain age groups. Some children were able to answer questions that were typically answered by children of an older age – these children had a higher mental age than their actual chronological age. Binet’s measure of intelligence was based on the average abilities of children of a particular age group.
Understanding IQ Scores
IQ scores generally follow what is known as the Bell Curve. In order to understand what the score on an IQ test means, there are a few key terms that you should know:
- Bell Curve: When IQ scores are plotted on a graph, they typically follow a bell-shaped curve. The peak of the “bell” occurs where the majority of the scores lie. The bell then slopes downward to each side – one side representing scores that are lower than the average, the other side representing scores that are above the average. An example of a bell curve can be seen in the image above.
- Mean: The average score. The average is calculated by adding all of the scores together, then dividing by the total number of scores.
- Standard Deviation: A measure of variability in a population. A low standard deviation means that most data points cluster close to the mean, while a high standard deviation means they are spread widely around it. In IQ testing, the standard deviation is 15 points.
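The two statistics defined above can be computed directly with Python's standard library. The scores below are a made-up set, used only to show the calculation:

```python
from statistics import mean, pstdev

# Hypothetical set of IQ scores, for illustration only
scores = [85, 90, 100, 100, 110, 115]

print(mean(scores))    # the average: sum of scores / number of scores → 100
print(pstdev(scores))  # population standard deviation, ≈ 10.41
```

For this small sample the mean works out to exactly 100; a real norming sample would contain thousands of scores, with the scale adjusted so the standard deviation comes out to 15.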
A Breakdown of IQ Scores
Now that you understand these key terms, we can talk a bit more about how IQ scores are interpreted. The average score on an IQ test is 100, and sixty-eight percent of IQ scores fall within one standard deviation of the mean. That means the majority of people have an IQ score between 85 and 115.
- 1 to 24 – Profound mental disability
- 25 to 39 – Severe mental disability
- 40 to 54 – Moderate mental disability
- 55 to 69 – Mild mental disability
- 70 to 84 – Borderline mental disability
- 85 to 114 – Average intelligence
- 115 to 129 – Above average; bright
- 130 to 144 – Moderately gifted
- 145 to 159 – Highly gifted
- 160 to 179 – Exceptionally gifted
- 180 and up – Profoundly gifted
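The breakdown above amounts to a simple range lookup. As a sketch, it can be expressed in Python (labels copied verbatim from the list; the code itself is illustrative, not part of any official scoring procedure):

```python
# Upper bounds (exclusive) and labels, taken from the breakdown above.
BANDS = [
    (25, "Profound mental disability"),
    (40, "Severe mental disability"),
    (55, "Moderate mental disability"),
    (70, "Mild mental disability"),
    (85, "Borderline mental disability"),
    (115, "Average intelligence"),
    (130, "Above average; bright"),
    (145, "Moderately gifted"),
    (160, "Highly gifted"),
    (180, "Exceptionally gifted"),
]

def classify(iq):
    """Return the label for the first band whose upper bound exceeds iq."""
    for upper_bound, label in BANDS:
        if iq < upper_bound:
            return label
    return "Profoundly gifted"  # 180 and up

print(classify(100))  # → Average intelligence
print(classify(150))  # → Highly gifted
```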
Genius IQ Scores
So what is considered a genius IQ score? Generally, any score over 140 is counted as a high IQ. A score over 160 is considered by many to be a genius IQ score, and scores of 200 and above are often referred to as “unmeasurable genius.”