Examines the history of IQ tests, the Flynn effect, and the ethical implications of standardized testing.
If you were told your entire career path was decided by a single 60-minute test you took at age ten, would you find it fair—or would you question the test itself?
In the early 20th century, Lewis Terman of Stanford University adapted Alfred Binet's work to create the Stanford-Binet Intelligence Scale. This test popularized the Intelligence Quotient (IQ), a concept proposed by the German psychologist William Stern. The original calculation was based on the relationship between a child's Mental Age (MA), the chronological age that most typically corresponds to a given level of performance, and their actual Chronological Age (CA). While modern tests use 'deviation IQ' based on age-group norms, understanding the original ratio is fundamental to the history of psychometrics. This formula allowed psychologists to quantify intelligence as a single, comparable number for the first time.
To find the IQ of a child using the original Stanford-Binet method, use the formula:
IQ = (Mental Age / Chronological Age) × 100
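As a minimal sketch, the ratio calculation translates directly into a few lines of Python (the function name ratio_iq is just illustrative, not part of any standard library):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Original Stanford-Binet ratio IQ: (MA / CA) * 100."""
    if chronological_age <= 0:
        raise ValueError("chronological age must be positive")
    return (mental_age / chronological_age) * 100

# The Quick Check example below: an 8-year-old with a mental age of 10
print(ratio_iq(mental_age=10, chronological_age=8))  # 125.0
```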
Quick Check
If an 8-year-old child has a mental age of 10, what would their IQ score be?
Answer
125
Psychologists categorize mental assessments into two primary types. Aptitude tests are designed to predict a person's future performance or capacity to learn (e.g., the SAT). Achievement tests are designed to assess what a person has already learned (e.g., a unit exam in Psychology). While the distinction seems clear, they often overlap. A high score on an aptitude test often requires previous achievement in reading and logic. This overlap raises ethical questions: are we measuring innate potential, or simply the quality of the education the student has received?
Consider these two scenarios:
1. A student takes a 'Language Placement Exam' to see which level of Spanish they should enter. This is an achievement test because it measures current knowledge.
2. A company gives a 'Logical Reasoning Test' to job applicants to see how well they might handle complex coding tasks in the future. This is an aptitude test because it predicts future potential.
Quick Check
Is a final exam in a math class an aptitude test or an achievement test?
Answer
Achievement test
The Flynn Effect refers to the observed rise in average IQ scores worldwide over the last century. Because scores increase by roughly 3 points per decade, tests must be periodically 're-normed' to keep the average at 100. This phenomenon suggests that environmental factors—like better nutrition, increased schooling, and more complex environments—play a massive role in intelligence. Furthermore, critics argue that many IQ tests contain cultural bias, favoring those from specific socioeconomic or Western backgrounds. For example, a question about 'symphonies' assumes cultural exposure rather than raw cognitive ability.
Imagine a person took an IQ test in 1950 and scored a 100. If that same person took a modern version of the test today without any change in their actual cognitive ability, they would likely score significantly lower (around 75-80).
1. This is because the 'average' (100) today represents a much higher level of raw performance than it did in 1950 (the sketch below works through the arithmetic).
2. This forces psychologists to ask: Are we actually getting smarter, or are we just getting better at taking tests?
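As a rough illustration, the re-norming arithmetic can be sketched in Python. This assumes a constant gain of 3 points per decade, which is a simplification; the actual rate has varied by country and era:

```python
# Hypothetical illustration: the same raw performance earns a lower score
# as the population norm rises by roughly 3 points per decade.
POINTS_PER_DECADE = 3  # assumed constant Flynn-effect rate

def renormed_score(original_score: float, years_elapsed: float) -> float:
    """Score the same raw performance would earn against newer norms."""
    return original_score - POINTS_PER_DECADE * (years_elapsed / 10)

# A 1950 test-taker who scored 100, measured against norms 75 years later:
print(renormed_score(100, 75))  # 77.5 -- within the 75-80 range above
```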
Review Questions
1. If a 20-year-old man has the mental age of a 20-year-old, what is his IQ?
2. What is an example of an aptitude test?
3. True or false: The Flynn Effect suggests that human intelligence is strictly genetic and unchanging over generations.
Review Tomorrow
In 24 hours, try to recall the difference between 'Mental Age' and 'Chronological Age' and write down the IQ formula from memory.
Practice Activity
Find a sample question from a 'Culture-Fair' intelligence test (like Raven's Progressive Matrices) and compare it to a standard vocabulary-based IQ question.