Tuesday, May 10, 2005 | Last March, the College Board unveiled the newly revised SAT, which features an essay and other changes proposed by the author, the former president of the University of California, who played an integral role in reshaping the national standardized test.

This is part one in a four-part series.

My intent in this article is to offer a personal perspective on the events that led to a major change in the college admissions test known as the SAT. The new test is now in place for all students – nationwide – who must take the SAT as part of the admissions process for the college class entering in the fall of 2006. I hope this account will be useful to those trying to change policies and practices deeply entrenched in our society.

Before I begin, let me introduce some terminology. By the term “standardized test,” I mean simply a test administered under controlled conditions and carefully monitored to prevent cheating. I will also use the terms “aptitude test” and “achievement test.” Achievement tests are designed to measure mastery of a specific subject. In contrast, aptitude tests are designed to predict an individual’s ability to profit from a particular type of training or instruction. For example, an algebra test given at the end of a course would be classified as an achievement test, whereas a test given prior to the course – designed to predict the student’s performance in the algebra course – would be classified as an aptitude test. In actual practice, the distinction between achievement and aptitude tests is not as neat as these definitions might suggest, but the conceptual difference is useful.

After World War II, colleges and universities in the United States gradually adopted standardized tests as part of their admissions process. The test that was most widely selected was the Scholastic Aptitude Test, known as the SAT. Some schools used the American College Testing Program, but most institutions, particularly the more selective ones, chose the SAT.

The College Board (the nonprofit organization that owns the SAT) has made a series of changes in the test since its inception. The original SAT became the SAT I – a three-hour test that continued to focus on verbal aptitude but added a quantitative section covering mathematical topics typically taught in grades one through eight. In addition, the College Board developed 23 one-hour SAT II tests designed to measure a student’s achievement in specific subjects such as physics, chemistry, history, mathematics, writing and foreign languages. Most colleges and universities required just the SAT I, but some required the SAT I plus two or three SAT II tests.

Today, when the SAT is mentioned in the media, the reference is invariably to the SAT I. The test has become a key factor in determining who is admitted – and who is rejected – at the more selective institutions.

My concerns about the SAT date back to the late 1940s when I was an undergraduate at the University of Chicago. Many of the Chicago faculty were outspoken critics of the SAT and viewed it as nothing more than a multiple-choice version of an IQ test; they argued forcefully for achievement tests in the college admissions process. Their opposition may have been influenced to some degree by school rivalry; the leading force behind the SAT at that time was James B. Conant, the president of Harvard University. Eventually Chicago adopted the SAT, but not without controversy.

In the years after leaving the University of Chicago, I followed the debates about the SAT and IQ tests with great interest. I knew that Carl Brigham, a psychologist at Princeton who created the original SAT, modeled the test after earlier IQ tests and regarded it as a measure of innate mental ability. But years later, he expressed doubts about the validity of the SAT and worried that preparing for the test distorted the educational experience of high school students.

Conant also expressed serious reservations about the test later in his life. When students asked me about IQ testing, I frequently referred them to Stephen Jay Gould’s book “The Mismeasure of Man.” It is a remarkable piece of scholarship that documented the widespread misuse of IQ tests.

I knew both Dick Herrnstein at Harvard and Art Jensen at UC Berkeley personally, and kept track of their controversial work on IQ. And, of course, I was a longtime member of the faculty at Stanford University, where the Stanford-Binet Intelligence Scales were developed.

Over the intervening years, my views about IQ testing have been mixed. In the hands of a trained clinician, tests like the Wechsler Intelligence Scales or the Stanford-Binet Intelligence Scales are useful instruments in the diagnosis of learning problems; they can often identify someone with potential who, for whatever reason, is failing to live up to that potential. However, such tests do not have the necessary validity or reliability to justify ranking individuals of normal intelligence, let alone to make fine judgments among highly talented individuals.

My views are similar to those of Alfred Binet, the French psychologist who, in the early years of the last century, devised the first IQ tests. Binet was very clear that these tests could be useful in a clinical setting, but rejected the idea that they provided a meaningful measure of mental ability that could be used to rank order individuals. Unfortunately, his perspective was soon forgotten as the IQ testing industry burst onto the American scene.

Richard Atkinson is president emeritus of the University of California. He served as president of the UC system from 1995 to 2003. Prior to that, Atkinson served as chancellor of UC San Diego, was director of the National Science Foundation and was a long-term member of the faculty at Stanford University.
