Using Aggregate Scores 2007

Educators, the media and others:

Should use aggregate scores in conjunction with other factors — such as the number of courses taken in academic subjects, scores on other standardized tests, pupil/teacher ratios, teacher credentials, expenditures per student, participation rates, retention/attrition rates, graduation rates, and other outcome measures — for:

  • Evaluation of the general direction in which education in a particular jurisdiction is headed
  • Curriculum development
  • Faculty staffing
  • Student recruitment
  • Planning for physical facilities
  • Student services such as guidance and placement
  • Monitoring teacher development and curricular effectiveness over time

Should not rank or rate teachers, educational institutions, districts, or states solely on aggregate scores derived from tests that are intended primarily as measures of individual student performance.


A note on the use of aggregate SAT data

As measures of developed verbal and mathematical abilities important for success in college, SAT scores are useful in making decisions about individual students and assessing their academic preparation. Because of the increasing public interest in educational accountability, aggregate test data continue to be widely publicized and analyzed. Aggregate scores can be considered one indicator of educational quality when used in conjunction with a careful examination of other conditions that affect the educational enterprise.

However, it is important to note that many College Board tests are taken only by particular groups of self-selected students. Therefore, aggregate results of their performance on these tests usually do not reflect the educational attainment of all students in a school, district, or state.

Useful comparisons of students' performance are possible only if all students take the same test. Average SAT scores are not appropriate for state comparisons because the percentage of SAT takers varies widely among states. In some states, a very small percentage of the college-bound seniors take the SAT. Typically, these students have strong academic backgrounds and are applicants to the nation's most selective colleges and scholarship programs. Therefore, it is expected that the SAT verbal and mathematical averages reported for these states will be higher than the national average. In states where a greater proportion of students with a wide range of academic backgrounds take the SAT, and where most colleges in the state require the test for admission, the scores are closer to the national average.
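The self-selection effect described above can be illustrated with a small simulation. The numbers below are hypothetical, not real SAT data: both "states" draw from the same underlying score distribution, and only the participation pattern differs, yet the low-participation state reports a much higher average.

```python
import random

random.seed(0)

# Hypothetical pool of college-bound seniors; both states share the
# same underlying score distribution (mean 500, SD 100 -- illustrative).
population = [random.gauss(500, 100) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

# State A: low participation -- only the strongest students, those
# applying to selective colleges, take the SAT (top 10% here).
top_decile = sorted(population)[-1_000:]

# State B: high participation -- a broad cross-section takes the test.
broad_sample = random.sample(population, 8_000)

print(f"Low-participation state average:  {mean(top_decile):.0f}")
print(f"High-participation state average: {mean(broad_sample):.0f}")
```

Comparing the two printed averages says nothing about which state's schools are better; it reflects only who chose to take the test.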

In looking at average SAT scores, the user must understand the context in which the particular test scores were earned. Other factors variously related to performance on the SAT include the academic courses studied in high school, family background, and parents' education. These factors and others of a less tangible nature could very well have a significant influence on average scores.

(From Guidelines on the Uses of College Board Test Scores and Related Data. Copyright © 2002 by College Entrance Examination Board. All rights reserved.)


A word about comparing states and schools

The SAT is a strong indicator of trends in the college-bound population, but it should never be used alone for such comparisons because demographics and other nonschool factors can have a strong effect on scores. If ranked, schools and states that encourage students to apply to college may be penalized, because scores tend to decline as the percentage of test-takers rises. To illustrate the effect of that percentage, Table 3 lists states in order of participation.


How should colleges and universities use SAT scores in admissions?

SAT scores can make a significant contribution to admissions decisions when colleges, universities and systems of higher education use them properly. To advise these institutions on the proper use of SAT scores, Guidelines on the Uses of College Board Test Scores and Related Data (2002) indicate that the responsible officials and selection committee members at each institution should:

  • Know enough about tests and test data to ensure that their proper uses and limitations are understood and applied
  • Use SAT scores in conjunction with other indicators, such as the secondary school record (grades and courses), interviews, personal statements, writing samples, portfolios, recommendations, etc., in evaluating the applicant's admissibility at a particular institution
  • View admissions test scores as contemporary and approximate indicators rather than as fixed and exact measures of a student's preparation for college-level work
  • Evaluate test results and other information about applicants in the context of their particular background and experience, as well as in the context of the programs they intend to pursue
  • Ensure that small differences in test scores are not the basis for rejecting an otherwise qualified applicant
  • Guard against using minimum test scores unless used in conjunction with other information such as secondary school performance and unless properly validated. An exception to this guideline is that institutions may establish, based on empirical data, specific score levels that reflect desired skill competencies, such as English language proficiency
  • Regularly validate data used in the selection process to ensure their continuing relevance
  • Maintain adequate procedures for protecting the confidentiality of test scores and other admissions data
  • When introducing or revising admissions policies, allow sufficient lead time and provide adequate notice to schools and students, so that they can take the new policies into account when planning school programs and curricular offerings and preparing for admissions tests and other requirements


How prevalent are changes in school SAT Reasoning Test scores?

This table shows that most changes in mean SAT scores are not unusual. Based on schools where at least 50 college-bound seniors took the SAT, it shows the percentage of schools whose mean scores rose or fell at least 10, 20, 30, 40, or 50 points, broken out by the size of their test-taking populations (50-99, 100-299, and 300+ test-takers) and across all schools. Low-volume schools tend to have larger changes. For example, 60 percent of schools with 50-99 test-takers saw their SAT critical reading means rise or fall 10 or more points, well above the 25 percent of schools with 300 or more test-takers.
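Why do low-volume schools swing more? Even when nothing about a school changes, the mean of a small test-taking cohort fluctuates more from year to year than the mean of a large one (the standard error of a mean shrinks with the square root of the cohort size). The sketch below is a hypothetical simulation, not College Board methodology: each "school" draws two years of scores from the same distribution, so any change in its mean is pure sampling noise.

```python
import random
import statistics

random.seed(1)

def share_with_big_swing(n_testtakers, sd=100, trials=2000):
    """Fraction of simulated schools whose mean score changes by 10+
    points between two years, assuming NO real change -- only sampling
    variation from drawing n_testtakers students each year."""
    big_swings = 0
    for _ in range(trials):
        year1 = statistics.fmean(random.gauss(500, sd) for _ in range(n_testtakers))
        year2 = statistics.fmean(random.gauss(500, sd) for _ in range(n_testtakers))
        if abs(year1 - year2) >= 10:
            big_swings += 1
    return big_swings / trials

small = share_with_big_swing(75)    # a school with 50-99 test-takers
large = share_with_big_swing(400)   # a school with 300+ test-takers
print(f"Small schools with a 10+ point swing: {small:.0%}")
print(f"Large schools with a 10+ point swing: {large:.0%}")
```

The small-cohort schools show 10-point swings far more often than the large ones, even though every simulated school is identical — the same pattern the table shows across the 50-99, 100-299, and 300+ columns.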

Percentage of Schools Whose Mean SAT Reasoning Test Scores Rose or Fell in 2006-2007


  Scores rose or fell          Percent of schools, by number of test-takers
  at least this many points      50-99    100-299    300+    All schools (50+)

  Critical Reading
    10                             60%      44%       25%        48%
    20                             28%      13%        2%        17%
    30                             11%       4%        1%         6%
    40                              4%       2%        0%         2%
    50                              2%       1%        0%         1%

  Mathematics
    10                             61%      46%       32%        50%
    20                             29%      14%        5%        19%
    30                             12%       4%        0%         7%
    40                              5%       1%        0%         2%
    50                              2%       1%        0%         1%

  Writing
    10                             60%      45%       28%        49%
    20                             29%      14%        4%        18%
    30                             11%       4%        1%         6%
    40                              4%       1%        0%         2%
    50                              2%       1%        0%         1%