This study examined the validity of the SAT for predicting performance in first-year English and mathematics courses. Results reveal a significant positive relationship between SAT scores and course grades, with slightly higher correlations for mathematics courses than for English courses. Correlations were estimated separately by student characteristics (gender, ethnicity, and best language), institutional characteristics (size, selectivity, and control, i.e., private or public), and course content (e.g., calculus, algebra). The findings suggest that performance on the SAT is predictive of performance in specific college courses. Furthermore, stronger relationships were found between test scores and grades when the content of the two was aligned (such as the SAT mathematics section and mathematics course grades, or the SAT writing section and English course grades).
The current study explored the validity and potential of using the SAT, in conjunction with HSGPA, to arrive at a predicted FYGPA that could be used to improve student retention at four-year postsecondary institutions. Specifically, this study examined whether college students who did not perform as expected (observed FYGPA minus predicted FYGPA) were more likely to leave their institution. Results showed that both under- and over-performing students were more likely to leave college than their academically similar peers who performed as expected. Recommendations are provided for institutions to incorporate this information into a cost-effective and efficient detection tool that identifies students who may be at risk of not completing their degrees, thereby helping to improve institutional retention rates.
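The under- and over-performance measure described above (observed FYGPA minus predicted FYGPA) amounts to a regression residual. The sketch below illustrates the idea with hypothetical data, a simple OLS model, and an arbitrary flagging threshold; none of these are the study's actual values or model.

```python
import numpy as np

# Hypothetical data: SAT composite, HSGPA, and observed FYGPA for six students.
sat = np.array([1100, 1300, 1500, 1700, 1900, 2100], dtype=float)
hsgpa = np.array([2.8, 3.0, 3.2, 3.5, 3.7, 3.9])
fygpa = np.array([2.5, 2.6, 3.1, 3.0, 3.6, 3.8])

# Fit a linear model FYGPA ~ SAT + HSGPA by ordinary least squares.
X = np.column_stack([np.ones_like(sat), sat, hsgpa])
coef, *_ = np.linalg.lstsq(X, fygpa, rcond=None)

# Performance residual: observed FYGPA minus predicted FYGPA.
residual = fygpa - X @ coef

# Flag students whose residual exceeds an arbitrary threshold in either
# direction, i.e., both under- and over-performers relative to prediction.
threshold = 0.15
flagged = np.abs(residual) > threshold
```

A negative residual marks an under-performer and a positive residual an over-performer; the study's finding is that both groups leave college at higher rates than students whose residuals are near zero.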
The academic intensity, or rigor, of students’ high school curriculum is positively related to several college outcomes, including the avoidance of remediation and graduation attainment (Adelman, 1999, 2006; Adelman, Daniel, & Berkovits, 2003). However, research on academic rigor has been limited, possibly because of the difficulty of obtaining a quantitative measure applicable across schools and districts. This study is an attempt to create an index of academic rigor from self-reported course work data that would provide information on the academic preparation of the more than one million graduating high school seniors each year. The current study uses the SAT® Questionnaire (SAT-Q), which students complete when registering for the SAT, to construct an academic rigor index (ARI). The SAT-Q asks students detailed questions about the English, math, science, social science/history, and foreign/classical language course work they completed during high school. The relationship between course participation and first-year GPA (FYGPA) was investigated using approximately 68,000 SAT takers who fully completed the SAT-Q and attended one of the 110 four-year colleges and universities participating in an SAT validity study. Based on these data, the ARI was constructed on a 0–25 scale, equally weighted across the five subject areas. Once the ARI was constructed, a series of analyses were conducted to assess the relationship between the index and other concurrent measures of high school performance (HSGPA and SAT scores), and between the index and measures of college performance (enrollment, grades, and retention). The results indicated that students who took more rigorous courses in high school attained better grades, achieved higher SAT scores, and were more likely to enroll in college. Moreover, these students were also more likely to matriculate to a four-year college, attain higher college grades, and be retained to their second year.
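A 0–25 index equally weighted across five subject areas implies that each subject contributes at most 5 points. The sketch below shows one way such an index could be computed from per-subject course work points; the point values and capping rule are hypothetical illustrations, not the actual ARI scoring rubric.

```python
# Hypothetical rubric: each of the five subject areas contributes 0-5 points,
# so the index ranges from 0 to 25 with equal weight per subject.
SUBJECTS = ("english", "math", "science", "social_science", "language")

def academic_rigor_index(subject_points):
    """Sum per-subject rigor points (each capped at 0-5) into a 0-25 index."""
    total = 0
    for subject in SUBJECTS:
        points = subject_points.get(subject, 0)
        total += max(0, min(5, points))
    return total

# Example: a student with strong math/science course work.
student = {"english": 4, "math": 5, "science": 5, "social_science": 3, "language": 2}
ari = academic_rigor_index(student)  # 19
```

Capping each subject at 5 points enforces the equal weighting: no single subject area can dominate the index.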
In the fall of 2009, the College Board conducted a curriculum survey to gather information on the curricula and institutional practices of high schools and colleges in the United States. The primary objective of the survey was to collect data on the knowledge and skills, or topics, taught in high school classrooms and to assess the importance of these topics for institutions of higher education. The College Board periodically reviews the state of K–12 and college curricula as a standard part of the SAT test development process (see, for example, Milewski, Johnsen, Glazer, & Kutota, 2005). The results of the curriculum survey will allow the College Board to validate and ensure that the topics measured on the SAT and SAT Subject Tests™ reflect what is being taught in the nation’s high schools and what college professors consider to be required for college success. Curriculum surveys were completed by more than 5,000 high school and college teachers in English language arts (ELA) and mathematics. Surveys were also distributed in biology, chemistry, physics, and history; the results of those surveys will be presented in later reports. Each survey covered the topics assessed on the SAT, SAT Subject Tests, and the College Board Standards for College Success™ and also inquired about various aspects of course curricula, including the use of assessments. This report presents the results of the English and mathematics surveys, with a focus on the alignment between the SAT and high school and college curricula. The report briefly introduces the SAT, discusses the method used to implement the survey, and concludes with a summary of the results of the ELA and mathematics surveys.
In an effort to continuously monitor the validity of the SAT for predicting first-year college grades, the College Board has continued its multiyear effort to recruit four-year colleges and universities (henceforth, “institutions”) to provide data on the cohorts of first-time, first-year students entering in the fall semesters of 2006 through 2009. Its goal in doing so is to provide clear evidence for the use of the SAT in college admission. Prior research based on the same data collection effort has demonstrated a strong, linear relationship of the SAT section scores with first-year grade point average (FYGPA) in college across a variety of institutional and student characteristics (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008; Mattern, Patterson, Shaw, Kobrin, & Barbuti, 2008; Patterson, Mattern, & Kobrin, 2009; Patterson & Mattern, 2011). This study serves as a replication of prior analyses for the most recent cohort of students: those who graduated from high school in the spring of 2009 and subsequently enrolled in a four-year college in the fall of 2009. The present study examined the extent to which four predictors commonly used in college admission were linearly related to FYGPA; in particular, SAT critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W), as well as high school grade point average (HSGPA), were considered. The three SAT sections combined and HSGPA alone correlated about equally with FYGPA (r = .54 for both). Combining all four predictors yielded the strongest linear relationship with FYGPA (r = .62), indicating that the SAT added substantially to predictions that relied solely on HSGPA. Among the three SAT sections, SAT-W tended to exhibit the strongest linear relationship with FYGPA (r = .52).
In addition, many of these patterns held true across institutional characteristics, such as control (i.e., public or private), size, and selectivity, and across student characteristics, such as gender, racial/ethnic identity, best spoken language, household income, and highest parental education level. Finally, analyses of differential prediction for the student characteristics showed that using the three SAT sections to predict FYGPA tended to result in smaller differential prediction in absolute magnitude than when using HSGPA alone. With the exception of a few student subgroups, the differential prediction of FYGPA was reduced the most when using the combination of SAT sections and HSGPA.
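The incremental validity reported above (r = .54 for HSGPA alone versus r = .62 for SAT plus HSGPA) can be illustrated by comparing multiple correlations from nested OLS models. The simulation below uses synthetic data and does not reproduce the study's coefficients; it only demonstrates that adding a second informative predictor raises the multiple correlation with the outcome.

```python
import numpy as np

def multiple_r(X, y):
    """Multiple correlation: corr(y, y_hat) from an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    y_hat = X1 @ coef
    return float(np.corrcoef(y, y_hat)[0, 1])

rng = np.random.default_rng(0)
n = 2000
# Hypothetical data: predictors and outcome share a common "ability" factor.
ability = rng.normal(size=n)
sat = ability + rng.normal(scale=1.0, size=n)
hsgpa = ability + rng.normal(scale=1.0, size=n)
fygpa = ability + rng.normal(scale=1.0, size=n)

r_hsgpa = multiple_r(hsgpa.reshape(-1, 1), fygpa)
r_both = multiple_r(np.column_stack([sat, hsgpa]), fygpa)
# Adding SAT to HSGPA raises the multiple correlation (incremental validity).
```

Because SAT and HSGPA each carry partly non-overlapping information about the outcome, the combined model's multiple correlation exceeds that of either predictor alone, which is the pattern the abstract describes.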
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT®, which consists of three sections: critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W), for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year of college. The sample included first-time, first-year students entering college in fall 2006, with 106 of the original 110 participating institutions providing data on retention to the second year. Results showed that SAT performance was related to retention, even after controlling for HSGPA. The following year, previously participating as well as new colleges and universities were invited to provide first-year performance data on the first-time, first-year students who entered college in the fall of 2007. For the 2007 sample, a total of 72 of the original 110 institutions and 38 new institutions provided data. The 110 institutions in the 2007 sample contained 216,081 students. A replication of the Mattern and Patterson (2009) study was conducted with the new cohort, and similar results were found (Mattern & Patterson, 2011). Similarly, previously participating as well as new colleges and universities were invited to provide first-year performance data on the first-time, first-year students who began in the fall of 2008. For the 2008 cohort of students, a total of 129 institutions provided data on a total of 246,652 students. Students without SAT scores, self-reported high school grade point average (HSGPA), or retention data were removed from analyses, resulting in a final sample size of 173,963 students.
Replicating the analyses of the previous two reports (Mattern & Patterson, 2009, 2011), the tables below are based on the 2008 sample, and the findings are largely the same as those of the earlier reports. Results show that SAT performance is positively related to second-year retention rates, even after controlling for student and institutional characteristics. This was also true within HSGPA bands, showing that SAT scores provide incremental value over high school grades in predicting retention. Furthermore, controlling for SAT performance reduces, and in some cases eliminates, the differences in retention rates otherwise observed between student and institutional subgroups.
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT® for use in college admission. The first sample included first-time, first-year students entering college in fall 2006, with 110 institutions providing students’ first-year course work, grades, and retention to the second year. In addition to examining the predictive validity of the SAT in terms of college grades (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008; Mattern, Patterson, Shaw, Kobrin, & Barbuti, 2008), the relationship between SAT performance and retention to the second year was examined (Mattern & Patterson, 2009). The results found that higher SAT scores were associated with higher retention rates.
In the following years, participating colleges and universities were invited to provide subsequent performance data for these students in order to track them longitudinally throughout their college career. For the second year, 66 of the original 110 institutions provided data. Mattern and Patterson (2011) examined the relationship between SAT performance and retention to the third year of college. Similar to the results for second-year retention rates, higher SAT scores were associated with higher third-year retention rates. This study builds on that body of research by examining the relationship between SAT performance and retention to the fourth year of college. The sample consisted of 59 of the original 110 institutions. Complete data (i.e., SAT scores, self-reported high school grade point average [HSGPA], and second-, third-, and fourth-year retention data) were available for 78,640 students. Results show that SAT performance was positively related to fourth-year retention rates. Detailed results are provided below. Refer to the appendix at the end of this report for a list of participating institutions.
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT®, which is used in college admission and consists of three sections: critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W). This report builds on a body of evidence that confirms that SAT scores are predictive of multiple indicators of college performance (e.g., first-year grade point average [Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008; Mattern, Patterson, Shaw, Kobrin, & Barbuti, 2008; Patterson, Mattern, & Kobrin, 2009; Patterson & Mattern, 2011]; retention to the second year [Mattern & Patterson, 2009]; second-year grades [Mattern & Patterson, 2011b]; retention to the third year [Mattern & Patterson, 2011a]; and third-year grades [Mattern & Patterson, 2011c]) by demonstrating a strong link between SAT scores and grades earned through the fourth year of college. This report presents the validity of the SAT for predicting two fourth-year college outcomes: (1) fourth-year cumulative GPA (4th Yr Cum GPA) and (2) fourth-year grade point average (4th Yr GPA). Grade point average (GPA) for a given year is defined as the average of course grades earned in just that year. Cumulative grade point average (Cum GPA) for a given year is defined as the average of course grades earned at any time from the first year through the year in question. Thus, 4th Yr GPA is the average of course grades in just the fourth year, while 4th Yr Cum GPA is the average of course grades in the first through fourth years. Similar to the results for first-, second-, and third-year outcomes, the study found that the SAT is strongly correlated with both 4th Yr Cum GPA and 4th Yr GPA for the total sample.
The correlations remain strong even when controlling for institutional characteristics (control, selectivity, size) and student characteristics (gender, race/ethnicity, best language, household income, highest parental education). Results are based on nearly 60,000 students across 55 institutions.
Presented at the 23rd Annual HBCU Conference in Atlanta, GA, in September 2011. ACES is the College Board’s free online service that predicts how admitted students will perform at a college or university generally and how successful students will be in specific courses. ACES studies provide the information needed to confirm or improve current admission and placement policies and can identify the optimum combination of measures to predict a student’s future performance at your institution. Each study presents an analysis of findings and gives you general background information to help you examine the study in greater detail. This session provided an overview of ACES, including how to design useful admission and placement validity studies for your institution, how to submit the corresponding data file(s), and how to interpret the text, graphs, and key findings from ACES admission and placement validity reports.
The current study was part of an ongoing effort at the College Board to establish college readiness benchmarks on the SAT®, PSAT/NMSQT®, and ReadiStep™, as well as to provide schools, districts, and states with a view of their students’ college readiness. College readiness benchmarks were established based on SAT performance, using a sample of approximately 68,000 students across 110 four-year institutions. The college readiness benchmark was calculated as the SAT score associated with a 65 percent probability of earning a first-year GPA of 2.67 (B-) or higher. The SAT benchmark determined in this study was 1550 for the composite. Individual benchmark scores were also calculated for the critical reading, mathematics, and writing sections to provide indicators of student proficiency in each of these subjects, resulting in a benchmark score of 500 on each section. Once the benchmark scores were obtained, a series of analyses were conducted to establish the validity of the benchmarks for indicating college readiness. These analyses examined the relationship between benchmark attainment and high school academic performance measures (curriculum, HSGPA, and AP performance), along with college indicators including enrollment, FYGPA, and retention. The results showed that, compared with students not meeting the benchmark, students meeting the benchmark were more likely to enroll in college and to return for their second and third years of college, earned higher grades in both high school and college, and were more likely to have taken a core curriculum as well as more rigorous courses in high school.
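The benchmark definition above, the SAT score at which the modeled probability of earning a 2.67 FYGPA reaches 65 percent, can be sketched by inverting a logistic regression. The coefficients below are hypothetical illustrations, not the study's fitted values.

```python
import math

# Illustrative logistic-regression coefficients (not the study's fitted values)
# for P(FYGPA >= 2.67) as a function of SAT composite score.
b0 = -7.0      # intercept (hypothetical)
b1 = 0.005     # slope per SAT composite point (hypothetical)

def p_success(sat):
    """Modeled probability of earning a B- (2.67) or higher FYGPA."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * sat)))

def benchmark(p=0.65):
    """SAT score at which the modeled probability of success reaches p."""
    return (math.log(p / (1.0 - p)) - b0) / b1

score = benchmark()  # the score where p_success(score) equals 0.65
```

Solving the logistic model for the score at a fixed probability is the inversion step: the logit of the target probability, minus the intercept, divided by the slope.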