The New SAT and Trends in Test Performance
This report presents a methodology for the creation of a PSAT/NMSQT test score benchmark to identify students who are on track toward college readiness when completing high school. The proposed benchmark could create useful early indicators of whether students in grades 10 and 11 are on track to be college ready upon high school graduation.
The purpose of this paper is to provide information about how students’ scores change when they retake the PSAT/NMSQT as juniors or take the SAT in the spring after they take the PSAT/NMSQT as juniors. Two research questions guided this study and motivated the approach for analysis of the data: How do scores change for students who took the PSAT/NMSQT both as a sophomore and junior or the PSAT/NMSQT and SAT as juniors? How will future students taking the PSAT/NMSQT be expected to perform when they take the SAT or retake the PSAT/NMSQT?
This report is the College Board's response to a research article by Drs. Maria Veronica Santelices and Mark Wilson in the Harvard Educational Review, entitled "Unfair Treatment? The Case of Freedle, the SAT, and the Standardization Approach to Differential Item Functioning."
This study focused on the relationship between students’ performance in AP English Language, Biology, Calculus, and U.S. History, and their subsequent college success. For each AP Exam studied, students were divided into three groups according to their AP Exam performance (no AP Exam taken, score of 1 or 2, and a score of 3 or higher). Subsequent college success was measured by students’ first-year college grade point average (FYGPA), retention to the second year, and institutional selectivity. Results indicated that, even after controlling for students’ SAT scores and high school grade point average as measures of prior academic performance, students with an AP score of 3 or higher outperformed the other two groups. Additionally, students with an AP score of 1 or 2 tended to outperform students with no AP scores except in terms of FYGPA.
In 2005, the College Board added a required writing section to the SAT, and ACT added an optional writing test to the ACT. Before 2005, ACT and the College Board had periodically produced concordance tables to assist admission officers who wanted to understand how students of comparable ability would score on the two college entrance examinations. Given the changes to both tests, the College Board and ACT are now providing updated concordance tables that are based on the current versions of the two tests.
Advanced Placement Program Summer Institute (APSI) courses provide teachers with an overview of the curriculum, structure, and content of specific AP courses. Attention is devoted not only to the development of curriculum but also to teaching strategies and the relationship of the course to the AP Examination. During the summers of 2006 and 2007, 168 institutes were held for new and experienced AP teachers in the state of Florida at nine institutions of higher education: 79 in 2006 and 89 in 2007. In the spring of 2008, evaluation researchers at the College Board developed a survey to solicit feedback on participants' impressions of the APSIs offered in Florida, as well as the changes participants made to their AP curriculum and exam preparation as a direct result of attending the institute(s).
This study compared the performance of students in the College Board Advanced Placement Program (AP) with that of non-AP students on a number of college outcome measures. Ten individual AP Exams were examined in this study of students in four entering classes (1998-2001) at the University of Texas at Austin. The study's results support previous research indicating that AP students performed as well as, if not better than, non-AP students on most college outcome measures.
A recent study by Beilock, Rydell, and McConnell (2007) suggested that stereotype threat experienced in one domain (e.g., math), triggered by knowledge of a negative stereotype about a social group in that particular domain, can spill over into subsequent tasks in totally unrelated domains (e.g., reading). The authors suggested that these findings might have implications for how the ordering of sections on standardized tests such as the SAT or GRE could affect examinee performance. To test the authors' assertions, this study used data from a recent SAT administration in which either a reading, a math, or a writing task preceded a reading task. Performance on the subsequent reading task by members of a stereotype-threatened group (i.e., women) who took the math task first was compared with the performance of those who took the reading or writing task first. Results were inconsistent with the stereotype threat spillover hypothesis and serve to justify the exhortation of Cullen, Hardison, and Sackett (2004) for caution in generalizing lab findings on stereotype threat to operational testing situations.
This report reviews and summarizes information about test accommodations currently used in different states and districts for English language learners (ELL). Similarities and differences among states regarding ELL accommodations are documented.