Measurement

Rating Quality Studies Using Rasch Measurement Theory

Publication Information
Date: 2013-08-08
George Engelhard, Jr.
Stefanie A. Wind
Abstract: 
The major purpose of this study is to examine the quality of ratings assigned to constructed-response (CR) questions in large-scale assessments from the perspective of Rasch Measurement Theory. Rasch Measurement Theory provides a framework for the examination of rating scale category structure that can yield useful information for interpreting the meaning of ratings assigned in large-scale performance assessment contexts. This study uses data collected as part of the reader reliability studies (Miao & Odumade, 2011) based on the 2010 administration of the AP® Statistics Exam (N = 238 students, N = 156 raters). The following research questions are addressed: (1) Do the raters on the AP Statistics Exam vary in severity? (2) Do the CR questions on the AP Statistics Exam vary in difficulty? (3) Is the structure of the rating scale comparable across the CR questions?

Validating Performance Level Descriptors (PLDs) for the AP Environmental Science Exam (ITC Conference)

Publication Information
Date: 2012-07-05
Rosemary Reshetar
Pamela Kaliski
Michael Chajewski
Karen Lionberger
Barbara S. Plake
Abstract: 

This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.


AP Biology Exam: Workshop on Developing Assessments to Meet the Goals of the 2012 Framework for K-12 Science Education

Publication Information
Date: 2012-09-13
Rosemary Reshetar
Abstract: 

On September 13, 2012, the Workshop on Developing Assessments to Meet the Goals of the 2012 Framework for K-12 Science Education was held at the National Academy of Sciences. The workshop was organized and led by the NRC Committee on Developing Assessments of Science Proficiency in K-12 (co-chaired by James Pellegrino and Mark Wilson) and was aimed at state assessment leaders.

Rosemary Reshetar represented the College Board as one of three panelists in a session on large-scale assessments that measure science practices in conjunction with core ideas and crosscutting concepts, similar to those depicted in the Framework. The focus was on how these strategies can be used to measure learning as envisioned in the Framework.

The revised AP Biology Course and Exam, which went into effect this fall, were presented as an example of how the College Board used the Evidence-Centered Design process for the course and exam review. Of particular interest was the integration of content knowledge with science practices into the learning objectives of the course, and ultimately into the exam to be offered in May.


An Exploration of L1, L2, and Bilingual Students' Writing Features on the SAT Essay (NCME 2012)

Publication Information
Date: 2012-04-14
Jennifer L. Kobrin
Abstract: 

It is well documented that students' prior knowledge, cultural background, and language proficiency play a role in how they read, interpret, and respond to writing tasks (Barkaoui, 2007; Connor & Kramer, 1995; Hinkel, 2002). Essays written by students from different language backgrounds often differ in their linguistic, stylistic, and rhetorical characteristics, and these features may affect the scores that students receive. This study explored the features of essays written on the SAT by students for whom English was not their best language (L2 students) and by bilingual students who reported both English and another language as their best language, compared to students for whom English was solely their best language (L1 students). A sample of essay responses to 14 different prompts administered between October 2005 and January 2006 was coded on a variety of features, including number of words, use of first-person voice, use of a five-paragraph structure, and types of examples offered (e.g., scholarly or personal experience). Multilevel logistic regression analyses indicated that L2 students had greater odds of using first person and personal experience in their essay responses, and lower odds of taking a mixed-argument approach in responding to the essay prompt. There was substantial variability in the frequency of essay features by language group across prompts, suggesting that different prompts elicit responses with different features. The implications for test development and student instruction are discussed.


Participation and Performance for Native American Students in the 2007 SAT Cohort (2008 NASAI)

Publication Information
Date: 2008-05-20
John M. Lee
Abstract: 

Presented at the Native American Student Advocacy Institute (NASAI) in Tsaile, AZ, in May 2008. This presentation explores the participation and performance of Native American students in the SAT program and seeks ways to better support and encourage students to ensure postsecondary access and excellence.


The SAT as a State's High School NCLB Assessment: Rationale and Issues Confronted (2008 CCSSO)

Publication Information
Date: 2008-06-18
Dan Hupp
Deanna L. Morgan
Tim Crockett
Abstract: 

Presented at the National Conference on Student Assessment in Orlando, hosted by the Council of Chief State School Officers (CCSSO), June 2008. This presentation explores the use of the SAT Reasoning Test as the state of Maine's No Child Left Behind (NCLB) assessment.


The Validity of the SAT for Predicting Cumulative Grade Point Average by College Major

Publication Information
Date: 2012-08-10
Jennifer L. Kobrin
Brian F. Patterson
Krista D. Mattern
Abstract: 

The current study examined the differential validity of the SAT for predicting cumulative GPA (cGPA) through the second year of college by college major, as well as the differential prediction of cGPA by college major across student subgroups. The relationship between the SAT and cGPA varied somewhat by major, as well as by major and subgroup (e.g., gender, ethnicity, and parental education level). This variability was likely due to differences in the nature of the college coursework, grading practices, student self-selection, and academic cultures (e.g., male-dominated or highly competitive) across majors. The findings from this study may be particularly relevant to colleges and universities examining different admission criteria for acceptance to specialized colleges and major programs within an institution, and the study could thus serve as a comprehensive resource for higher education researchers examining college major and performance.


Examining the Accuracy of Self-Reported High School Grade Point Average (AERA 2010 presentation)

Publication Information
Date: 2010-04-29
Krista D. Mattern
Abstract: 

Presented at AERA in Denver, CO, in April 2010. This study examined the relationship between students' self-reported high school grade point average (HSGPA) from the SAT Questionnaire and the HSGPA provided by the colleges and universities they attend. The purpose of this research was to offer updated information on the relatedness of self-reported (by the student) and school-reported (by the college/university, from the high school transcript) HSGPA, to compare these results to prior studies, and to provide recommendations on the use of self-reported HSGPA. Results from this study indicated that even though the correlation between self-reported and school-reported HSGPA is slightly lower than in prior studies (r = 0.74), there is still a very strong relationship between the two measures.


The National SAT Validity Study: Sharing Results from Recent College Success Research (SACAC 2010 presentation)

Publication Information
Date: 2010-04-29
Elizabeth McKenzie
Abstract: 

Presented at the annual conference of the Southern Association for College Admission Counseling, April 2010. This presentation summarizes recent research from the national SAT Validity Study and includes information on the Admitted Class Evaluation Service (ACES) system and how ACES can help institutions conduct their own validity research.


A Case for Not Going SAT-Optional: Students with Discrepant SAT and HSGPA Performance (AERA 2010 presentation)

Publication Information
Date: 2010-04-29
Krista D. Mattern
Jennifer L. Kobrin
Abstract: 

Presented at the annual conference of the American Educational Research Association (AERA) in 2010. This presentation describes an alternative way of presenting the unique information provided by the SAT over HSGPA, namely examining students with discrepant SAT and HSGPA performance.

