%0 Journal Article %J Applied Psychological Measurement %D 2013 %T The Influence of Item Calibration Error on Variable-Length Computerized Adaptive Testing %A Patton, Jeffrey M. %A Cheng, Ying %A Yuan, Ke-Hai %A Diao, Qi %X

Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be “tailored” to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends on the item parameter values. However, items are chosen on the basis of their parameter estimates, and capitalization on chance may occur. In this article, the authors investigated the effects of capitalization on chance on test length and classification accuracy in several VL-CAT simulations. The results confirm that capitalization on chance occurs in VL-CAT and has complex effects on test length, ability estimation, and classification accuracy. These results have important implications for the design and implementation of VL-CATs.

%B Applied Psychological Measurement %V 37 %P 24-40 %U http://apm.sagepub.com/content/37/1/24.abstract %R 10.1177/0146621612461727

%0 Journal Article %J Educational and Psychological Measurement %D 2010 %T Improving Cognitive Diagnostic Computerized Adaptive Testing by Balancing Attribute Coverage: The Modified Maximum Global Discrimination Index Method %A Cheng, Ying %X

This article proposes a new item selection method, namely, the modified maximum global discrimination index (MMGDI) method, for cognitive diagnostic computerized adaptive testing (CD-CAT). The new method captures two aspects of the appeal of an item: (a) the amount of contribution it can make toward adequate coverage of every attribute and (b) the amount of contribution it can make toward recovering the latent cognitive profile. A simulation study shows that the new method ensures adequate coverage of every attribute, which improves both the validity of the test scores and the defensibility of the proposed uses of the test. Furthermore, compared with the original global discrimination index method, the MMGDI method improves the recovery rate of each attribute and of the entire cognitive profile, especially the latter. Therefore, the new method improves both the validity and reliability of the test scores from a CD-CAT program.

%B Educational and Psychological Measurement %V 70 %P 902-913 %U http://epm.sagepub.com/content/70/6/902.abstract %R 10.1177/0013164410366693

%0 Journal Article %J Educational and Psychological Measurement %D 2009 %T Constraint-Weighted a-Stratification for Computerized Adaptive Testing With Nonstatistical Constraints %A Cheng, Ying %A Chang, Hua-Hua %A Douglas, Jeffrey %A Guo, Fanmin %X

a-stratification is a method that utilizes items with small discrimination (a) parameters early in an exam and those with higher a values when more is learned about the ability parameter. It can achieve much better item usage than the maximum information criterion (MIC). To make a-stratification more practical and more widely applicable, a method for weighting the item selection process in a-stratification as a means of satisfying multiple test constraints is proposed. This method is studied in simulation against an analogous method without stratification, as well as against a-stratification using descending- rather than ascending-a procedures. In addition, a variation of a-stratification that allows for unbalanced usage of a parameters is included in the study to examine the trade-off between efficiency and exposure control. Finally, MIC and randomized item selection are included as baseline measures. Results indicate that the weighting mechanism successfully addresses the constraints, that stratification helps to a great extent in balancing exposure rates, and that the ascending-a design improves measurement precision.

%B Educational and Psychological Measurement %V 69 %P 35-49 %U http://epm.sagepub.com/content/69/1/35.abstract %R 10.1177/0013164408322030

%0 Journal Article %J Applied Psychological Measurement %D 2007 %T Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT %A Cheng, Ying %A Chang, Hua-Hua %A Yi, Qing %X

Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT (MCCAT), and others. In this article, four methods are proposed to address the flexible content-balancing issue with the a-stratification design, named STR_C. The four methods are MMM+, an extension of MMM; MCCAT+, an extension of MCCAT; the TPM method, a two-phase content-balancing method using MMM in both phases; and the TPF method, a two-phase content-balancing method using MMM in the first phase and MCCAT in the second. Simulation results show that all four methods work well in content balancing, and that TPF performs best in item exposure control and item pool utilization while maintaining measurement precision.

%B Applied Psychological Measurement %V 31 %P 467-482 %U http://apm.sagepub.com/content/31/6/467.abstract %R 10.1177/0146621606292933