TY - JOUR
T1 - Optimal Reassembly of Shadow Tests in CAT
JF - Applied Psychological Measurement
Y1 - 2016
A1 - Choi, Seung W.
A1 - Moellering, Karin T.
A1 - Li, Jie
A1 - van der Linden, Wim J.
AB - Even in the age of abundant and fast computing resources, concurrency requirements for large-scale online testing programs still put an uninterrupted delivery of computer-adaptive tests at risk. In this study, to increase concurrency for operational programs that use the shadow-test approach to adaptive testing, we explored various strategies aimed at reducing the number of reassembled shadow tests without compromising measurement quality. Strategies requiring fixed intervals between reassemblies, strategies requiring a minimal change in the interim ability estimate since the last assembly before triggering a reassembly, and a hybrid of the two yielded substantial reductions in the number of reassemblies without degrading measurement accuracy. The strategies effectively prevented unnecessary reassemblies caused by adapting to noise in the early stages of the test. They also highlighted the practicality of the shadow-test approach by minimizing the computational load involved in its use of mixed-integer programming.
VL - 40
UR - http://apm.sagepub.com/content/40/7/469.abstract
ER -

TY - JOUR
T1 - Assessing Individual-Level Impact of Interruptions During Online Testing
JF - Journal of Educational Measurement
Y1 - 2015
A1 - Sinharay, Sandip
A1 - Wan, Ping
A1 - Choi, Seung W.
A1 - Kim, Dong-In
AB - With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers such as Hill and Sinharay et al. examined the impact of interruptions at an aggregate level. However, there is a lack of research on assessing the impact of interruptions at an individual level. We attempt to fill that void. We suggest four methodological approaches, primarily based on statistical hypothesis testing, linear regression, and item response theory, which can provide evidence on the individual-level impact of interruptions. We perform a realistic simulation study to compare the Type I error rates and power of the suggested approaches. We then apply the approaches to data from the 2013 Indiana Statewide Testing for Educational Progress-Plus (ISTEP+) test, which experienced interruptions.
VL - 52
UR - http://dx.doi.org/10.1111/jedm.12064
ER -

TY - JOUR
T1 - Determining the Overall Impact of Interruptions During Online Testing
JF - Journal of Educational Measurement
Y1 - 2014
A1 - Sinharay, Sandip
A1 - Wan, Ping
A1 - Whitaker, Mike
A1 - Kim, Dong-In
A1 - Zhang, Litong
A1 - Choi, Seung W.
AB - With an increase in the number of online tests, interruptions during testing due to unexpected technical issues seem unavoidable. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. There is a lack of research on this topic due to the novelty of the problem. This article is an attempt to fill that void. Several methods, primarily based on propensity score matching, linear regression, and item response theory, were suggested to determine the overall impact of the interruptions on the examinees' scores. A realistic simulation study shows that the suggested methods have satisfactory Type I error rates and power. The methods were then applied to data from the Indiana Statewide Testing for Educational Progress-Plus (ISTEP+) test that experienced interruptions in 2013. The results indicate that the interruptions did not have a significant overall impact on student scores for the ISTEP+ test.
VL - 51
UR - http://dx.doi.org/10.1111/jedm.12052
ER -

TY - JOUR
T1 - A New Stopping Rule for Computerized Adaptive Testing
JF - Educational and Psychological Measurement
Y1 - 2011
A1 - Choi, Seung W.
A1 - Grady, Matthew W.
A1 - Dodd, Barbara G.
AB - The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was compared with that of the minimum standard error stopping rule and a modified version of the minimum information stopping rule in a series of simulated adaptive tests drawn from a number of item pools. Results indicate that the PSER makes efficient use of CAT item pools, administering fewer items when predictive gains in information are small and increasing measurement precision when information is abundant.
VL - 71
UR - http://epm.sagepub.com/content/71/1/37.abstract
ER -

TY - JOUR
T1 - Comparison of CAT Item Selection Criteria for Polytomous Items
JF - Applied Psychological Measurement
Y1 - 2009
A1 - Choi, Seung W.
A1 - Swartz, Richard J.
AB - Item selection is a core component in computerized adaptive testing (CAT). Several studies have evaluated new and classical selection methods; however, the few that have applied such methods to the use of polytomous items have reported conflicting results. To clarify these discrepancies and further investigate selection method properties, six different selection methods are compared systematically. The results showed no clear benefit from more sophisticated selection criteria and showed one method previously believed to be superior—the maximum expected posterior weighted information (MEPWI)—to be mathematically equivalent to a simpler method, the maximum posterior weighted information (MPWI).
VL - 33
UR - http://apm.sagepub.com/content/33/6/419.abstract
ER -