%0 Journal Article
%J Journal of Computerized Adaptive Testing
%D 2018
%T Factors Affecting the Classification Accuracy and Average Length of a Variable-Length Cognitive Diagnostic Computerized Test
%A Huebner, Alan
%A Finkelman, Matthew D.
%A Weissman, Alexander
%B Journal of Computerized Adaptive Testing
%V 6
%P 1-14
%U http://iacat.org/jcat/index.php/jcat/article/view/55/30
%N 1
%R 10.7333/1802-060101

%0 Journal Article
%J Applied Psychological Measurement
%D 2008
%T A Monte Carlo Approach for Adaptive Testing With Content Constraints
%A Belov, Dmitry I.
%A Armstrong, Ronald D.
%A Weissman, Alexander
%X

This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the item pool, and (c) more robust ability estimates. Computer simulations with Law School Admission Test items demonstrated that the new algorithm (a) produces ability estimates similar to those of shadow CAT but with half the maximum item exposure rate and 100% pool utilization and (b) produces more robust estimates when a high- (or low-) ability examinee performs poorly (or well) at the beginning of the test.

%B Applied Psychological Measurement
%V 32
%P 431-446
%U http://apm.sagepub.com/content/32/6/431.abstract
%R 10.1177/0146621607309081

%0 Journal Article
%J Educational and Psychological Measurement
%D 2007
%T Mutual Information Item Selection in Adaptive Classification Testing
%A Weissman, Alexander
%X

A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered in using other local- and global-information measures in the multiple-category classification setting. Results from simulation studies using three item selection methods, Fisher information (FI), posterior-weighted FI (FIP), and MI, are provided for an adaptive four-category classification test. Both across and within the four classification categories, it is shown that, in general, MI item selection classifies the highest proportion of examinees correctly and yields the shortest test lengths. The next best performance is observed for FIP item selection, followed by FI.

%B Educational and Psychological Measurement
%V 67
%P 41-58
%U http://epm.sagepub.com/content/67/1/41.abstract
%R 10.1177/0013164406288164

%0 Journal Article
%J Applied Psychological Measurement
%D 2006
%T A Feedback Control Strategy for Enhancing Item Selection Efficiency in Computerized Adaptive Testing
%A Weissman, Alexander
%X

A computerized adaptive test (CAT) may be modeled as a closed-loop system, where item selection is influenced by trait level (θ) estimation and vice versa. When discrepancies exist between an examinee's estimated and true θ levels, nonoptimal item selection is a likely result. Nevertheless, examinee response behavior consistent with optimal item selection can be predicted using item response theory (IRT), without knowledge of an examinee's true θ level, yielding a specific reference point for applying an internal correcting or feedback control mechanism. Incorporating such a mechanism in a CAT is shown to be an effective strategy for increasing item selection efficiency. Results from simulation studies using maximum likelihood (ML) and modal a posteriori (MAP) trait-level estimation and Fisher information (FI) and Fisher interval information (FII) item selection are provided.

%B Applied Psychological Measurement
%V 30
%P 84-99
%U http://apm.sagepub.com/content/30/2/84.abstract
%R 10.1177/0146621605282774