%0 Journal Article
%J Journal of Educational Measurement
%D 2002
%T Data sparseness and on-line pretest item calibration-scaling methods in CAT
%A Ban, J.-C.
%A Hanson, B. A.
%A Yi, Q.
%A Harris, D. J.
%K Computer Assisted Testing
%K Educational Measurement
%K Item Response Theory
%K Maximum Likelihood
%K Methodology
%K Scaling (Testing)
%K Statistical Data
%X Compared and evaluated three on-line pretest item calibration-scaling methods (the marginal maximum likelihood estimate with one expectation maximization [EM] cycle [OEM] method, the marginal maximum likelihood estimate with multiple EM cycles [MEM] method, and M. L. Stocking's Method B) in terms of item-parameter recovery when the item responses to the pretest items in the pool are sparse. Simulations of computerized adaptive tests were used to evaluate the results yielded by the three methods. The MEM method produced the smallest average total error in parameter estimation, and the OEM method yielded the largest total error.
%V 39
%P 207-218
%G eng

%0 Journal Article
%J Journal of Educational Measurement
%D 2001
%T A comparative study of on-line pretest item calibration/scaling methods in computerized adaptive testing
%A Ban, J.-C.
%A Hanson, B. A.
%A Wang, T.
%A Yi, Q.
%A Harris, D. J.
%X The purpose of this study was to compare and evaluate five on-line pretest item-calibration/scaling methods in computerized adaptive testing (CAT): marginal maximum likelihood estimate with one EM cycle (OEM), marginal maximum likelihood estimate with multiple EM cycles (MEM), Stocking's Method A, Stocking's Method B, and BILOG/Prior. The five methods were evaluated in terms of item-parameter recovery, using three different sample sizes (300, 1000, and 3000). The MEM method appeared to be the best choice among the five, because it produced the smallest parameter-estimation errors for all sample-size conditions. MEM and OEM are mathematically similar, although the OEM method produced larger errors; MEM was therefore preferable to OEM unless the amount of time involved in iterative computation is a concern. Stocking's Method B also worked very well, but it required anchor items that would either increase test length or require larger sample sizes, depending on the test administration design. Until more appropriate ways of handling sparse data are devised, the BILOG/Prior method may not be a reasonable choice for small sample sizes. Stocking's Method A had the largest weighted total error, as well as a theoretical weakness (i.e., treating estimated ability as true ability); thus, there appeared to be little reason to use it.
%V 38
%P 191-212
%G eng

%0 Conference Paper
%D 2001
%T Data sparseness and online pretest calibration/scaling methods in CAT
%A Ban, J.-C.
%A Hanson, B. A.
%A Yi, Q.
%A Harris, D. J.
%B Paper presented at the annual meeting of the American Educational Research Association
%C Seattle
%G eng

%0 Conference Paper
%D 1999
%T Adjusting "scores" from a CAT following successful item challenges
%A Wang, T.
%A Yi, Q.
%A Ban, J.-C.
%A Harris, D. J.
%A Hanson, B. A.
%B Paper presented at the annual meeting of the American Educational Research Association
%C Montreal, Canada
%G eng

%0 Conference Paper
%D 1998
%T Essentially unbiased Bayesian estimates in computerized adaptive testing
%A Wang, T.
%A Lau, C.
%A Hanson, B. A.
%B Paper presented at the annual meeting of the American Educational Research Association
%C San Diego
%G eng