00459nas a2200097 4500008004900000245009300049210006900142100001700211700001600228856011700244 In Press Engldsh 00aOptimizing cognitive ability measurement with multidimensional computer adaptive testing0 aOptimizing cognitive ability measurement with multidimensional c1 aMakransky, G1 aGlas, C A W uhttp://www.iacat.org/content/optimizing-cognitive-ability-measurement-multidimensional-computer-adaptive-testing00670nas a2200193 4500008004500000022001400045245007100059210006700130490000700197653002100204653002900225653002500254653003900279653001600318100001400334700002200348700002400370856008200394 2023 Engldsh a2165-659200aAn Extended Taxonomy of Variants of Computerized Adaptive Testing0 aExtended Taxonomy of Variants of Computerized Adaptive Testing0 v1010aAdaptive Testing10aevidence-centered design10aItem Response Theory10aknowledge-based model construction10amissingness1 aLevy, Roy1 aBehrens, John, T.1 aMislevy, Robert, J. uhttp://www.iacat.org/extended-taxonomy-variants-computerized-adaptive-testing01626nas a2200145 4500008003900000245008600039210006900125300001200194490000700206520117400213100001701387700001801404700001301422856004501435 2019 d00aApplication of Dimension Reduction to CAT Item Selection Under the Bifactor Model0 aApplication of Dimension Reduction to CAT Item Selection Under t a419-4340 v433 aMultidimensional computerized adaptive testing (MCAT) based on the bifactor model is suitable for tests with multidimensional bifactor measurement structures. Several item selection methods that proved to be more advantageous than the maximum Fisher information method are not practical for bifactor MCAT due to time-consuming computations resulting from high dimensionality. To make them applicable in bifactor MCAT, dimension reduction is applied to four item selection methods, which are the posterior-weighted Fisher D-optimality (PDO) and three non-Fisher information-based methods—posterior expected Kullback–Leibler information (PKL), continuous entropy (CE), and mutual information (MI). They were compared with the Bayesian D-optimality (BDO) method in terms of estimation precision. When both the general and group factors are the measurement objectives, BDO, PDO, CE, and MI perform equally well and better than PKL. When the group factors represent nuisance dimensions, MI and CE perform the best in estimating the general factor, followed by the BDO, PDO, and PKL. How the bifactor pattern and test length affect estimation accuracy was also discussed.1 aMao, Xiuzhen1 aZhang, Jiahui1 aXin, Tao uhttps://doi.org/10.1177/014662161881308601598nas a2200157 4500008003900000245012200039210006900161300001200230490000700242520104300249100002401292700001601316700002001332700002501352856006301377 2019 d00aComputerized Adaptive Testing in Early Education: Exploring the Impact of Item Position Effects on Ability Estimation0 aComputerized Adaptive Testing in Early Education Exploring the I a437-4510 v563 aAbstract Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in early education, an area of testing that has received relatively limited psychometric attention. 
In an initial study, multilevel item response models fit to data from an early literacy measure revealed statistically significant increases in difficulty for items appearing later in a 20-item form. The estimated linear change in logits for an increase of 1 in position was .024, resulting in a predicted change of .46 logits for a shift from the beginning to the end of the form. A subsequent simulation study examined impacts of item position effects on person ability estimation within computerized adaptive testing. Implications and recommendations for practice are discussed.1 aAlbano, Anthony, D.1 aCai, Liuhan1 aLease, Erin, M.1 aMcConnell, Scott, R. uhttps://onlinelibrary.wiley.com/doi/abs/10.1111/jedm.1221501445nas a2200157 4500008003900000245010600039210006900145300001200214490000700226520090200233100002301135700002501158700002601183700001501209856006301224 2019 d00aEfficiency of Targeted Multistage Calibration Designs Under Practical Constraints: A Simulation Study0 aEfficiency of Targeted Multistage Calibration Designs Under Prac a121-1460 v563 aAbstract Calibration of an item bank for computer adaptive testing requires substantial resources. In this study, we investigated whether the efficiency of calibration under the Rasch model could be enhanced by improving the match between item difficulty and student ability. We introduced targeted multistage calibration designs, a design type that considers ability-related background variables and performance for assigning students to suitable items. Furthermore, we investigated whether uncertainty about item difficulty could impair the assembling of efficient designs. The results indicated that targeted multistage calibration designs were more efficient than ordinary targeted designs under optimal conditions. Limited knowledge about item difficulty reduced the efficiency of one of the two investigated targeted multistage calibration designs, whereas targeted designs were more robust.1 aBerger, Stéphanie1 aVerschoor, Angela, J1 aEggen, Theo, J. H. M.1 aMoser, Urs uhttps://onlinelibrary.wiley.com/doi/abs/10.1111/jedm.1220301969nas a2200145 4500008004100000245008700041210006900128260005500197520137700252653003201629653001801661653002401679100001701703856010301720 2017 eng d00aAdapting Linear Models for Optimal Test Design to More Complex Test Specifications0 aAdapting Linear Models for Optimal Test Design to More Complex T aNiigata, JapanbNiigata Seiryo Universityc08/20173 a
Combinatorial optimization (CO) has proven to be a very helpful approach for addressing test assembly problems. CO has been applied to several test designs, including (1) the development of linear test forms, (2) computerized adaptive testing, and (3) multistage testing. In his seminal work, van der Linden (2006) laid out the basis for using linear models to simultaneously assemble exams and item pools under a variety of conditions: (1) for single and multiple tests; (2) with item sets, etc. However, for some testing programs, the number and complexity of test specifications can grow rapidly. Consequently, the mathematical representation of the test assembly problem goes beyond most approaches reported either in van der Linden’s book or in the majority of other publications on test assembly. In this presentation, we extend van der Linden’s framework by including the concept of blocks for test specifications. We modify the usual mathematical notation of a test assembly problem to include this concept and show how it can be applied to various test designs. Finally, we demonstrate an implementation of this approach in a stand-alone software tool called the ATASolver.
10aComplex Test Specifications10aLinear Models10aOptimal Test Design1 aMorin, Maxim uhttp://www.iacat.org/adapting-linear-models-optimal-test-design-more-complex-test-specifications-002709nas a2200157 4500008004100000245007100041210006900112260005500181520213800236653000802374653002002382653001402402100002002416700002602436856008902462 2017 eng d00aAnalysis of CAT Precision Depending on Parameters of the Item Pool0 aAnalysis of CAT Precision Depending on Parameters of the Item Po aNiigata, JapanbNiigata Seiryo Universityc08/20173 aThe purpose of this research project is to analyze the measurement precision of a latent variable depending on parameters of the item pool. The influence of the following factors is analyzed:
Factor A: the range over which item difficulties in the pool vary. This factor has three levels, with the following ranges in logits: a1 = [-3.0; +3.0], a2 = [-4.0; +4.0], a3 = [-5.0; +5.0].
Factor B: the number of items in the pool. This factor has six levels, with the following numbers of items: b1 = 128, b2 = 256, b3 = 512, b4 = 1024, b5 = 2048, b6 = 4096. The items are evenly distributed across each of the difficulty ranges.
Factor C: examinees’ proficiency, which varies over 30 levels (c1, c2, …, c30) evenly distributed in the range [-3.0; +3.0] logits.
The investigation was based on a simulation experiment within the framework of latent variable theory.
The response Y is the precision of measurement of examinees’ proficiency, calculated as the difference between examinees’ true proficiency levels and the estimates obtained through adaptive testing. A three-factor ANOVA was used for data processing.
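To make the design concrete, here is a minimal simulation sketch (an editorial illustration, not the authors' code). It assumes a Rasch model, maximum-information item selection, a crude grid-based EAP ability update, and only a subset of the Factor B levels to keep the run short; all function and variable names are hypothetical.

```python
# Minimal sketch of the described simulation, assuming a Rasch model and
# maximum-information item selection; names and settings are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
GRID = np.linspace(-4.0, 4.0, 81)              # grid for a crude EAP ability update

def simulate_cat(theta_true, pool, test_length=20):
    """Administer a fixed-length Rasch CAT; return the signed estimation error."""
    administered, responses, theta = [], [], 0.0
    for _ in range(test_length):
        available = [i for i in range(len(pool)) if i not in administered]
        item = min(available, key=lambda i: abs(pool[i] - theta))   # max information
        p = 1.0 / (1.0 + np.exp(-(theta_true - pool[item])))
        responses.append(rng.random() < p)
        administered.append(item)
        like = np.ones_like(GRID)
        for i, u in zip(administered, responses):                   # EAP update
            pg = 1.0 / (1.0 + np.exp(-(GRID - pool[i])))
            like *= pg if u else 1.0 - pg
        post = like * np.exp(-GRID**2 / 2.0)
        theta = float(np.sum(GRID * post) / np.sum(post))
    return theta - theta_true

rows = []
for a_level, half_range in zip(["a1", "a2", "a3"], [3.0, 4.0, 5.0]):    # Factor A
    for b_level, n_items in zip(["b1", "b2", "b3"], [128, 256, 512]):    # Factor B (subset)
        pool = np.linspace(-half_range, half_range, n_items)             # evenly spaced items
        for c_idx, theta_true in enumerate(np.linspace(-3.0, 3.0, 30)):  # Factor C
            for _ in range(5):                                           # replications per cell
                rows.append({"A": a_level, "B": b_level, "C": f"c{c_idx + 1}",
                             "y": simulate_cat(theta_true, pool)})

df = pd.DataFrame(rows)
model = ols("y ~ C(A) * C(B) * C(C)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                                   # three-factor ANOVA
```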
The following results were obtained:
1. Factor A is significant. Ceteris paribus, the wider the range of item difficulty in the pool, the higher the estimation precision.
2. Factor B is significant. Ceteris paribus, the larger the number of items in the pool, the higher the estimation precision.
3. Factor C is not statistically significant at the α = .05 level. This means that the precision of proficiency estimation is the same across the range of examinee proficiency.
4. The only significant interaction is A×B. It reflects the fact that increasing the number of items in the pool reduces the effect of the range of item difficulty in the pool.
10aCAT10aItem parameters10aPrecision1 aMaslak, Anatoly1 aPozdniakov, Stanislav uhttps://drive.google.com/file/d/1Bwe58kOQRgCSbB8x6OdZTDK4OIm3LQI3/view?usp=drive_web05044nas a2200145 4500008004100000245008900041210006900130260005500199520447200254653000804726653002904734100001804763700001504781856010204796 2017 eng d00aComparison of Pretest Item Calibration Methods in a Computerized Adaptive Test (CAT)0 aComparison of Pretest Item Calibration Methods in a Computerized aNiigata, JapanbNiigata Seiryo Universityc08/20173 aCalibration methods for pretest items in a computerized adaptive test (CAT) are not a new area of research inquiry. After decades of research on CAT, the fixed item parameter calibration (FIPC) method has been widely accepted and used by practitioners to address two CAT calibration issues: (a) a restricted ability range each item is exposed to, and (b) a sparse response data matrix. In FIPC, the parameters of the operational items are fixed at their original values, and multiple expectation maximization (EM) cycles are used to estimate parameters of the pretest items with prior ability distribution being updated multiple times (Ban, Hanson, Wang, Yi, & Harris, 2001; Kang & Peterson, 2009; Pommerich & Segall, 2003).
Another calibration method is the fixed person parameter calibration (FPPC) method proposed by Stocking (1988) as “Method A.” Under this approach, candidates’ ability estimates are fixed in the calibration of pretest items, and they define the scale on which the parameter estimates are reported. The logic of FPPC is suitable for CAT applications because the person parameters are estimated from the operational items and are readily available for pretest item calibration. Stocking (1988) evaluated the FPPC using the LOGIST computer program developed by Wood, Wingersky, and Lord (1976) and reported that “Method A” produced larger root mean square errors (RMSEs) in the middle ability range than “Method B,” which required the use of anchor items (administered non-adaptively) and linking steps to attempt to correct for the potential scale drift due to the use of imperfect ability estimates.
Since then, new commercial software tools such as BILOG-MG and flexMIRT (Cai, 2013) have been developed to handle the FPPC method with different implementations (e.g., the MH-RM algorithm with flexMIRT). The performance of the FPPC method with those new software tools, however, has rarely been researched in the literature.
In our study, we evaluated the performance of the two pretest item calibration methods using flexMIRT, the new software tool. The FIPC and FPPC methods are compared under various CAT settings. Each simulated exam contains 75% operational items and 25% pretest items, and real item parameters are used to generate the CAT data. The study also addresses the lack of guidelines in the existing CAT item calibration literature regarding population ability shift and exam length (more accurate theta estimates are expected in longer exams). It therefore investigates four factors and their impact on parameter estimation accuracy: (1) candidate population changes (3 ability distributions); (2) exam length (20: 15 OP + 5 PT; 40: 30 OP + 10 PT; and 60: 45 OP + 15 PT); (3) data-model fit (3PL and 3PL with a fixed c parameter); and (4) pretest item calibration sample size (300, 500, and 1000). The findings will fill a gap in this area of research and provide new information on which practitioners can base their decisions when selecting a pretest calibration method for their exams.
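As a rough illustration of the FPPC logic described above (not the authors' flexMIRT setup; the function names are hypothetical), the following sketch holds the examinees' ability estimates fixed and fits a single pretest item's 2PL parameters by maximum likelihood:

```python
# Hedged sketch of fixed person parameter calibration (FPPC): thetas from the
# operational CAT are treated as known, and one pretest item's 2PL parameters
# (discrimination a, difficulty b) are estimated by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def calibrate_pretest_item(theta_fixed, responses, start=(1.0, 0.0)):
    theta_fixed = np.asarray(theta_fixed, dtype=float)
    responses = np.asarray(responses, dtype=float)

    def neg_log_lik(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-a * (theta_fixed - b)))
        p = np.clip(p, 1e-6, 1.0 - 1e-6)
        return -np.sum(responses * np.log(p) + (1.0 - responses) * np.log(1.0 - p))

    return minimize(neg_log_lik, start, method="Nelder-Mead").x  # (a_hat, b_hat)

# Toy check with simulated data: 500 examinees, true a = 1.2, b = 0.4.
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
p_true = 1.0 / (1.0 + np.exp(-1.2 * (theta - 0.4)))
u = (rng.random(500) < p_true).astype(int)
print(calibrate_pretest_item(theta, u))
```

The FIPC alternative described earlier would instead fix the operational item parameters and re-estimate the pretest item parameters within EM cycles while updating the ability prior.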
References
Ban, J. C., Hanson, B. A., Wang, T., Yi, Q., & Harris, D. J. (2001). A comparative study of online pretest item calibration/scaling methods in computerized adaptive testing. Journal of Educational Measurement, 38(3), 191–212.
Cai, L. (2013). flexMIRT® Flexible Multilevel Multidimensional Item Analysis and Test Scoring (Version 2) [Computer software]. Chapel Hill, NC: Vector Psychometric Group.
Kang, T., & Petersen, N. S. (2009). Linking item parameters to a base scale (Research Report No. 2009-2). Iowa City, IA: ACT.
Pommerich, M., & Segall, D.O. (2003, April). Calibrating CAT pools and online pretest items using marginal maximum likelihood methods. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
Stocking, M. L. (1988). Scale drift in online calibration (Research Report No. 88–28). Princeton, NJ: Educational Testing Service.
Wood, R. L., Wingersky, M. S., & Lord, F. M. (1976). LOGIST: A computer program for estimating examinee ability and item characteristic curve parameters (RM76-6) [Computer program]. Princeton, NJ: Educational Testing Service.
10aCAT10aPretest Item Calibration1 aMeng, Huijuan1 aHan, Chris uhttp://www.iacat.org/comparison-pretest-item-calibration-methods-computerized-adaptive-test-cat-003162nas a2200181 4500008004100000245010600041210006900147260005500216520249300271653000802764653001502772653002702787100002202814700002602836700001602862700001502878856008702893 2017 eng d00aEfficiency of Targeted Multistage Calibration Designs under Practical Constraints: A Simulation Study0 aEfficiency of Targeted Multistage Calibration Designs under Prac aNiigata, JapanbNiigata Seiryo Universityc08/20173 aCalibration of an item bank for computer adaptive testing requires substantial resources. In this study, we focused on two related research questions. First, we investigated whether the efficiency of item calibration under the Rasch model could be enhanced by calibration designs that optimize the match between item difficulty and student ability (Berger, 1991). Therefore, we introduced targeted multistage calibration designs, a design type that refers to a combination of traditional targeted calibration designs and multistage designs. As such, targeted multistage calibration designs consider ability-related background variables (e.g., grade in school), as well as performance (i.e., outcome of a preceding test stage) for assigning students to suitable items.
Second, we explored how limited a priori knowledge about item difficulty affects the efficiency of both targeted calibration designs and targeted multistage calibration designs. When arranging items within a given calibration design, test developers need to know the item difficulties to place items optimally within the design. Usually, however, no empirical information about item difficulty is available before calibration, so test developers may fail to assign every item to its most suitable location within the design.
Both research questions were addressed in a simulation study in which we varied the calibration design as well as the accuracy of item distribution across the different booklets or modules within each design (i.e., the number of misplaced items). The results indicated that targeted multistage calibration designs were more efficient than ordinary targeted designs under optimal conditions. In particular, targeted multistage calibration designs provided more accurate estimates for very easy and very difficult items. Limited knowledge about item difficulty during test construction impaired the efficiency of all designs. The loss of efficiency was considerable for one of the two investigated targeted multistage calibration designs, whereas targeted designs were more robust.
References
Berger, M. P. F. (1991). On the efficiency of IRT models when applied to different sampling designs. Applied Psychological Measurement, 15(3), 293–306. doi:10.1177/014662169101500310
10aCAT10aEfficiency10aMultistage Calibration1 aBerger, Stephanie1 aVerschoor, Angela, J.1 aEggen, Theo1 aMoser, Urs uhttps://drive.google.com/file/d/1ko2LuiARKqsjL_6aupO4Pj9zgk6p_xhd/view?usp=sharing01269nas a2200301 4500008003900000022001400039245021000053210006900263260000800332300001600340490000700356520033300363100001600696700001300712700001400725700001900739700001600758700001500774700001500789700001500804700002000819700001400839700001400853700001800867700001200885700002400897856004600921 2017 d a1573-264900aThe validation of a computer-adaptive test (CAT) for assessing health-related quality of life in children and adolescents in a clinical sample: study design, methods and first results of the Kids-CAT study0 avalidation of a computeradaptive test CAT for assessing healthre cMay a1105–11170 v263 aRecently, we developed a computer-adaptive test (CAT) for assessing health-related quality of life (HRQoL) in children and adolescents: the Kids-CAT. It measures five generic HRQoL dimensions. The aims of this article were (1) to present the study design and (2) to investigate its psychometric properties in a clinical setting.1 aBarthel, D.1 aOtto, C.1 aNolte, S.1 aMeyrose, A.-K.1 aFischer, F.1 aDevine, J.1 aWalter, O.1 aMierke, A.1 aFischer, K., I.1 aThyen, U.1 aKlein, M.1 aAnkermann, T.1 aRose, M1 aRavens-Sieberer, U. uhttps://doi.org/10.1007/s11136-016-1437-901483nas a2200157 4500008003900000245004600039210004600085300001200131490000700143520104600150100001901196700002601215700001201241700001901253856005301272 2016 d00aOptimal Reassembly of Shadow Tests in CAT0 aOptimal Reassembly of Shadow Tests in CAT a469-4850 v403 aEven in the age of abundant and fast computing resources, concurrency requirements for large-scale online testing programs still put an uninterrupted delivery of computer-adaptive tests at risk. In this study, to increase the concurrency for operational programs that use the shadow-test approach to adaptive testing, we explored various strategies aiming for reducing the number of reassembled shadow tests without compromising the measurement quality. Strategies requiring fixed intervals between reassemblies, a certain minimal change in the interim ability estimate since the last assembly before triggering a reassembly, and a hybrid of the two strategies yielded substantial reductions in the number of reassemblies without degradation in the measurement accuracy. The strategies effectively prevented unnecessary reassemblies due to adapting to the noise in the early test stages. They also highlighted the practicality of the shadow-test approach by minimizing the computational load involved in its use of mixed-integer programming.1 aChoi, Seung, W1 aMoellering, Karin, T.1 aLi, Jie1 aLinden, Wim, J uhttp://apm.sagepub.com/content/40/7/469.abstract01505nas a2200157 4500008003900000022001400039245008900053210006900142300001200211490000700223520101800230100001701248700001501265700002601280856004101306 2015 d a1745-398400aA Comparison of IRT Proficiency Estimation Methods Under Adaptive Multistage Testing0 aComparison of IRT Proficiency Estimation Methods Under Adaptive a70–790 v523 aThis inquiry is an investigation of item response theory (IRT) proficiency estimators’ accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). 
We assembled various two-stage MST panels (i.e., forms) by manipulating two assembly conditions in each module, such as difficulty level and module length. For each panel, we investigated the accuracy of examinees’ proficiency levels derived from seven IRT proficiency estimators. The choice of Bayesian (prior) versus non-Bayesian (no prior) estimators was of more practical significance than the choice of number-correct versus item-pattern scoring estimators. The Bayesian estimators were slightly more efficient than the non-Bayesian estimators, resulting in smaller overall error. Possible score changes caused by the use of different proficiency estimators would be nonnegligible, particularly for low- and high-performing examinees.1 aKim, Sooyeon1 aMoses, Tim1 aYoo, Hanwook, (Henry) uhttp://dx.doi.org/10.1111/jedm.1206300572nas a2200133 4500008004500000022001400045245013400059210006900193300001200262490000700274100001700281700001600298856012400314 2013 Engldsh a1530-505800aThe applicability of multidimensional computerized adaptive testing to cognitive ability measurement in organizational assessment0 aapplicability of multidimensional computerized adaptive testing a123-1390 v131 aMakransky, G1 aGlas, C A W uhttp://www.iacat.org/content/applicability-multidimensional-computerized-adaptive-testing-cognitive-ability-measurement00493nas a2200121 4500008003900000245013500039210006900174300001200243490000700255100002100262700002000283856006800303 2013 d00aThe Applicability of Multidimensional Computerized Adaptive Testing for Cognitive Ability Measurement in Organizational Assessment0 aApplicability of Multidimensional Computerized Adaptive Testing a123-1390 v131 aMakransky, Guido1 aGlas, Cees, A W uhttp://www.tandfonline.com/doi/abs/10.1080/15305058.2012.67235201583nas a2200133 4500008003900000245012700039210006900166300001200235490000700247520111200254100001701366700001301383856005301396 2013 d00aThe Application of the Monte Carlo Approach to Cognitive Diagnostic Computerized Adaptive Testing With Content Constraints0 aApplication of the Monte Carlo Approach to Cognitive Diagnostic a482-4960 v373 aThe Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global discrimination index (MMGDI) method on simulations in which the only content constraint is on the number of items that measure each attribute. The results of the two simulation experiments show that (a) the Monte Carlo method fulfills all the test requirements and produces satisfactory measurement precision and item exposure results and (b) the Monte Carlo method outperforms the MMGDI method when the Monte Carlo method applies either the posterior-weighted Kullback–Leibler algorithm or the hybrid Kullback–Leibler information as the item selection index. Overall, the recovery rate of the knowledge states, the distribution of the item exposure, and the utilization rate of the item bank are improved when the Monte Carlo method is used.
1 aMao, Xiuzhen1 aXin, Tao uhttp://apm.sagepub.com/content/37/6/482.abstract01575nas a2200145 4500008003900000245008500039210006900124300001200193490000700205520109100212100002501303700002701328700002101355856005301376 2013 d00aUncertainties in the Item Parameter Estimates and Robust Automated Test Assembly0 aUncertainties in the Item Parameter Estimates and Robust Automat a123-1390 v373 aItem response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values, and uncertainty is not taken into account. As a consequence, resulting tests might be off target or less informative than expected. In this article, the process of parameter estimation is described to provide insight into the causes of uncertainty in the item parameters. The consequences of uncertainty are studied. Besides, an alternative automated test assembly algorithm is presented that is robust against uncertainties in the data. Several numerical examples demonstrate the performance of the robust test assembly algorithm, and illustrate the consequences of not taking this uncertainty into account. Finally, some recommendations about the use of robust test assembly and some directions for further research are given.
1 aVeldkamp, Bernard, P1 aMatteucci, Mariagiulia1 aJong, Martijn, G uhttp://apm.sagepub.com/content/37/2/123.abstract01744nas a2200145 4500008003900000245009400039210006900133300001200202490000700214520125900221100001901480700002501499700002101524856005301545 2012 d00aBalancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing0 aBalancing Flexible Constraints and Measurement Precision in Comp a629-6480 v723 aManaging test specifications—both multiple nonstatistical constraints and flexibly defined constraints—has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT, and the weighted penalty model in balancing multiple flexible constraints and maximizing measurement precision in a fixed-length CAT. The study also addressed the effect of two different test lengths—25 items and 50 items—and of including or excluding the randomesque item exposure control procedure with the three methods, all of which were found effective in selecting items that met flexible test constraints when used in the item selection process for longer tests. When the randomesque method was included to control for item exposure, the weighted penalty model and the flexible modified constrained CAT models performed better than did the constrained CAT procedure in maintaining measurement precision. When no item exposure control method was used in the item selection process, no practical difference was found in the measurement precision of each balancing method.
1 aMoyer, Eric, L1 aGalindo, Jennifer, L1 aDodd, Barbara, G uhttp://epm.sagepub.com/content/72/4/629.abstract00479nas a2200109 4500008004100000245007800041210006900119260005000188490001000238100001700248856010400265 2012 eng d00aComputerized adaptive testing in industrial and organizational psychology0 aComputerized adaptive testing in industrial and organizational p aTwente, The NetherlandsbUniversity of Twente0 vPh.D.1 aMakransky, G uhttp://www.iacat.org/content/computerized-adaptive-testing-industrial-and-organizational-psychology01496nas a2200157 4500008003900000022001400039245006400053210006400117300001400181490000700195520101300202100002401215700002001239700002401259856005501283 2012 d a1745-398400aDetecting Local Item Dependence in Polytomous Adaptive Data0 aDetecting Local Item Dependence in Polytomous Adaptive Data a127–1470 v493 aA rapidly expanding arena for item response theory (IRT) is in attitudinal and health-outcomes survey applications, often with polytomous items. In particular, there is interest in computer adaptive testing (CAT). Meeting model assumptions is necessary to realize the benefits of IRT in this setting, however. Although initial investigations of local item dependence have been studied both for polytomous items in fixed-form settings and for dichotomous items in CAT settings, there have been no publications applying local item dependence detection methodology to polytomous items in CAT despite its central importance to these applications. The current research uses a simulation study to investigate the extension of widely used pairwise statistics, Yen's Q3 Statistic and Pearson's Statistic X2, in this context. The simulation design and results are contextualized throughout with a real item bank of this type from the Patient-Reported Outcomes Measurement Information System (PROMIS).
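For readers unfamiliar with the pairwise statistics named in the preceding abstract, here is a minimal, hypothetical sketch of Yen's Q3 for two dichotomous Rasch items (the article's setting is polytomous and adaptive, so this shows only the basic idea): Q3 is the correlation of the two items' IRT residuals across examinees.

```python
# Minimal illustration of Yen's Q3 (dichotomous Rasch case): correlate the
# residuals (observed minus model-expected responses) of two items.
import numpy as np

def rasch_prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def q3(resp_i, resp_j, theta, b_i, b_j):
    d_i = resp_i - rasch_prob(theta, b_i)   # residuals for item i
    d_j = resp_j - rasch_prob(theta, b_j)   # residuals for item j
    return np.corrcoef(d_i, d_j)[0, 1]

# Toy data: two locally independent items answered by 1,000 simulated examinees.
rng = np.random.default_rng(0)
theta = rng.normal(size=1000)
u_i = (rng.random(1000) < rasch_prob(theta, -0.5)).astype(float)
u_j = (rng.random(1000) < rasch_prob(theta, 0.7)).astype(float)
print(q3(u_i, u_j, theta, -0.5, 0.7))       # near zero when local independence holds
```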
1 aMislevy, Jessica, L1 aRupp, André, A1 aHarring, Jeffrey, R uhttp://dx.doi.org/10.1111/j.1745-3984.2012.00165.x00541nas a2200181 4500008004500000245006300045210006300108300001400171490000700185100002300192700002000215700002200235700001700257700001600274700001800290700002100308856003000329 2012 Engldsh 00aDevelopment of a computerized adaptive test for depression0 aDevelopment of a computerized adaptive test for depression a1105-11120 v691 aGibbons, Robert, D1 aWeiss, David, J1 aPilkonis, Paul, A1 aFrank, Ellen1 aMoore, Tara1 aKim, Jong Bae1 aKupfer, David, J uWWW.ARCHGENPSYCHIATRY.COM00517nas a2200109 4500008004500000245012600045210006900171100001700240700001900257700001600276856011500292 2012 Engldsh 00aImproving personality facet scores with multidimensional computerized adaptive testing: An illustration with the NEO PI-R0 aImproving personality facet scores with multidimensional compute1 aMakransky, G1 aMortensen, E L1 aGlas, C A W uhttp://www.iacat.org/content/improving-personality-facet-scores-multidimensional-computerized-adaptive-testing01899nas a2200157 4500008003900000245009300039210007100132300001200203490000700215520139800222100001401620700002101634700001601655700001701671856005301688 2012 d00aA Mixture Rasch Model–Based Computerized Adaptive Test for Latent Class Identification0 aMixture Rasch Model–Based Computerized Adaptive Test for Latent a469-4930 v363 aThis study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback–Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was large, all item selection methods did not differ evidently in terms of accuracy in classifying examinees into different latent classes and estimating latent ability. However, when item separation was small, two methods with class-specific ability estimates performed better than the other two methods based on a single latent ability estimate across all latent classes. The three types of KL information distributions were compared. The KL and the reversed KL information could be the same or different depending on the ability level and the item difficulty difference between latent classes. Although the KL information and the reversed KL information were different at some ability levels and item difficulty difference levels, the use of the KL, the reversed KL, or the adaptive KL information did not affect the results substantially due to the symmetric distribution of item difficulty differences between latent classes in the simulated item pools. Item pool usage and classification convergence points were examined as well.
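As a hedged sketch of the kind of KL-based index described in the preceding abstract (one simple two-class variant, not the authors' exact formulation; all names are hypothetical), an item can be scored by the divergence between the response distributions it implies under the two latent classes, evaluated at the current class-specific ability estimates:

```python
# One simple KL-based item selection index for a two-class mixture Rasch CAT:
# score each unused item by the divergence between its predicted response
# distributions under class 1 and class 2, then administer the maximizer.
import numpy as np

def rasch_prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def kl_index(theta1, theta2, b1, b2):
    """KL divergence of the item response distribution, class 1 vs. class 2.
    b1, b2: class-specific item difficulties (arrays over the item pool)."""
    p1, p2 = rasch_prob(theta1, b1), rasch_prob(theta2, b2)
    return p1 * np.log(p1 / p2) + (1.0 - p1) * np.log((1.0 - p1) / (1.0 - p2))

def select_item(theta1_hat, theta2_hat, b_class1, b_class2, used):
    index = kl_index(theta1_hat, theta2_hat, b_class1, b_class2)
    index[list(used)] = -np.inf                   # exclude administered items
    return int(np.argmax(index))

# Toy pool: 100 items whose difficulties differ between the two classes.
rng = np.random.default_rng(0)
b1 = rng.normal(0.0, 1.0, 100)
b2 = b1 + rng.normal(0.0, 0.5, 100)
print(select_item(0.2, -0.1, b1, b2, used={3, 42}))
```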
1 aHong Jiao1 aMacready, George1 aLiu, Junhui1 aCho, Youngmi uhttp://apm.sagepub.com/content/36/6/469.abstract01165nas a2200157 4500008004100000245005700041210005600098520065200154653002100806653003400827653001500861653002500876100001300901700001500914856007800929 2011 eng d00acatR: An R Package for Computerized Adaptive Testing0 acatR An R Package for Computerized Adaptive Testing3 aComputerized adaptive testing (CAT) is an active current research field in psychometrics and educational measurement. However, there is very little software available to handle such adaptive tasks. The R package catR was developed to perform adaptive testing with as much flexibility as possible, in an attempt to provide a developmental and testing platform to the interested user. Several item-selection rules and ability estimators are implemented. The item bank can be provided by the user or randomly generated from parent distributions of item parameters. Three stopping rules are available. The output can be graphically displayed.
10acomputer program10acomputerized adaptive testing10aEstimation10aItem Response Theory1 aMagis, D1 aRaîche, G uhttp://www.iacat.org/content/catr-r-package-computerized-adaptive-testing01695nas a2200217 4500008004100000020004600041245012100087210006900208250001500277300001100292490000700303520094700310100001701257700001301274700001601287700001001303700001201313700001601325700001801341856011801359 2011 eng d a1541-3144 (Electronic)0194-2638 (Linking)00aContent range and precision of a computer adaptive test of upper extremity function for children with cerebral palsy0 aContent range and precision of a computer adaptive test of upper a2010/10/15 a90-1020 v313 aThis article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized measures: Pediatric Outcomes Data Collection Instrument and Functional Independence Measure for Children. The UE CAT correlated strongly with the upper extremity component of these measures and had greater precision when describing individual functional ability. The UE item bank has wider range with items populating the lower end of the ability spectrum. This new UE item bank and CAT have the capability to quickly assess children of all ages and abilities with good precision and, most importantly, with items that are meaningful and appropriate for their age and level of physical function.1 aMontpetit, K1 aHaley, S1 aBilodeau, N1 aNi, P1 aTian, F1 aGorton, 3rd1 aMulcahey, M J uhttp://www.iacat.org/content/content-range-and-precision-computer-adaptive-test-upper-extremity-function-children00688nas a2200217 4500008004100000245006000041210005800101260001200159653001700171653001700188653000800205653001500213653002500228653003200253653001700285100001800302700002100320700001600341700002400357856008900381 2011 eng d00aPractitioner’s Approach to Identify Item Drift in CAT0 aPractitioner s Approach to Identify Item Drift in CAT c10/201110aCUSUM method10aG2 statistic10aIPA10aitem drift10aitem parameter drift10aLord's chi-square statistic10aRaju's NCDIF1 aMeng, Huijuan1 aSteinkamp, Susan1 aJones, Paul1 aMatthews-Lopez, Joy uhttp://www.iacat.org/content/practitioner%E2%80%99s-approach-identify-item-drift-cat00476nas a2200121 4500008004500000245008000045210006900125300001200194490000700206100001700213700001600230856010800246 2011 Engldsh 00aUnproctored Internet test verification: Using adaptive confirmation testing0 aUnproctored Internet test verification Using adaptive confirmati a608-6300 v141 aMakransky, G1 aGlas, C A W uhttp://www.iacat.org/content/unproctored-internet-test-verification-using-adaptive-confirmation-testing00406nas a2200109 4500008004500000245006300045210006000108490000700168100001700175700001600192856008800208 2010 Engldsh 00aAn automatic online calibration design in adaptive testing0 aautomatic online calibration design in adaptive testing0 v111 aMakransky, G1 aGlas, C A W uhttp://www.iacat.org/content/automatic-online-calibration-design-adaptive-testing-001176nas a2200145 4500008003900000245005900039210005700098300001200155490000700167520073700174100002200911700002100933700002300954856005300977 2010 d00aA Comparison of Item Selection Techniques for Testlets0 aComparison of Item Selection Techniques for Testlets a424-4370 v343 aThis study examined the performance of the maximum Fisher’s 
information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the item selection techniques under varying conditions of local-item dependency when the response model was either the three-parameter-logistic item response theory or the three-parameter-logistic testlet response theory. The item selection techniques performed similarly within any particular condition, the practical implications of which are discussed within the article.
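To make the baseline criterion concrete, here is a small hypothetical sketch (not code from the article) of maximum Fisher information item selection under the 3PL model; a testlet-based variant would sum the information over the items within each candidate testlet.

```python
# Maximum Fisher information item selection under the 3PL model: at the current
# theta estimate, administer the unused item with the largest information.
import numpy as np

def p_3pl(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Item information for the 3PL model at ability theta."""
    p = p_3pl(theta, a, b, c)
    return a**2 * ((1.0 - p) / p) * ((p - c) / (1.0 - c))**2

def select_item(theta_hat, a, b, c, used):
    info = info_3pl(theta_hat, a, b, c)
    info[list(used)] = -np.inf                    # skip administered items
    return int(np.argmax(info))

# Toy pool of 200 items.
rng = np.random.default_rng(0)
a = rng.uniform(0.7, 2.0, 200)
b = rng.normal(0.0, 1.0, 200)
c = rng.uniform(0.05, 0.25, 200)
print(select_item(0.3, a, b, c, used={5, 17}))
```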
1 aMurphy, Daniel, L1 aDodd, Barbara, G1 aVaughn, Brandon, K uhttp://apm.sagepub.com/content/34/6/424.abstract00474nas a2200121 4500008004100000245008000041210006900121300001200190100001700202700001800219700001300237856010200250 2010 eng d00aDesigning and Implementing a Multistage Adaptive Test: The Uniform CPA Exam0 aDesigning and Implementing a Multistage Adaptive Test The Unifor a167-1901 aMelican, G J1 aBreithaupt, K1 aZhang, Y uhttp://www.iacat.org/content/designing-and-implementing-multistage-adaptive-test-uniform-cpa-exam00376nas a2200109 4500008004100000245004800041210004800089300001200137100001600149700002700165856007400192 2010 eng d00aDetecting Person Misfit in Adaptive Testing0 aDetecting Person Misfit in Adaptive Testing a315-3291 aMeijer, R R1 aKrimpen-Stoop, E M L A uhttp://www.iacat.org/content/detecting-person-misfit-adaptive-testing01349nas a2200229 4500008004100000020001300041245011700054210006900171300001200240490000700252520057500259653000800834653003400842653001900876653001500895100002000910700001600930700001800946700001500964700001400979856012600993 2010 eng d a0191886900aDetection of aberrant item score patterns in computerized adaptive testing: An empirical example using the CUSUM0 aDetection of aberrant item score patterns in computerized adapti a921-9250 v483 aThe scalability of individual trait scores on a computerized adaptive test (CAT) was assessed through investigating the consistency of individual item score patterns. A sample of N = 428 persons completed a personality CAT as part of a career development procedure. To detect inconsistent item score patterns, we used a cumulative sum (CUSUM) procedure. Combined information from the CUSUM, other personality measures, and interviews showed that similar estimated trait values may have a different interpretation.Implications for computer-based assessment are discussed.10aCAT10acomputerized adaptive testing10aCUSUM approach10aperson Fit1 aEgberink, I J L1 aMeijer, R R1 aVeldkamp, B P1 aSchakel, L1 aSmid, N G uhttp://www.iacat.org/content/detection-aberrant-item-score-patterns-computerized-adaptive-testing-empirical-example-using03103nas a2200445 4500008004100000020004100041245012000082210006900202250001500271260001000286300001100296490000700307520175400314653003802068653002102106653001002127653000902137653002202146653002802168653003302196653001102229653001102240653000902251653001602260653001802276653001902294653003102313653003102344653001602375100001602391700001002407700001402417700001502431700001402446700001502460700001802475700002402493700001802517856012202535 2010 eng d a0161-8105 (Print)0161-8105 (Linking)00aDevelopment and validation of patient-reported outcome measures for sleep disturbance and sleep-related impairments0 aDevelopment and validation of patientreported outcome measures f a2010/06/17 cJun 1 a781-920 v333 aSTUDY OBJECTIVES: To develop an archive of self-report questions assessing sleep disturbance and sleep-related impairments (SRI), to develop item banks from this archive, and to validate and calibrate the item banks using classic validation techniques and item response theory analyses in a sample of clinical and community participants. DESIGN: Cross-sectional self-report study. SETTING: Academic medical center and participant homes. PARTICIPANTS: One thousand nine hundred ninety-three adults recruited from an Internet polling sample and 259 adults recruited from medical, psychiatric, and sleep clinics. INTERVENTIONS: None. 
MEASUREMENTS AND RESULTS: This study was part of PROMIS (Patient-Reported Outcomes Information System), a National Institutes of Health Roadmap initiative. Self-report item banks were developed through an iterative process of literature searches, collecting and sorting items, expert content review, qualitative patient research, and pilot testing. Internal consistency, convergent validity, and exploratory and confirmatory factor analysis were examined in the resulting item banks. Factor analyses identified 2 preliminary item banks, sleep disturbance and SRI. Item response theory analyses and expert content review narrowed the item banks to 27 and 16 items, respectively. Validity of the item banks was supported by moderate to high correlations with existing scales and by significant differences in sleep disturbance and SRI scores between participants with and without sleep disorders. CONCLUSIONS: The PROMIS sleep disturbance and SRI item banks have excellent measurement properties and may prove to be useful for assessing general aspects of sleep and SRI with various groups of patients and interventions.10a*Outcome Assessment (Health Care)10a*Self Disclosure10aAdult10aAged10aAged, 80 and over10aCross-Sectional Studies10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMiddle Aged10aPsychometrics10aQuestionnaires10aReproducibility of Results10aSleep Disorders/*diagnosis10aYoung Adult1 aBuysse, D J1 aYu, L1 aMoul, D E1 aGermain, A1 aStover, A1 aDodds, N E1 aJohnston, K L1 aShablesky-Cade, M A1 aPilkonis, P A uhttp://www.iacat.org/content/development-and-validation-patient-reported-outcome-measures-sleep-disturbance-and-sleep00478nas a2200109 4500008004100000245008900041210007100130300001100201100001400212700002300226856011900249 2010 eng d00aMultidimensional Adaptive Testing with Kullback–Leibler Information Item Selection0 aMultidimensional Adaptive Testing with Kullback–Leibler Informat a77-1021 aMulder, J1 avan der Linden, WJ uhttp://www.iacat.org/content/multidimensional-adaptive-testing-kullback%E2%80%93leibler-information-item-selection02757nas a2200241 4500008004100000020004600041245009400087210006900181250001500250300000800265490000600273520198300279100001502262700001602277700001402293700001302307700002002320700001902340700001702359700001302376700001402389856011202403 2010 eng d a1477-7525 (Electronic)1477-7525 (Linking)00aValidation of a computer-adaptive test to evaluate generic health-related quality of life0 aValidation of a computeradaptive test to evaluate generic health a2010/12/07 a1470 v83 aBACKGROUND: Health Related Quality of Life (HRQoL) is a relevant variable in the evaluation of health outcomes. Questionnaires based on Classical Test Theory typically require a large number of items to evaluate HRQoL. Computer Adaptive Testing (CAT) can be used to reduce tests length while maintaining and, in some cases, improving accuracy. This study aimed at validating a CAT based on Item Response Theory (IRT) for evaluation of generic HRQoL: the CAT-Health instrument. METHODS: Cross-sectional study of subjects aged over 18 attending Primary Care Centres for any reason. CAT-Health was administered along with the SF-12 Health Survey. Age, gender and a checklist of chronic conditions were also collected. 
CAT-Health was evaluated considering: 1) feasibility: completion time and test length; 2) content range coverage, Item Exposure Rate (IER) and test precision; and 3) construct validity: differences in the CAT-Health scores according to clinical variables and correlations between both questionnaires. RESULTS: 396 subjects answered CAT-Health and SF-12, 67.2% females, mean age (SD) 48.6 (17.7) years. 36.9% did not report any chronic condition. Median completion time for CAT-Health was 81 seconds (IQ range = 59-118) and it increased with age (p < 0.001). The median number of items administered was 8 (IQ range = 6-10). Neither ceiling nor floor effects were found for the score. None of the items in the pool had an IER of 100% and it was over 5% for 27.1% of the items. Test Information Function (TIF) peaked between levels -1 and 0 of HRQoL. Statistically significant differences were observed in the CAT-Health scores according to the number and type of conditions. CONCLUSIONS: Although domain-specific CATs exist for various areas of HRQoL, CAT-Health is one of the first IRT-based CATs designed to evaluate generic HRQoL and it has proven feasible, valid and efficient, when administered to a broad sample of individuals attending primary care settings.1 aRebollo, P1 aCastejon, I1 aCuervo, J1 aVilla, G1 aGarcia-Cueto, E1 aDiaz-Cuervo, H1 aZardain, P C1 aMuniz, J1 aAlonso, J uhttp://www.iacat.org/content/validation-computer-adaptive-test-evaluate-generic-health-related-quality-life00636nas a2200145 4500008004100000245010000041210006900141260009700210100001300307700001300320700001300333700001400346700001700360856011300377 2009 eng d00aApplications of CAT in admissions to higher education in Israel: Twenty-two years of experience0 aApplications of CAT in admissions to higher education in Israel aD. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.1 aGafni, N1 aCohen, Y1 aRoded, K1 aBaumer, M1 aMoshinsky, A uhttp://www.iacat.org/content/applications-cat-admissions-higher-education-israel-twenty-two-years-experience00462nas a2200097 4500008004100000245006300041210006000104260009700164100001700261856008600278 2009 eng d00aAn automatic online calibration design in adaptive testing0 aautomatic online calibration design in adaptive testing aD. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.1 aMakransky, G uhttp://www.iacat.org/content/automatic-online-calibration-design-adaptive-testing00492nas a2200121 4500008004100000245008500041210006900126260001800195100001700213700001600230700001600246856010800262 2009 eng d00aComparing methods to recalibrate drifting items in computerized adaptive testing0 aComparing methods to recalibrate drifting items in computerized aSan Diego, CA1 aMasters, J S1 aMuckle, T J1 aBontempo, B uhttp://www.iacat.org/content/comparing-methods-recalibrate-drifting-items-computerized-adaptive-testing00595nas a2200133 4500008004100000245008600041210006900127260009700196100001500293700001600308700001700324700001700341856010300358 2009 eng d00aA comparison of three methods of item selection for computerized adaptive testing0 acomparison of three methods of item selection for computerized a aD. J. 
Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.1 aCosta, D R1 aKarino, C A1 aMoura, F A S1 aAndrade, D F uhttp://www.iacat.org/content/comparison-three-methods-item-selection-computerized-adaptive-testing01893nas a2200157 4500008004100000245007800041210006900119260009700188520126000285100001801545700001801563700002001581700001701601700001601618856010101634 2009 eng d00aCriterion-related validity of an innovative CAT-based personality measure0 aCriterionrelated validity of an innovative CATbased personality aD. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.3 aThis paper describes development and initial criterion-related validation of the PreVisor Computer Adaptive Personality Scales (PCAPS), a computerized adaptive testing-based personality measure that uses an ideal point IRT model based on forced-choice, paired-comparison responses. Based on results from a large consortium study, a composite of six PCAPS scales identified as relevant to the population of interest (first-line supervisors) had an estimated operational validity against an overall job performance criterion of ρ = .25. Uncorrected and corrected criterion-related validity results for each of the six PCAPS scales making up the composite are also reported. Because the PCAPS algorithm computes intermediate scale scores until a stopping rule is triggered, we were able to graph number of statement-pairs presented against criterion-related validities. Results showed generally monotonically increasing functions. However, asymptotic validity levels, or at least a reduction in the rate of increase in slope, were often reached after 5-7 statement-pairs were presented. In the case of the composite measure, there was some evidence that validities decreased after about six statement-pairs. A possible explanation for this is provided.1 aSchneider, RJ1 aMcLellan, R A1 aKantrowitz, T M1 aHouston, J S1 aBorman, W C uhttp://www.iacat.org/content/criterion-related-validity-innovative-cat-based-personality-measure02881nas a2200493 4500008004100000020004100041245014100082210006900223250001500292260000800307300001100315490000700326520125100333653003001584653001001614653000901624653004601633653003301679653001101712653003101723653001101754653000901765653003301774653001601807653002401823653004601847653005501893653005501948653004602003653001902049653003102068653001402099100001602113700001502129700001302144700001402157700001502171700001702186700001502203700001702218700001502235700001302250856012402263 2009 eng d a0090-5550 (Print)0090-5550 (Linking)00aDevelopment of an item bank for the assessment of depression in persons with mental illnesses and physical diseases using Rasch analysis0 aDevelopment of an item bank for the assessment of depression in a2009/05/28 cMay a186-970 v543 aOBJECTIVE: The calibration of item banks provides the basis for computerized adaptive testing that ensures high diagnostic precision and minimizes participants' test burden. The present study aimed at developing a new item bank that allows for assessing depression in persons with mental and persons with somatic diseases. METHOD: The sample consisted of 161 participants treated for a depressive syndrome, and 206 participants with somatic illnesses (103 cardiologic, 103 otorhinolaryngologic; overall mean age = 44.1 years, SD =14.0; 44.7% women) to allow for validation of the item bank in both groups. Persons answered a pool of 182 depression items on a 5-point Likert scale. 
RESULTS: Evaluation of Rasch model fit (infit < 1.3), differential item functioning, dimensionality, local independence, item spread, item and person separation (>2.0), and reliability (>.80) resulted in a bank of 79 items with good psychometric properties. CONCLUSIONS: The bank provides items with a wide range of content coverage and may serve as a sound basis for computerized adaptive testing applications. It might also be useful for researchers who wish to develop new fixed-length scales for the assessment of depression in specific rehabilitation settings.10aAdaptation, Psychological10aAdult10aAged10aDepressive Disorder/*diagnosis/psychology10aDiagnosis, Computer-Assisted10aFemale10aHeart Diseases/*psychology10aHumans10aMale10aMental Disorders/*psychology10aMiddle Aged10aModels, Statistical10aOtorhinolaryngologic Diseases/*psychology10aPersonality Assessment/statistics & numerical data10aPersonality Inventory/*statistics & numerical data10aPsychometrics/statistics & numerical data10aQuestionnaires10aReproducibility of Results10aSick Role1 aForkmann, T1 aBoecker, M1 aNorra, C1 aEberle, N1 aKircher, T1 aSchauerte, P1 aMischke, K1 aWesthofen, M1 aGauggel, S1 aWirtz, M uhttp://www.iacat.org/content/development-item-bank-assessment-depression-persons-mental-illnesses-and-physical-diseases00567nas a2200133 4500008004400000245013200044210007000176300001000246490000700256100001200263700001400275700001900289856012500308 2009 Germdn 00aEffekte des adaptiven Testens auf die Moti¬vation zur Testbearbeitung [Effects of adaptive testing on test taking motivation].0 aEffekte des adaptiven Testens auf die Moti¬vation zur Testbearbe a20-280 v551 aFrey, A1 aHartig, J1 aMoosbrugger, H uhttp://www.iacat.org/content/effekte-des-adaptiven-testens-auf-die-moti%C2%ACvation-zur-testbearbeitung-effects-adaptive03147nas a2200457 4500008004100000020004100041245015500082210006900237250001500306260000800321300001200329490000700341520174100348653002502089653001902114653002502133653003002158653001502188653003602203653001002239653002102249653003302270653001102303653001102314653000902325653001802334653001702352653001902369653001602388100001502404700001002419700001502429700002502444700001802469700001702487700001602504700001602520700001402536700001602550856012302566 2009 eng d a0962-9343 (Print)0962-9343 (Linking)00aMeasuring global physical health in children with cerebral palsy: Illustration of a multidimensional bi-factor model and computerized adaptive testing0 aMeasuring global physical health in children with cerebral palsy a2009/02/18 cApr a359-3700 v183 aPURPOSE: The purposes of this study were to apply a bi-factor model for the determination of test dimensionality and a multidimensional CAT using computer simulations of real data for the assessment of a new global physical health measure for children with cerebral palsy (CP). METHODS: Parent respondents of 306 children with cerebral palsy were recruited from four pediatric rehabilitation hospitals and outpatient clinics. We compared confirmatory factor analysis results across four models: (1) one-factor unidimensional; (2) two-factor multidimensional (MIRT); (3) bi-factor MIRT with fixed slopes; and (4) bi-factor MIRT with varied slopes. We tested whether the general and content (fatigue and pain) person score estimates could discriminate across severity and types of CP, and whether score estimates from a simulated CAT were similar to estimates based on the total item bank, and whether they correlated as expected with external measures. 
RESULTS: Confirmatory factor analysis suggested separate pain and fatigue sub-factors; all 37 items were retained in the analyses. From the bi-factor MIRT model with fixed slopes, the full item bank scores discriminated across levels of severity and types of CP, and compared favorably to external instruments. CAT scores based on 10- and 15-item versions accurately captured the global physical health scores. CONCLUSIONS: The bi-factor MIRT CAT application, especially the 10- and 15-item versions, yielded accurate global physical health scores that discriminated across known severity groups and types of CP, and correlated as expected with concurrent measures. The CATs have potential for collecting complex data on the physical health of children with CP in an efficient manner.10a*Computer Simulation10a*Health Status10a*Models, Statistical10aAdaptation, Psychological10aAdolescent10aCerebral Palsy/*physiopathology10aChild10aChild, Preschool10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMassachusetts10aPennsylvania10aQuestionnaires10aYoung Adult1 aHaley, S M1 aNi, P1 aDumas, H M1 aFragala-Pinkham, M A1 aHambleton, RK1 aMontpetit, K1 aBilodeau, N1 aGorton, G E1 aWatson, K1 aTucker, C A uhttp://www.iacat.org/content/measuring-global-physical-health-children-cerebral-palsy-illustration-multidimensional-bi01903nas a2200169 4500008004100000020004100041245008600082210006900168250001500237260000800252300001200260490000700272520131100279100001401590700002301604856010601627 2009 Eng d a0033-3123 (Print)0033-3123 (Linking)00aMultidimensional Adaptive Testing with Optimal Design Criteria for Item Selection0 aMultidimensional Adaptive Testing with Optimal Design Criteria f a2010/02/02 cJun a273-2960 v743 aSeveral criteria from the optimal design literature are examined for use with item selection in multidimensional adaptive testing. In particular, it is examined what criteria are appropriate for adaptive testing in which all abilities are intentional, some should be considered as a nuisance, or the interest is in the testing of a composite of the abilities. Both the theoretical analyses and the studies of simulated data in this paper suggest that the criteria of A-optimality and D-optimality lead to the most accurate estimates when all abilities are intentional, with the former slightly outperforming the latter. The criterion of E-optimality showed occasional erratic behavior for this case of adaptive testing, and its use is not recommended. If some of the abilities are nuisances, application of the criterion of A(s)-optimality (or D(s)-optimality), which focuses on the subset of intentional abilities is recommended. For the measurement of a linear combination of abilities, the criterion of c-optimality yielded the best results. The preferences of each of these criteria for items with specific patterns of parameter values was also assessed. It was found that the criteria differed mainly in their preferences of items with different patterns of values for their discrimination parameters.1 aMulder, J1 avan der Linden, WJ uhttp://www.iacat.org/content/multidimensional-adaptive-testing-optimal-design-criteria-item-selection02048nas a2200133 4500008004100000245006100041210005500102260010000157520152600257100001701783700001601800700001601816856008201832 2009 eng d00aThe nine lives of CAT-ASVAB: Innovations and revelations0 anine lives of CATASVAB Innovations and revelations aIn D. J. 
Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.3 aThe Armed Services Vocational Aptitude Battery (ASVAB) is administered annually to more than one million military applicants and high school students. ASVAB scores are used to determine enlistment eligibility, assign applicants to military occupational specialties, and aid students in career exploration. The ASVAB is administered as both a paper-and-pencil (P&P) test and a computerized adaptive test (CAT). CAT-ASVAB holds the distinction of being the first large-scale adaptive test battery to be administered in a high-stakes setting. Approximately two-thirds of military applicants currently take CAT-ASVAB; long-term plans are to replace P&P-ASVAB with CAT-ASVAB at all test sites. Given CAT-ASVAB’s pedigree—approximately 20 years in development and 20 years in operational administration—much can be learned from revisiting some of the major highlights of CAT-ASVAB history. This paper traces the progression of CAT-ASVAB through nine major phases of development including: research and development of the CAT-ASVAB prototype, the initial development of psychometric procedures and item pools, initial and full-scale operational implementation, the introduction of new item pools, the introduction of Windows administration, the introduction of Internet administration, and research and development of the next generation CAT-ASVAB. A background and history is provided for each phase, including discussions of major research and operational issues, innovative approaches and practices, and lessons learned.1 aPommerich, M1 aSegall, D O1 aMoreno, K E uhttp://www.iacat.org/content/nine-lives-cat-asvab-innovations-and-revelations00703nas a2200109 4500008004100000245020500041210006900246260011100315100002400426700001700450856012600467 2009 eng d00aProposta para a construção de um Teste Adaptativo Informatizado baseado na Teoria da Resposta ao Item (Proposal for the construction of a Computerized Adaptive Test based on Item Response Theory)0 aProposta para a construção de um Teste Adaptativo Informatizado ba aPoster session presented at the Congresso Brasileiro de Teoria da Resposta ao Item, Florianópolis SC Brazil1 aMoreira Junior, F J1 aAndrade, D F uhttp://www.iacat.org/content/proposta-para-construo-de-um-teste-adaptativo-informatizado-baseado-na-teoria-da-resposta-ao02435nas a2200385 4500008004100000020004100041245009300082210006900175250001500244260000800259300001100267490000700278520128100285653003201566653002701598653002001625653002901645653001001674653000901684653001901693653003401712653001101746653001101757653000901768653001601777653004601793100001501839700001001854700001501864700001101879700001201890700001401902700001601916856011701932 2009 eng d a0962-9343 (Print)0962-9343 (Linking)00aReplenishing a computerized adaptive test of patient-reported daily activity functioning0 aReplenishing a computerized adaptive test of patientreported dai a2009/03/17 cMay a461-710 v183 aPURPOSE: Computerized adaptive testing (CAT) item banks may need to be updated, but before new items can be added, they must be linked to the previous CAT. The purpose of this study was to evaluate 41 pretest items prior to including them into an operational CAT. METHODS: We recruited 6,882 patients with spine, lower extremity, upper extremity, and nonorthopedic impairments who received outpatient rehabilitation in one of 147 clinics across 13 states of the USA. 
Forty-one new Daily Activity (DA) items were administered along with the Activity Measure for Post-Acute Care Daily Activity CAT (DA-CAT-1) in five separate waves. We compared the scoring consistency with the full item bank, test information function (TIF), person standard errors (SEs), and content range of the DA-CAT-1 to the new CAT (DA-CAT-2) with the pretest items by real data simulations. RESULTS: We retained 29 of the 41 pretest items. Scores from the DA-CAT-2 were more consistent (ICC = 0.90 versus 0.96) than DA-CAT-1 when compared with the full item bank. TIF and person SEs were improved for persons with higher levels of DA functioning, and ceiling effects were reduced from 16.1% to 6.1%. CONCLUSIONS: Item response theory and online calibration methods were valuable in improving the DA-CAT.10a*Activities of Daily Living10a*Disability Evaluation10a*Questionnaires10a*User-Computer Interface10aAdult10aAged10aCohort Studies10aComputer-Assisted Instruction10aFemale10aHumans10aMale10aMiddle Aged10aOutcome Assessment (Health Care)/*methods1 aHaley, S M1 aNi, P1 aJette, A M1 aTao, W1 aMoed, R1 aMeyers, D1 aLudlow, L H uhttp://www.iacat.org/content/replenishing-computerized-adaptive-test-patient-reported-daily-activity-functioning01380nas a2200133 4500008003900000245013100039210006900170300001200239490000700251520087800258100002701136700003001163856005301193 2009 d00aStudying the Equivalence of Computer-Delivered and Paper-Based Administrations of the Raven Standard Progressive Matrices Test0 aStudying the Equivalence of ComputerDelivered and PaperBased Adm a855-8670 v693 aThis study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on distribution, accuracy, and meaning of raw scores. A random sample of high school students take counterbalanced paper-and-pencil and computer-based administrations of the test and answer a questionnaire surveying preferences for computer-delivered test administrations. Administration mode effect is studied with repeated measures multivariate analysis of variance, internal consistency reliability estimates, and confirmatory factor analysis approaches. Results show a lack of test mode effect on distribution, accuracy, and meaning of raw scores. Participants indicate their preferences for the computer-delivered administration of the test. The article discusses findings in light of previous studies of the Raven Standard Progressive Matrices test.
1 aArce-Ferrer, Alvaro, J1 aMartínez Guzmán, Elvira uhttp://epm.sagepub.com/content/69/5/855.abstract02865nas a2200325 4500008004100000020002700041245007400068210006900142250001500211260000800226300001100234490000700245520188300252653003202135653003202167653002502199653002302224653004802247653001102295653001102306653000902317653001602326653001902342653002702361100001502388700001502403700001002418700001202428856009902440 2008 eng d a1537-7385 (Electronic)00aAdaptive short forms for outpatient rehabilitation outcome assessment0 aAdaptive short forms for outpatient rehabilitation outcome asses a2008/09/23 cOct a842-520 v873 aOBJECTIVE: To develop outpatient Adaptive Short Forms for the Activity Measure for Post-Acute Care item bank for use in outpatient therapy settings. DESIGN: A convenience sample of 11,809 adults with spine, lower limb, upper limb, and miscellaneous orthopedic impairments who received outpatient rehabilitation in 1 of 127 outpatient rehabilitation clinics in the United States. We identified optimal items for use in developing outpatient Adaptive Short Forms based on the Basic Mobility and Daily Activities domains of the Activity Measure for Post-Acute Care item bank. Patient scores were derived from the Activity Measure for Post-Acute Care computerized adaptive testing program. Items were selected for inclusion on the Adaptive Short Forms based on functional content, range of item coverage, measurement precision, item exposure rate, and data collection burden. RESULTS: Two outpatient Adaptive Short Forms were developed: (1) an 18-item Basic Mobility Adaptive Short Form and (2) a 15-item Daily Activities Adaptive Short Form, derived from the same item bank used to develop the Activity Measure for Post-Acute Care computerized adaptive testing program. Both Adaptive Short Forms achieved acceptable psychometric properties. CONCLUSIONS: In outpatient postacute care settings where computerized adaptive testing outcome applications are currently not feasible, item response theory-derived Adaptive Short Forms provide the efficient capability to monitor patients' functional outcomes. The development of Adaptive Short Form functional outcome instruments linked by a common, calibrated item bank has the potential to create a bridge to outcome monitoring across postacute care settings and can facilitate the eventual transformation from Adaptive Short Forms to computerized adaptive testing applications easier and more acceptable to the rehabilitation community.10a*Activities of Daily Living10a*Ambulatory Care Facilities10a*Mobility Limitation10a*Treatment Outcome10aDisabled Persons/psychology/*rehabilitation10aFemale10aHumans10aMale10aMiddle Aged10aQuestionnaires10aRehabilitation Centers1 aJette, A M1 aHaley, S M1 aNi, P1 aMoed, R uhttp://www.iacat.org/content/adaptive-short-forms-outpatient-rehabilitation-outcome-assessment02582nas a2200241 4500008004100000020002200041245009000063210006900153250001500222260000800237300001100245490000700256520178300263653001502046653001502061653002502076653002902101653005002130653001102180100001602191700001902207856011402226 2008 eng d a1554-351X (Print)00aCombining computer adaptive testing technology with cognitively diagnostic assessment0 aCombining computer adaptive testing technology with cognitively a2008/08/14 cAug a808-210 v403 aA major advantage of computerized adaptive testing (CAT) is that it allows the test to home in on an examinee's ability level in an interactive manner. 
The aim of the new area of cognitive diagnosis is to provide information about specific content areas in which an examinee needs help. The goal of this study was to combine the benefit of specific feedback from cognitively diagnostic assessment with the advantages of CAT. In this study, three approaches to combining these were investigated: (1) item selection based on the traditional ability level estimate (theta), (2) item selection based on the attribute mastery feedback provided by cognitively diagnostic assessment (alpha), and (3) item selection based on both the traditional ability level estimate (theta) and the attribute mastery feedback provided by cognitively diagnostic assessment (alpha). The results from these three approaches were compared for theta estimation accuracy, attribute mastery estimation accuracy, and item exposure control. The theta- and alpha-based condition outperformed the alpha-based condition regarding theta estimation, attribute mastery pattern estimation, and item exposure control. Both the theta-based condition and the theta- and alpha-based condition performed similarly with regard to theta estimation, attribute mastery estimation, and item exposure control, but the theta- and alpha-based condition has an additional advantage in that it uses the shadow test method, which allows the administrator to incorporate additional constraints in the item selection process, such as content balancing, item type constraints, and so forth, and also to select items on the basis of both the current theta and alpha estimates, which can be built on top of existing 3PL testing programs.10a*Cognition10a*Computers10a*Models, Statistical10a*User-Computer Interface10aDiagnosis, Computer-Assisted/*instrumentation10aHumans1 aMcGlohen, M1 aChang, Hua-Hua uhttp://www.iacat.org/content/combining-computer-adaptive-testing-technology-cognitively-diagnostic-assessment00581nas a2200145 4500008004100000245012000041210006900161300001400230490000700244100001400251700001400265700001900279700001800298856011900316 2008 eng d00aComputerized adaptive testing for patients with knee inpairments produced valid and responsive measures of function0 aComputerized adaptive testing for patients with knee inpairments a1113-11240 v611 aHart, D L1 aWang, Y-C1 aStratford, P W1 aMioduski, J E uhttp://www.iacat.org/content/computerized-adaptive-testing-patients-knee-inpairments-produced-valid-and-responsive03313nas a2200433 4500008004100000020004600041245007700087210006900164250001500233260001100248300001200259490000700271520203200278653002702310653003002337653002102367653001002388653000902398653001502407653003602422653002102458653004402479653002402523653001102547653001102558653001302569653000902582653001602591653003002607653003002637653003102667100001502698700001302713700001502726700001402741700001502755700001402770856009502784 2008 eng d a1528-1159 (Electronic)0362-2436 (Linking)00aComputerized adaptive testing in back pain: Validation of the CAT-5D-QOL0 aComputerized adaptive testing in back pain Validation of the CAT a2008/05/23 cMay 20 a1384-900 v333 aSTUDY DESIGN: We have conducted an outcome instrument validation study. OBJECTIVE: Our objective was to develop a computerized adaptive test (CAT) to measure 5 domains of health-related quality of life (HRQL) and assess its feasibility, reliability, validity, and efficiency. 
SUMMARY OF BACKGROUND DATA: Kopec and colleagues have recently developed item response theory based item banks for 5 domains of HRQL relevant to back pain and suitable for CAT applications. The domains are Daily Activities (DAILY), Walking (WALK), Handling Objects (HAND), Pain or Discomfort (PAIN), and Feelings (FEEL). METHODS: An adaptive algorithm was implemented in a web-based questionnaire administration system. The questionnaire included CAT-5D-QOL (5 scales), Modified Oswestry Disability Index (MODI), Roland-Morris Disability Questionnaire (RMDQ), SF-36 Health Survey, and standard clinical and demographic information. Participants were outpatients treated for mechanical back pain at a referral center in Vancouver, Canada. RESULTS: A total of 215 patients completed the questionnaire and 84 completed a retest. On average, patients answered 5.2 items per CAT-5D-QOL scale. Reliability ranged from 0.83 (FEEL) to 0.92 (PAIN) and was 0.92 for the MODI, RMDQ, and Physical Component Summary (PCS-36). The ceiling effect was 0.5% for PAIN compared with 2% for MODI and 5% for RMQ. The CAT-5D-QOL scales correlated as anticipated with other measures of HRQL and discriminated well according to the level of satisfaction with current symptoms, duration of the last episode, sciatica, and disability compensation. The average relative discrimination index was 0.87 for PAIN, 0.67 for DAILY and 0.62 for WALK, compared with 0.89 for MODI, 0.80 for RMDQ, and 0.59 for PCS-36. CONCLUSION: The CAT-5D-QOL is feasible, reliable, valid, and efficient in patients with back pain. This methodology can be recommended for use in back pain research and should improve outcome assessment, facilitate comparisons across studies, and reduce patient burden.10a*Disability Evaluation10a*Health Status Indicators10a*Quality of Life10aAdult10aAged10aAlgorithms10aBack Pain/*diagnosis/psychology10aBritish Columbia10aDiagnosis, Computer-Assisted/*standards10aFeasibility Studies10aFemale10aHumans10aInternet10aMale10aMiddle Aged10aPredictive Value of Tests10aQuestionnaires/*standards10aReproducibility of Results1 aKopec, J A1 aBadii, M1 aMcKenna, M1 aLima, V D1 aSayre, E C1 aDvorak, M uhttp://www.iacat.org/content/computerized-adaptive-testing-back-pain-validation-cat-5d-qol01876nas a2200205 4500008003900000245005600039210005600095300001000151490000800161520124900169653002101418653003001439653002501469653001801494653002501512100001301537700001701550700002101567856008201588 2008 d00aComputerized Adaptive Testing of Personality Traits0 aComputerized Adaptive Testing of Personality Traits a12-210 v2163 aA computerized adaptive testing (CAT) procedure was simulated with ordinal polytomous personality data collected using a
conventional paper-and-pencil testing format. An adapted Dutch version of the dominance scale of Gough and Heilbrun’s Adjective
Check List (ACL) was used. This version contained Likert response scales with five categories. Item parameters were estimated using Samejima’s graded response model from the responses of 1,925 subjects. The CAT procedure was simulated using the responses of 1,517 other subjects. The value of the required standard error in the stopping rule of the CAT was manipulated. The relationship between CAT latent trait estimates and estimates based on all dominance items was studied. Additionally, the pattern of relationships between the CAT latent trait estimates and the other ACL scales was compared to that between latent trait estimates based on the entire item pool and the other ACL scales. The CAT procedure resulted in latent trait estimates qualitatively equivalent to latent trait estimates based on all items, while a substantial reduction of the number of used items could be realized (at the stopping rule of 0.4 about 33% of the 36 items was used).
This paper reports on the use of simulation when a randomization procedure is used to control item exposure in a computerized adaptive test for certification. We present a method to determine the optimum width of the interval from which items are selected and we report on the impact of relaxing the interval width on measurement precision and item exposure. Results indicate that, if the item bank is well targeted, it may be possible to widen the randomization interval and thus reduce item exposure, without seriously impacting the error of measure for test takers whose ability estimate is near the pass point.
1 aMuckle, T J1 aBergstrom, B A1 aBecker, K1 aStahl, J A uhttp://www.iacat.org/content/impact-altering-randomization-intervals-precision-measurement-and-item-exposure03428nas a2200385 4500008004100000020004100041245010600082210006900188250001500257260001200272300001000284490000700294520220300301653002702504653001502531653001002546653002102556653002402577653002802601653003802629653001102667653001102678653001102689653003902700653000902739653002402748653003102772653004002803100001802843700001502861700001302876700001702889700001402906856012202920 2008 eng d a0271-6798 (Print)0271-6798 (Linking)00aMeasuring physical functioning in children with spinal impairments with computerized adaptive testing0 aMeasuring physical functioning in children with spinal impairmen a2008/03/26 cApr-May a330-50 v283 aBACKGROUND: The purpose of this study was to assess the utility of measuring current physical functioning status of children with scoliosis and kyphosis by applying computerized adaptive testing (CAT) methods. Computerized adaptive testing uses a computer interface to administer the most optimal items based on previous responses, reducing the number of items needed to obtain a scoring estimate. METHODS: This was a prospective study of 77 subjects (0.6-19.8 years) who were seen by a spine surgeon during a routine clinic visit for progress spine deformity. Using a multidimensional version of the Pediatric Evaluation of Disability Inventory CAT program (PEDI-MCAT), we evaluated content range, accuracy and efficiency, known-group validity, concurrent validity with the Pediatric Outcomes Data Collection Instrument, and test-retest reliability in a subsample (n = 16) within a 2-week interval. RESULTS: We found the PEDI-MCAT to have sufficient item coverage in both self-care and mobility content for this sample, although most patients tended to score at the higher ends of both scales. Both the accuracy of PEDI-MCAT scores as compared with a fixed format of the PEDI (r = 0.98 for both mobility and self-care) and test-retest reliability were very high [self-care: intraclass correlation (3,1) = 0.98, mobility: intraclass correlation (3,1) = 0.99]. The PEDI-MCAT took an average of 2.9 minutes for the parents to complete. The PEDI-MCAT detected expected differences between patient groups, and scores on the PEDI-MCAT correlated in expected directions with scores from the Pediatric Outcomes Data Collection Instrument domains. CONCLUSIONS: Use of the PEDI-MCAT to assess the physical functioning status, as perceived by parents of children with complex spinal impairments, seems to be feasible and achieves accurate and efficient estimates of self-care and mobility function. Additional item development will be needed at the higher functioning end of the scale to avoid ceiling effects for older children. 
LEVEL OF EVIDENCE: This is a level II prospective study designed to establish the utility of computer adaptive testing as an evaluation method in a busy pediatric spine practice.10a*Disability Evaluation10aAdolescent10aChild10aChild, Preschool10aComputer Simulation10aCross-Sectional Studies10aDisabled Children/*rehabilitation10aFemale10aHumans10aInfant10aKyphosis/*diagnosis/rehabilitation10aMale10aProspective Studies10aReproducibility of Results10aScoliosis/*diagnosis/rehabilitation1 aMulcahey, M J1 aHaley, S M1 aDuffy, T1 aPengsheng, N1 aBetz, R R uhttp://www.iacat.org/content/measuring-physical-functioning-children-spinal-impairments-computerized-adaptive-testing00544nas a2200121 4500008004100000245007500041210006900116260009700185100001500282700001500297700001300312856009700325 2007 eng d00aAdaptive estimators of trait level in adaptive testing: Some proposals0 aAdaptive estimators of trait level in adaptive testing Some prop aD. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.1 aRaîche, G1 aBlais, J G1 aMagis, D uhttp://www.iacat.org/content/adaptive-estimators-trait-level-adaptive-testing-some-proposals01415nas a2200145 4500008003900000245012900039210006900168300001200237490000700249520089000256100002001146700002301166700002701189856005301216 2007 d00aComputerized Adaptive Testing for Polytomous Motivation Items: Administration Mode Effects and a Comparison With Short Forms0 aComputerized Adaptive Testing for Polytomous Motivation Items Ad a412-4290 v313 aIn a randomized experiment (n = 515), a computerized and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although items are carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible consequences of model misfit. CAT efficiency was studied by a systematic comparison of the CAT with two types of conventional fixed length short forms, which are created to be good CAT competitors. Results showed no essential administration mode effects. Efficiency analyses show that CAT outperformed the short forms in almost all aspects when results are aggregated along the latent trait scale. The real and the simulated data results are very similar, which indicate that the real data results are not affected by model misfit.
1 aHol, Michiel, A1 aVorst, Harrie, C M1 aMellenbergh, Gideon, J uhttp://apm.sagepub.com/content/31/5/412.abstract01948nas a2200301 4500008004500000020001400045245012900059210006900188300001200257490000700269520093500276653002501211653002101236653002501257653003001282653003001312653001001342653001501352653002601367653002501393653002401418653001501442653001501457100001301472700001701485700002101502856012301523 2007 Engldsh a0146-621600aComputerized adaptive testing for polytomous motivation items: Administration mode effects and a comparison with short forms0 aComputerized adaptive testing for polytomous motivation items Ad a412-4290 v313 aIn a randomized experiment (n=515), a computerized and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although items are carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible consequences of model misfit. CAT efficiency was studied by a systematic comparison of the CAT with two types of conventional fixed length short forms, which are created to be good CAT competitors. Results showed no essential administration mode effects. Efficiency analyses show that CAT outperformed the short forms in almost all aspects when results are aggregated along the latent trait scale. The real and the simulated data results are very similar, which indicate that the real data results are not affected by model misfit. (PsycINFO Database Record (c) 2007 APA ) (journal abstract)10a2220 Tests & Testing10aAdaptive Testing10aAttitude Measurement10acomputer adaptive testing10aComputer Assisted Testing10aitems10aMotivation10apolytomous motivation10aStatistical Validity10aTest Administration10aTest Forms10aTest Items1 aHol, A M1 aVorst, H C M1 aMellenbergh, G J uhttp://www.iacat.org/content/computerized-adaptive-testing-polytomous-motivation-items-administration-mode-effects-and01052nas a2200145 4500008003900000245005000039210005000089300001200139490000700151520063300158100002100791700002200812700001900834856005300853 2007 d00aComputerizing Organizational Attitude Surveys0 aComputerizing Organizational Attitude Surveys a658-6780 v673 aTwo quasi-experimental field studies were conducted to evaluate the psychometric equivalence of computerized and paper-and-pencil job satisfaction measures. The present research extends previous work in the area by providing better control of common threats to validity in quasi-experimental research on test mode effects and by evaluating a more comprehensive measurement model for job attitudes. Results of both studies demonstrated substantial equivalence of the computerized measure with the paper-and-pencil version. Implications for the practical use of computerized organizational attitude surveys are discussed.
1 aMueller, Karsten1 aLiebig, Christian1 aHattrup, Keith uhttp://epm.sagepub.com/content/67/4/658.abstract03104nas a2200445 4500008004100000020002200041245007100063210006900134250001500203300001200218490000700230520183100237653003802068653001902106653002102125653002002146653001402166653001102180653003002191653001102221653000902232653002502241653004602266653001802312653002602330100001302356700001402369700001702383700001302400700001502413700001502428700001702443700001402460700001802474700002302492700001602515700001602531700001502547856009602562 2007 eng d a0962-9343 (Print)00aIRT health outcomes data analysis project: an overview and summary0 aIRT health outcomes data analysis project an overview and summar a2007/03/14 a121-1320 v163 aBACKGROUND: In June 2004, the National Cancer Institute and the Drug Information Association co-sponsored the conference, "Improving the Measurement of Health Outcomes through the Applications of Item Response Theory (IRT) Modeling: Exploration of Item Banks and Computer-Adaptive Assessment." A component of the conference was presentation of a psychometric and content analysis of a secondary dataset. OBJECTIVES: A thorough psychometric and content analysis was conducted of two primary domains within a cancer health-related quality of life (HRQOL) dataset. RESEARCH DESIGN: HRQOL scales were evaluated using factor analysis for categorical data, IRT modeling, and differential item functioning analyses. In addition, computerized adaptive administration of HRQOL item banks was simulated, and various IRT models were applied and compared. SUBJECTS: The original data were collected as part of the NCI-funded Quality of Life Evaluation in Oncology (Q-Score) Project. A total of 1,714 patients with cancer or HIV/AIDS were recruited from 5 clinical sites. MEASURES: Items from 4 HRQOL instruments were evaluated: Cancer Rehabilitation Evaluation System-Short Form, European Organization for Research and Treatment of Cancer Quality of Life Questionnaire, Functional Assessment of Cancer Therapy and Medical Outcomes Study Short-Form Health Survey. RESULTS AND CONCLUSIONS: Four lessons learned from the project are discussed: the importance of good developmental item banks, the ambiguity of model fit results, the limits of our knowledge regarding the practical implications of model misfit, and the importance in the measurement of HRQOL of construct definition. With respect to these lessons, areas for future research are suggested. 
The feasibility of developing item banks for broad definitions of health is discussed.10a*Data Interpretation, Statistical10a*Health Status10a*Quality of Life10a*Questionnaires10a*Software10aFemale10aHIV Infections/psychology10aHumans10aMale10aNeoplasms/psychology10aOutcome Assessment (Health Care)/*methods10aPsychometrics10aStress, Psychological1 aCook, KF1 aTeal, C R1 aBjorner, J B1 aCella, D1 aChang, C-H1 aCrane, P K1 aGibbons, L E1 aHays, R D1 aMcHorney, C A1 aOcepek-Welikson, K1 aRaczek, A E1 aTeresi, J A1 aReeve, B B uhttp://www.iacat.org/content/irt-health-outcomes-data-analysis-project-overview-and-summary00579nas a2200181 4500008004100000245008300041210006900124300001200193490000700205100001000212700001300222700001100235700001000246700001200256700001400268700001300282856010200295 2007 eng d00aProspective evaluation of the am-pac-cat in outpatient rehabilitation settings0 aProspective evaluation of the ampaccat in outpatient rehabilitat a385-3980 v871 aJette1 aHaley, S1 aTao, W1 aNi, P1 aMoed, R1 aMeyers, D1 aZurek, M uhttp://www.iacat.org/content/prospective-evaluation-am-pac-cat-outpatient-rehabilitation-settings00420nas a2200121 4500008004100000245005700041210005500098260002200153100001600175700001300191700001900204856007500223 2006 eng d00aA comparison of online calibration methods for a CAT0 acomparison of online calibration methods for a CAT aSan Francisco, CA1 aMorgan, D L1 aWay, W D1 aAugemberg, K E uhttp://www.iacat.org/content/comparison-online-calibration-methods-cat00333nas a2200109 4500008003900000245004200039210003900081300001200120490000700132100001800139856006600157 2006 d00aAn Introduction to Multistage Testing0 aIntroduction to Multistage Testing a185-1870 v191 aMead, Alan, D uhttp://www.tandfonline.com/doi/abs/10.1207/s15324818ame1903_103119nas a2200277 4500008004100000020002200041245010900063210006900172250001500241260000800256300001200264490000700276520221700283653002902500653002002529653002502549653002102574653001502595653002802610653001102638653002502649100001702674700001502691700001202706856012302718 2006 eng d a0214-9915 (Print)00aMaximum information stratification method for controlling item exposure in computerized adaptive testing0 aMaximum information stratification method for controlling item e a2007/02/14 cFeb a156-1590 v183 aThe proposal for increasing the security in Computerized Adaptive Tests that has received most attention in recent years is the a-stratified method (AS - Chang and Ying, 1999): at the beginning of the test only items with low discrimination parameters (a) can be administered, with the values of the a parameters increasing as the test goes on. With this method, distribution of the exposure rates of the items is less skewed, while efficiency is maintained in trait-level estimation. The pseudo-guessing parameter (c), present in the three-parameter logistic model, is considered irrelevant, and is not used in the AS method. The Maximum Information Stratified (MIS) model incorporates the c parameter in the stratification of the bank and in the item-selection rule, improving accuracy by comparison with the AS, for item banks with a and b parameters correlated and uncorrelated. For both kinds of banks, the blocking b methods (Chang, Qian and Ying, 2001) improve the security of the item bank.Método de estratificación por máxima información para el control de la exposición en tests adaptativos informatizados. 
La propuesta para aumentar la seguridad en los tests adaptativos informatizados que ha recibido más atención en los últimos años ha sido el método a-estratificado (AE - Chang y Ying, 1999): en los momentos iniciales del test sólo pueden administrarse ítems con bajos parámetros de discriminación (a), incrementándose los valores del parámetro a admisibles según avanza el test. Con este método la distribución de las tasas de exposición de los ítems es más equilibrada, manteniendo una adecuada precisión en la medida. El parámetro de pseudoadivinación (c), presente en el modelo logístico de tres parámetros, se supone irrelevante y no se incorpora en el AE. El método de Estratificación por Máxima Información (EMI) incorpora el parámetro c a la estratificación del banco y a la regla de selección de ítems, mejorando la precisión en comparación con AE, tanto para bancos donde los parámetros a y b correlacionan como para bancos donde no. Para ambos tipos de bancos, los métodos de bloqueo de b (Chang, Qian y Ying, 2001) mejoran la seguridad del banco.10a*Artificial Intelligence10a*Microcomputers10a*Psychological Tests10a*Software Design10aAlgorithms10aChi-Square Distribution10aHumans10aLikelihood Functions1 aBarrada, J R1 aMazuela, P1 aOlea, J uhttp://www.iacat.org/content/maximum-information-stratification-method-controlling-item-exposure-computerized-adaptive02111nas a2200229 4500008004100000245013800041210006900179300001400248490000700262520127000269653003101539653003401570653002501604653001701629653001901646653002401665100001401689700001801703700001701721700001901738856012401757 2006 eng d00aSimulated computerized adaptive test for patients with lumbar spine impairments was efficient and produced valid measures of function0 aSimulated computerized adaptive test for patients with lumbar sp a947–9560 v593 aObjective: To equate physical functioning (PF) items with Back Pain Functional Scale (BPFS) items, develop a computerized adaptive test (CAT) designed to assess lumbar spine functional status (LFS) in people with lumbar spine impairments, and compare discriminant validity of LFS measures (qIRT) generated using all items analyzed with a rating scale Item Response Theory model (RSM) and measures generated using the simulated CAT (qCAT). Methods: We performed a secondary analysis of retrospective intake rehabilitation data. Results: Unidimensionality and local independence of 25 BPFS and PF items were supported. Differential item functioning was negligible for levels of symptom acuity, gender, age, and surgical history. The RSM fit the data well. A lumbar spine specific CAT was developed that was 72% more efficient than using all 25 items to estimate LFS measures. qIRT and qCAT measures did not discriminate patients by symptom acuity, age, or gender, but discriminated patients by surgical history in similar clinically logical ways. qCAT measures were as precise as qIRT measures. 
Conclusion: A body part specific simulated CAT developed from an LFS item bank was efficient and produced precise measures of LFS without eroding discriminant validity.10aBack Pain Functional Scale10acomputerized adaptive testing10aItem Response Theory10aLumbar spine10aRehabilitation10aTrue-score equating1 aHart, D L1 aMioduski, J E1 aWerneke, M W1 aStratford, P W uhttp://www.iacat.org/content/simulated-computerized-adaptive-test-patients-lumbar-spine-impairments-was-efficient-and-000595nas a2200145 4500008004100000245013800041210006900179300001200248490000700260100001200267700001600279700001500295700001700310856012200327 2006 eng d00aSimulated computerized adaptive test for patients with lumbar spine impairments was efficient and produced valid measures of function0 aSimulated computerized adaptive test for patients with lumbar sp a947-9560 v591 aHart, D1 aMioduski, J1 aWerenke, M1 aStratford, P uhttp://www.iacat.org/content/simulated-computerized-adaptive-test-patients-lumbar-spine-impairments-was-efficient-and02653nas a2200409 4500008004100000245013400041210006900175300001000244490000700254520123100261653002501492653003201517653003101549653001001580653000901590653002201599653003301621653001101654653001101665653000901676653001601685653002401701653003101725653004101756653004501797653006801842653006101910653003001971653002802001653002202029100001402051700001302065700001802078700001402096700001502110856011802125 2006 eng d00aSimulated computerized adaptive test for patients with shoulder impairments was efficient and produced valid measures of function0 aSimulated computerized adaptive test for patients with shoulder a290-80 v593 aBACKGROUND AND OBJECTIVE: To test unidimensionality and local independence of a set of shoulder functional status (SFS) items, develop a computerized adaptive test (CAT) of the items using a rating scale item response theory model (RSM), and compare discriminant validity of measures generated using all items (theta(IRT)) and measures generated using the simulated CAT (theta(CAT)). STUDY DESIGN AND SETTING: We performed a secondary analysis of data collected prospectively during rehabilitation of 400 patients with shoulder impairments who completed 60 SFS items. RESULTS: Factor analytic techniques supported that the 42 SFS items formed a unidimensional scale and were locally independent. Except for five items, which were deleted, the RSM fit the data well. The remaining 37 SFS items were used to generate the CAT. On average, 6 items were needed to estimate precise measures of function using the SFS CAT, compared with all 37 SFS items. The theta(IRT) and theta(CAT) measures were highly correlated (r = .96) and resulted in similar classifications of patients. CONCLUSION: The simulated SFS CAT was efficient and produced precise, clinically relevant measures of functional status with good discriminating ability.10a*Computer Simulation10a*Range of Motion, Articular10aActivities of Daily Living10aAdult10aAged10aAged, 80 and over10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMiddle Aged10aProspective Studies10aReproducibility of Results10aResearch Support, N.I.H., Extramural10aResearch Support, U.S. 
Gov't, Non-P.H.S.10aShoulder Dislocation/*physiopathology/psychology/rehabilitation10aShoulder Pain/*physiopathology/psychology/rehabilitation10aShoulder/*physiopathology10aSickness Impact Profile10aTreatment Outcome1 aHart, D L1 aCook, KF1 aMioduski, J E1 aTeal, C R1 aCrane, P K uhttp://www.iacat.org/content/simulated-computerized-adaptive-test-patients-shoulder-impairments-was-efficient-and02072nas a2200217 4500008004500000245013400045210006900179300001200248490000700260520127300267653003401540653004201574653002501616653001901641100001401660700001301674700001801687700001401705700001501719856012001734 2006 Engldsh 00aSimulated computerized adaptive test for patients with shoulder impairments was efficient and produced valid measures of function0 aSimulated computerized adaptive test for patients with shoulder a290-2980 v593 aBackground and Objective: To test unidimensionality and local independence of a set of shoulder functional status (SFS) items,
develop a computerized adaptive test (CAT) of the items using a rating scale item response theory model (RSM), and compare discriminant validity of measures generated using all items (θIRT) and measures generated using the simulated CAT (θCAT).
Study Design and Setting: We performed a secondary analysis of data collected prospectively during rehabilitation of 400 patients
with shoulder impairments who completed 60 SFS items.
Results: Factor analytic techniques supported that the 42 SFS items formed a unidimensional scale and were locally independent. Except for five items, which were deleted, the RSM fit the data well. The remaining 37 SFS items were used to generate the CAT. On average, 6 items were needed to estimate precise measures of function using the SFS CAT, compared with all 37 SFS items. The θIRT and θCAT measures were highly correlated (r = .96) and resulted in similar classifications of patients.
Conclusion: The simulated SFS CAT was efficient and produced precise, clinically relevant measures of functional status with good
discriminating ability.
A total of 520 high school students were randomly assigned to a paper-and-pencil test (PPT), a computerized standard test (CST), or a computerized adaptive test (CAT) version of the Dutch School Attitude Questionnaire (SAQ), consisting of ordinal polytomous items. The CST administered items in the same order as the PPT. The CAT administered all items of three SAQ subscales in adaptive order using Samejima’s graded response model, so that six different stopping rule settings could be applied afterwards. School marks were used as external criteria. Results showed significant but small multivariate administration mode effects on conventional raw scores and small to medium effects on maximum likelihood latent trait estimates. When the precision of CAT latent trait estimates decreased, correlations with grade point average in general decreased. However, the magnitude of the decrease was not very large as compared to the PPT, the CST, and the CAT without the stopping rule.
1 aHol, Michiel, A1 aVorst, Harrie, C M1 aMellenbergh, Gideon, J uhttp://apm.sagepub.com/content/29/3/159.abstract02720nas a2200373 4500008004100000245017500041210006900216300001100285490000700296520137800303653003001681653003101711653001501742653001001757653000901767653002201776653003201798653004201830653001101872653003001883653001101913653005101924653003101975653003702006653000902043653001602052653004102068653004102109653002602150100001402176700001802190700001902208856011902227 2005 eng d00aSimulated computerized adaptive tests for measuring functional status were efficient with good discriminant validity in patients with hip, knee, or foot/ankle impairments0 aSimulated computerized adaptive tests for measuring functional s a629-380 v583 aBACKGROUND AND OBJECTIVE: To develop computerized adaptive tests (CATs) designed to assess lower extremity functional status (FS) in people with lower extremity impairments using items from the Lower Extremity Functional Scale and compare discriminant validity of FS measures generated using all items analyzed with a rating scale Item Response Theory model (theta(IRT)) and measures generated using the simulated CATs (theta(CAT)). METHODS: Secondary analysis of retrospective intake rehabilitation data. RESULTS: Unidimensionality of items was strong, and local independence of items was adequate. Differential item functioning (DIF) affected item calibration related to body part, that is, hip, knee, or foot/ankle, but DIF did not affect item calibration for symptom acuity, gender, age, or surgical history. Therefore, patients were separated into three body part specific groups. The rating scale model fit all three data sets well. Three body part specific CATs were developed: each was 70% more efficient than using all LEFS items to estimate FS measures. theta(IRT) and theta(CAT) measures discriminated patients by symptom acuity, age, and surgical history in similar ways. theta(CAT) measures were as precise as theta(IRT) measures. CONCLUSION: Body part-specific simulated CATs were efficient and produced precise measures of FS with good discriminant validity.10a*Health Status Indicators10aActivities of Daily Living10aAdolescent10aAdult10aAged10aAged, 80 and over10aAnkle Joint/physiopathology10aDiagnosis, Computer-Assisted/*methods10aFemale10aHip Joint/physiopathology10aHumans10aJoint Diseases/physiopathology/*rehabilitation10aKnee Joint/physiopathology10aLower Extremity/*physiopathology10aMale10aMiddle Aged10aResearch Support, N.I.H., Extramural10aResearch Support, U.S. Gov't, P.H.S.10aRetrospective Studies1 aHart, D L1 aMioduski, J E1 aStratford, P W uhttp://www.iacat.org/content/simulated-computerized-adaptive-tests-measuring-functional-status-were-efficient-good01476nas a2200193 4500008004100000245020200041210006900243300001200312490000700324520066400331653002100995653003001016653005501046653001101101653001801112100001401130700001701144856012101161 2005 eng d00aSomministrazione di test computerizzati di tipo adattivo: Un' applicazione del modello di misurazione di Rasch [Administration of computerized and adaptive tests: An application of the Rasch Model]0 aSomministrazione di test computerizzati di tipo adattivo Un appl a131-1490 v123 aThe aim of the present study is to describe the characteristics of a procedure for administering computerized and adaptive tests (Computer Adaptive Testing or CAT). 
Items to be asked to the individuals are interactively chosen and are selected from a "bank" in which they were previously calibrated and recorded on the basis of their difficulty level. The selection of items is performed by increasingly more accurate estimates of the examinees' ability. The building of an item-bank on Psychometrics and the implementation of this procedure allow a first validation through Monte Carlo simulations. (PsycINFO Database Record (c) 2006 APA ) (journal abstract)10aAdaptive Testing10aComputer Assisted Testing10aItem Response Theory computerized adaptive testing10aModels10aPsychometrics1 aMiceli, R1 aMolinengo, G uhttp://www.iacat.org/content/somministrazione-di-test-computerizzati-di-tipo-adattivo-un-applicazione-del-modello-di10201nas a2200553 4500008004100000245016300041210006900204300001400273490000700287520862500294653001808919653003208937100001608969700001608985700001409001700001609015700001409031700001509045700001609060700001409076700001509090700001509105700001409120700001709134700001709151700001709168700001609185700001709201700001609218700001509234700001609249700001709265700002309282700001609305700002209321700001609343700001609359700001709375700001309392700001509405700001309420700001609433700001209449700001409461700001609475700002009491700001209511856012409523 2005 eng d00aToward efficient and comprehensive measurement of the alcohol problems continuum in college students: The Brief Young Adult Alcohol Consequences Questionnaire0 aToward efficient and comprehensive measurement of the alcohol pr a1180-11890 v293 aBackground: Although a number of measures of alcohol problems in college students have been studied, the psychometric development and validation of these scales have been limited, for the most part, to methods based on classical test theory. In this study, we conducted analyses based on item response theory to select a set of items for measuring the alcohol problem severity continuum in college students that balances comprehensiveness and efficiency and is free from significant gender bias., Method: We conducted Rasch model analyses of responses to the 48-item Young Adult Alcohol Consequences Questionnaire by 164 male and 176 female college students who drank on at least a weekly basis. An iterative process using item fit statistics, item severities, item discrimination parameters, model residuals, and analysis of differential item functioning by gender was used to pare the items down to those that best fit a Rasch model and that were most efficient in discriminating among levels of alcohol problems in the sample., Results: The process of iterative Rasch model analyses resulted in a final 24-item scale with the data fitting the unidimensional Rasch model very well. The scale showed excellent distributional properties, had items adequately matched to the severity of alcohol problems in the sample, covered a full range of problem severity, and appeared highly efficient in retaining all of the meaningful variance captured by the original set of 48 items., Conclusions: The use of Rasch model analyses to inform item selection produced a final scale that, in both its comprehensiveness and its efficiency, should be a useful tool for researchers studying alcohol problems in college students. 
To aid interpretation of raw scores, examples of the types of alcohol problems that are likely to be experienced across a range of selected scores are provided., (C)2005Research Society on AlcoholismAn important, sometimes controversial feature of all psychological phenomena is whether they are categorical or dimensional. A conceptual and psychometric framework is described for distinguishing whether the latent structure behind manifest categories (e.g., psychiatric diagnoses, attitude groups, or stages of development) is category-like or dimension-like. Being dimension-like requires (a) within-category heterogeneity and (b) between-category quantitative differences. Being category-like requires (a) within-category homogeneity and (b) between-category qualitative differences. The relation between this classification and abrupt versus smooth differences is discussed. Hybrid structures are possible. Being category-like is itself a matter of degree; the authors offer a formalized framework to determine this degree. Empirical applications to personality disorders, attitudes toward capital punishment, and stages of cognitive development illustrate the approach., (C) 2005 by the American Psychological AssociationThe authors conducted Rasch model ( G. Rasch, 1960) analyses of items from the Young Adult Alcohol Problems Screening Test (YAAPST; S. C. Hurlbut & K. J. Sher, 1992) to examine the relative severity and ordering of alcohol problems in 806 college students. Items appeared to measure a single dimension of alcohol problem severity, covering a broad range of the latent continuum. Items fit the Rasch model well, with less severe symptoms reliably preceding more severe symptoms in a potential progression toward increasing levels of problem severity. However, certain items did not index problem severity consistently across demographic subgroups. A shortened, alternative version of the YAAPST is proposed, and a norm table is provided that allows for a linking of total YAAPST scores to expected symptom expression., (C) 2004 by the American Psychological AssociationA didactic on latent growth curve modeling for ordinal outcomes is presented. The conceptual aspects of modeling growth with ordinal variables and the notion of threshold invariance are illustrated graphically using a hypothetical example. The ordinal growth model is described in terms of 3 nested models: (a) multivariate normality of the underlying continuous latent variables (yt) and its relationship with the observed ordinal response pattern (Yt), (b) threshold invariance over time, and (c) growth model for the continuous latent variable on a common scale. Algebraic implications of the model restrictions are derived, and practical aspects of fitting ordinal growth models are discussed with the help of an empirical example and Mx script ( M. C. Neale, S. M. Boker, G. Xie, & H. H. Maes, 1999). The necessary conditions for the identification of growth models with ordinal data and the methodological implications of the model of threshold invariance are discussed., (C) 2004 by the American Psychological AssociationRecent research points toward the viability of conceptualizing alcohol problems as arrayed along a continuum. Nevertheless, modern statistical techniques designed to scale multiple problems along a continuum (latent trait modeling; LTM) have rarely been applied to alcohol problems. 
This study applies LTM methods to data on 110 problems reported during in-person interviews of 1,348 middle-aged men (mean age = 43) from the general population. The results revealed a continuum of severity linking the 110 problems, ranging from heavy and abusive drinking, through tolerance and withdrawal, to serious complications of alcoholism. These results indicate that alcohol problems can be arrayed along a dimension of severity and emphasize the relevance of LTM to informing the conceptualization and assessment of alcohol problems., (C) 2004 by the American Psychological AssociationItem response theory (IRT) is supplanting classical test theory as the basis for measures development. This study demonstrated the utility of IRT for evaluating DSM-IV diagnostic criteria. Data on alcohol, cannabis, and cocaine symptoms from 372 adult clinical participants interviewed with the Composite International Diagnostic Interview-Expanded Substance Abuse Module (CIDI-SAM) were analyzed with Mplus ( B. Muthen & L. Muthen, 1998) and MULTILOG ( D. Thissen, 1991) software. Tolerance and legal problems criteria were dropped because of poor fit with a unidimensional model. Item response curves, test information curves, and testing of variously constrained models suggested that DSM-IV criteria in the CIDI-SAM discriminate between only impaired and less impaired cases and may not be useful to scale case severity. IRT can be used to study the construct validity of DSM-IV diagnoses and to identify diagnostic criteria with poor performance., (C) 2004 by the American Psychological AssociationThis study examined the psychometric characteristics of an index of substance use involvement using item response theory. The sample consisted of 292 men and 140 women who qualified for a Diagnostic and Statistical Manual of Mental Disorders (3rd ed., rev.; American Psychiatric Association, 1987) substance use disorder (SUD) diagnosis and 293 men and 445 women who did not qualify for a SUD diagnosis. The results indicated that men had a higher probability of endorsing substance use compared with women. The index significantly predicted health, psychiatric, and psychosocial disturbances as well as level of substance use behavior and severity of SUD after a 2-year follow-up. Finally, this index is a reliable and useful prognostic indicator of the risk for SUD and the medical and psychosocial sequelae of drug consumption., (C) 2002 by the American Psychological AssociationComparability, validity, and impact of loss of information of a computerized adaptive administration of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) were assessed in a sample of 140 Veterans Affairs hospital patients. The countdown method ( Butcher, Keller, & Bacon, 1985) was used to adaptively administer Scales L (Lie) and F (Frequency), the 10 clinical scales, and the 15 content scales. Participants completed the MMPI-2 twice, in 1 of 2 conditions: computerized conventional test-retest, or computerized conventional-computerized adaptive. Mean profiles and test-retest correlations across modalities were comparable. Correlations between MMPI-2 scales and criterion measures supported the validity of the countdown method, although some attenuation of validity was suggested for certain health-related items. Loss of information incurred with this mode of adaptive testing has minimal impact on test validity. 
Item and time savings were substantial., (C) 1999 by the American Psychological Association10aPsychometrics10aSubstance-Related Disorders1 aKahler, C W1 aStrong, D R1 aRead, J P1 aDe Boeck, P1 aWilson, M1 aActon, G S1 aPalfai, T P1 aWood, M D1 aMehta, P D1 aNeale, M C1 aFlay, B R1 aConklin, C A1 aClayton, R R1 aTiffany, S T1 aShiffman, S1 aKrueger, R F1 aNichol, P E1 aHicks, B M1 aMarkon, K E1 aPatrick, C J1 aIacono, William, G1 aMcGue, Matt1 aLangenbucher, J W1 aLabouvie, E1 aMartin, C S1 aSanjuan, P M1 aBavly, L1 aKirisci, L1 aChung, T1 aVanyukov, M1 aDunn, M1 aTarter, R1 aHandel, R W1 aBen-Porath, Y S1 aWatt, M uhttp://www.iacat.org/content/toward-efficient-and-comprehensive-measurement-alcohol-problems-continuum-college-students00540nas a2200121 4500008004100000020003800041245007000079210006500149260007200214100001600286700002700302856008900329 2005 eng d aComputerized Testing Report 97-1400aThe use of person-fit statistics in computerized adaptive testing0 ause of personfit statistics in computerized adaptive testing aNewton, PA. USAbLaw School Administration CouncilcSeptember, 20051 aMeijer, R R1 aKrimpen-Stoop, E M L A uhttp://www.iacat.org/content/use-person-fit-statistics-computerized-adaptive-testing01884nas a2200241 4500008004100000245006000041210005900101260004800160300001200208520112200220653001501342653003401357653002201391653002101413653001901434653001601453653001701469653001301486653002801499100001301527700001601540856008601556 2004 eng d00aAdaptive computerized educational systems: A case study0 aAdaptive computerized educational systems A case study aSan Diego, CA. USAbElsevier Academic Press a143-1693 a(Created by APA) Adaptive instruction describes adjustments typical of one-on-one tutoring as discussed in the college tutorial scenario. So computerized adaptive instruction refers to the use of computer software--almost always incorporating artificially intelligent services--which has been designed to adjust both the presentation of information and the form of questioning to meet the current needs of an individual learner. This chapter describes a system for Internet-delivered adaptive instruction. The author attempts to demonstrate a sharp difference between the teaching that takes place outside of the classroom in universities and the kind that is at least afforded, if not taken advantage of by many, students in a more personalized educational setting such as those in the small liberal arts colleges. The author describes a computer-based technology that allows that gap to be bridged with the advantage of at least having more highly prepared learners sitting in college classrooms. A limited range of emerging research that supports that proposition is cited. 
(PsycINFO Database Record (c) 2005 APA )10aArtificial10aComputer Assisted Instruction10aComputer Software10aHigher Education10aIndividualized10aInstruction10aIntelligence10aInternet10aUndergraduate Education1 aRay, R D1 aMalott, R W uhttp://www.iacat.org/content/adaptive-computerized-educational-systems-case-study00521nam a2200097 4500008004100000245010500041210006900146260006900215100001700284856012200301 2004 eng d00aThe application of cognitive diagnosis and computerized adaptive testing to a large-scale assessment0 aapplication of cognitive diagnosis and computerized adaptive tes aUnpublished doctoral dissertation, University of Texas at Austin1 aMcGlohen, MK uhttp://www.iacat.org/content/application-cognitive-diagnosis-and-computerized-adaptive-testing-large-scale-assessment00506nas a2200121 4500008004100000245009000041210006900131260001700200100001700217700001900234700001500253856011600268 2004 eng d00aCombining computer adaptive testing technology with cognitively diagnostic assessment0 aCombining computer adaptive testing technology with cognitively aSan Diego CA1 aMcGlohen, MK1 aChang, Hua-Hua1 aWills, J T uhttp://www.iacat.org/content/combining-computer-adaptive-testing-technology-cognitively-diagnostic-assessment-002593nas a2200469 4500008004100000245007200041210006900113300001000182490000600192520108600198653002501284653001001309653001501319653002101334653002201355653005901377653007001436653003301506653001101539653001101550653001301561653000901574653002701583653002201610653005501632653001901687653001501706653006601721653001801787653003701805653004101842653003001883653001301913100001501926700001301941700001801954700001501972700001401987700001402001700001302015856009502028 2004 eng d00aComputerized adaptive measurement of depression: A simulation study0 aComputerized adaptive measurement of depression A simulation stu a13-230 v43 aBackground: Efficient, accurate instruments for measuring depression are increasingly important in clinical practice. We developed a computerized adaptive version of the Beck Depression Inventory (BDI). We examined its efficiency and its usefulness in identifying Major Depressive Episodes (MDE) and in measuring depression severity. Methods: Subjects were 744 participants in research studies in which each subject completed both the BDI and the SCID. In addition, 285 patients completed the Hamilton Depression Rating Scale. Results: The adaptive BDI had an AUC as an indicator of a SCID diagnosis of MDE of 88%, equivalent to the full BDI. The adaptive BDI asked fewer questions than the full BDI (5.6 versus 21 items). The adaptive latent depression score correlated r = .92 with the BDI total score and the latent depression score correlated more highly with the Hamilton (r = .74) than the BDI total score did (r = .70). Conclusions: Adaptive testing for depression may provide greatly increased efficiency without loss of accuracy in identifying MDE or in measuring depression severity.10a*Computer Simulation10aAdult10aAlgorithms10aArea Under Curve10aComparative Study10aDepressive Disorder/*diagnosis/epidemiology/psychology10aDiagnosis, Computer-Assisted/*methods/statistics & numerical data10aFactor Analysis, Statistical10aFemale10aHumans10aInternet10aMale10aMass Screening/methods10aPatient Selection10aPersonality Inventory/*statistics & numerical data10aPilot Projects10aPrevalence10aPsychiatric Status Rating Scales/*statistics & numerical data10aPsychometrics10aResearch Support, Non-U.S. Gov't10aResearch Support, U.S. 
Gov't, P.H.S.10aSeverity of Illness Index10aSoftware1 aGardner, W1 aShear, K1 aKelleher, K J1 aPajer, K A1 aMammen, O1 aBuysse, D1 aFrank, E uhttp://www.iacat.org/content/computerized-adaptive-measurement-depression-simulation-study01773nas a2200193 4500008004100000245023000041210006900271300000900340490000700349520093900356653002101295653003001316653001801346653001601364653004201380100001201422700001901434856012601453 2004 eng d00aKann die Konfundierung von Konzentrationsleistung und Aktivierung durch adaptives Testen mit dern FAKT vermieden werden? [Avoiding the confounding of concentration performance and activation by adaptive testing with the FACT]0 aKann die Konfundierung von Konzentrationsleistung und Aktivierun a1-170 v253 aThe study investigates the effect of computerized adaptive testing strategies on the confounding of concentration performance with activation. A sample of 54 participants was administered 1 out of 3 versions (2 adaptive, 1 non-adaptive) of the computerized Frankfurt Adaptive Concentration Test FACT (Moosbrugger & Heyden, 1997) at three subsequent points in time. During the test administration changes in activation (electrodermal activity) were recorded. The results pinpoint a confounding of concentration performance with activation for the non-adaptive test version, but not for the adaptive test versions (p = .01). Thus, adaptive FACT testing strategies can remove the confounding of concentration performance with activation, thereby increasing the discriminant validity. In conclusion, an attention-focusing-hypothesis is formulated to explain the observed effect. (PsycINFO Database Record (c) 2005 APA ) (journal abstract)10aAdaptive Testing10aComputer Assisted Testing10aConcentration10aPerformance10aTesting computerized adaptive testing1 aFrey, A1 aMoosbrugger, H uhttp://www.iacat.org/content/kann-die-konfundierung-von-konzentrationsleistung-und-aktivierung-durch-adaptives-testen-mit00585nas a2200133 4500008004100000245010800041210006900149260003200218100002400250700001700274700001800291700001600309856012600325 2004 eng d00aA learning environment for english for academic purposes based on adaptive tests and task-based systems0 alearning environment for english for academic purposes based on b Springer Berlin Heidelberg1 aPITON-GONÇALVES, J1 aALUISIO, S M1 aMENDONCA, L H1 aNOVAES, O O uhttp://www.iacat.org/content/learning-environment-english-academic-purposes-based-adaptive-tests-and-task-based-systems-000456nas a2200109 4500008004100000245007900041210006900120260001700189100002300206700001800229856009900247 2004 eng d00aA sequential Bayesian procedure for item calibration in multistage testing0 asequential Bayesian procedure for item calibration in multistage aSan Diego CA1 avan der Linden, WJ1 aMead, Alan, D uhttp://www.iacat.org/content/sequential-bayesian-procedure-item-calibration-multistage-testing00541nas a2200181 4500008004100000245005000041210004800091300001000139490000700149653003400156100001400190700001500204700001500219700001400234700002600248700001300274856007200287 2004 eng d00aSiette: a web-based tool for adaptive testing0 aSiette a webbased tool for adaptive testing a29-610 v1410acomputerized adaptive testing1 aConejo, R1 aGuzmán, E1 aMillán, E1 aTrella, M1 aPérez-De-La-Cruz, JL1 aRíos, A uhttp://www.iacat.org/content/siette-web-based-tool-adaptive-testing01879nas a2200169 4500008004100000245013100041210006900172300001200241490000700253520122700260653003001487653002501517653001501542653001601557100001601573856012001589 2004 eng d00aUsing patterns 
of summed scores in paper-and-pencil tests and computer-adaptive tests to detect misfitting item score patterns0 aUsing patterns of summed scores in paperandpencil tests and comp a119-1360 v413 aTwo new methods have been proposed to determine unexpected sum scores on subtests (testlets) both for paper-and-pencil tests and computer adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted ρ, was compared with a method where the probability for each score combination was calculated using a highest density region (HDR). Furthermore, these methods were compared with the standardized log-likelihood statistic with and without a correction for the estimated latent trait value (denoted as l-super(*)-sub(z) and l-sub(z), respectively). Data were simulated on the basis of the one-parameter logistic model, and both parametric and nonparametric logistic regression was used to obtain estimates of the latent trait. Results showed that it is important to take the trait level into account when comparing subtest scores. In a nonparametric item response theory (IRT) context, an adapted version of the HDR method was a powerful alternative to ρ. In a parametric IRT context, results showed that l-super(*)-sub(z) had the highest power when the data were simulated conditionally on the estimated latent trait level. (PsycINFO Database Record (c) 2005 APA ) (journal abstract)10aComputer Assisted Testing10aItem Response Theory10aperson Fit10aTest Scores1 aMeijer, R R uhttp://www.iacat.org/content/using-patterns-summed-scores-paper-and-pencil-tests-and-computer-adaptive-tests-detect00492nas a2200121 4500008004100000245009400041210006900135300001500204490000700219100002000226700001500246856010900261 2003 eng d00aA Bayesian method for the detection of item preknowledge in computerized adaptive testing0 aBayesian method for the detection of item preknowledge in comput a2, 121-1370 v271 aMcLeod L. D., C1 aThissen, D uhttp://www.iacat.org/content/bayesian-method-detection-item-preknowledge-computerized-adaptive-testing-001923nas a2200241 4500008004100000245009400041210006900135300001200204490000700216520110100223653002101324653001301345653003001358653005701388653000901445653003201454653002601486653002001512100001401532700001301546700001501559856010701574 2003 eng d00aA Bayesian method for the detection of item preknowledge in computerized adaptive testing0 aBayesian method for the detection of item preknowledge in comput a121-1370 v273 aWith the increased use of continuous testing in computerized adaptive testing, new concerns about test security have evolved, such as how to ensure that items in an item pool are safeguarded from theft. In this article, procedures to detect test takers using item preknowledge are explored. When test takers use item preknowledge, their item responses deviate from the underlying item response theory (IRT) model, and estimated abilities may be inflated. This deviation may be detected through the use of person-fit indices. A Bayesian posterior log odds ratio index is proposed for detecting the use of item preknowledge. In this approach to person fit, the estimated probability that each test taker has preknowledge of items is updated after each item response. These probabilities are based on the IRT parameters, a model specifying the probability that each item has been memorized, and the test taker's item responses. Simulations based on an operational computerized adaptive test (CAT) pool are used to demonstrate the use of the odds ratio index. 
(PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10aCheating10aComputer Assisted Testing10aIndividual Differences computerized adaptive testing10aItem10aItem Analysis (Statistical)10aMathematical Modeling10aResponse Theory1 aMcLeod, L1 aLewis, C1 aThissen, D uhttp://www.iacat.org/content/bayesian-method-detection-item-preknowledge-computerized-adaptive-testing00429nas a2200097 4500008004100000245008100041210006900122260001900191100001700210856010400227 2003 eng d00aCan We Assess Pre-K Kids With Computer-Based Tests: STAR Early Literacy Data0 aCan We Assess PreK Kids With ComputerBased Tests STAR Early Lite aSan Antonio TX1 aMcBride, J R uhttp://www.iacat.org/content/can-we-assess-pre-k-kids-computer-based-tests-star-early-literacy-data03603nas a2200121 4500008004100000245008800041210006900129300000800198490000700206520312600213100002903339856011303368 2003 eng d00aComputer-adaptive test for measuring personality factors using item response theory0 aComputeradaptive test for measuring personality factors using it a9990 v643 aThe aim of the present research was to develop a computer adaptive test with the graded response model to measure the Five Factor Model of personality attributes. In the first of three studies, simulated items and simulated examinees were used to investigate systematically the impact of several variables on the accuracy and efficiency of a computer adaptive test. Item test banks containing more items, items with greater trait discrimination, and more response options resulted in increased accuracy and efficiency of the computer adaptive test. It was also found that large stopping rule values required fewer items before stopping but had less accuracy compared to smaller stopping rule values. This demonstrated a trade-off between accuracy and efficiency such that greater measurement accuracy can be obtained at a cost of decreased test efficiency. In the second study, the archival responses of 501 participants to five 30-item test banks measuring the Five Factor Model of personality were utilized in simulations of a computer adaptive personality test. The computer adaptive test estimates of participant trait scores were highly correlated with the item response theory trait estimates, and the magnitude of the correlation was related directly to the stopping rule value with higher correlations and less measurement error being associated with smaller stopping rule values. It was also noted that the performance of the computer adaptive test was dependent on the personality factor being measured whereby Conscientiousness required the most number of items to be administered and Neuroticism required the least. The results confirmed that a simulated computer adaptive test using archival personality data could accurately and efficiently attain trait estimates. In the third study, 276 student participants selected response options with a click of a mouse in a computer adaptive personality test (CAPT) measuring the Big Five factors of the Five Factor Model of personality structure. Participant responses to alternative measures of the Big Five were also collected using conventional paper-and-pencil personality questionnaires. It was found that the CAPT obtained trait estimates that were very accurate even with very few administered items. 
Similarly, the CAPT trait estimates demonstrated moderate to high concurrent validity with the alternative Big Five measures, and the strength of the estimates varied as a result of the similarity of the personality items and assessment methodology. It was also found that the computer adaptive test was accurately able to detect, with relatively few items, the relations between the measured personality traits and several socially interesting variables such as smoking behavior, alcohol consumption rating, and number of dates per month. Implications of the results of this research are discussed in terms of the utility of computer adaptive testing of personality characteristics. As well, methodological limitations of the studies are noted and directions for future research are considered. (PsycINFO Database Record (c) 2004 APA, all rights reserved).1 aMacdonald, Paul Lawrence uhttp://www.iacat.org/content/computer-adaptive-test-measuring-personality-factors-using-item-response-theory00520nas a2200157 4500008004100000245007400041210006900115490000700184100001500191700001200206700001100218700001100229700001800240700001600258856008800274 2003 eng d00aA feasibility study of on-the-fly item generation in adaptive testing0 afeasibility study of onthefly item generation in adaptive testin0 v2 1 aBejar, I I1 aLawless1 aMorley1 aWagner1 aBennett R. E.1 aRevuelta, J uhttp://www.iacat.org/content/feasibility-study-fly-item-generation-adaptive-testing00520nas a2200169 4500008004100000245003700041210003700078260004900115300001400164653003400178100001800212700001300230700001700243700001200260700001500272856006300287 2003 eng d00aItem selection in polytomous CAT0 aItem selection in polytomous CAT aTokyo, JapanbPsychometric Society, Springer a207–21410acomputerized adaptive testing1 aVeldkamp, B P1 aOkada, A1 aShigenasu, K1 aKano, Y1 aMeulman, J uhttp://www.iacat.org/content/item-selection-polytomous-cat00465nas a2200109 4500008004100000245007700041210006900118260003000187100002300217700001800240856009700258 2003 eng d00aA sequential Bayes procedure for item calibration in multi-stage testing0 asequential Bayes procedure for item calibration in multistage te aManuscript in preparation1 avan der Linden, WJ1 aMead, Alan, D uhttp://www.iacat.org/content/sequential-bayes-procedure-item-calibration-multi-stage-testing01601nas a2200205 4500008004100000245009400041210006900135260001000204300001200214490000800226520086400234653003001098653000901128653003401137653001101171653003501182653004501217100001801262856011501280 2003 eng d00aTen recommendations for advancing patient-centered outcomes measurement for older persons0 aTen recommendations for advancing patientcentered outcomes measu cSep 2 a403-4090 v1393 aThe past 50 years have seen great progress in the measurement of patient-based outcomes for older populations. Most of the measures now used were created under the umbrella of a set of assumptions and procedures known as classical test theory. A recent alternative for health status assessment is item response theory. Item response theory is superior to classical test theory because it can eliminate test dependency and achieve more precise measurement through computerized adaptive testing. Computerized adaptive testing reduces test administration times and allows varied and precise estimates of ability. Several key challenges must be met before computerized adaptive testing becomes a productive reality. 
I discuss these challenges for the health assessment of older persons in the form of 10 "Ds": things we need to deliberate, debate, decide, and do.10a*Health Status Indicators10aAged10aGeriatric Assessment/*methods10aHumans10aPatient-Centered Care/*methods10aResearch Support, U.S. Gov't, Non-P.H.S.1 aMcHorney, C A uhttp://www.iacat.org/content/ten-recommendations-advancing-patient-centered-outcomes-measurement-older-persons01641nas a2200133 4500008004100000245008400041210006900125300001200194490000700206520114900213100002701362700001601389856010201405 2002 eng d00aDetection of person misfit in computerized adaptive tests with polytomous items0 aDetection of person misfit in computerized adaptive tests with p a164-1800 v263 aItem scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. For a computerized adaptive test (CAT) using dichotomous items, several person-fit statistics for detecting misfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of person misfit with polytomous items is hardly explored. In this study, the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items are compared both for P&P tests and CATs. Results showed that the empirical distribution of this statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Second, a new person-fit statistic based on the cumulative sum (CUSUM) procedure from statistical process control was proposed. By means of simulated data, critical values were determined that can be used to classify a pattern as fitting or misfitting. The effectiveness of the CUSUM to detect simulees with item preknowledge was investigated. Detection rates using the CUSUM were high for realistic numbers of disclosed items. 1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/detection-person-misfit-computerized-adaptive-tests-polytomous-items00325nas a2200097 4500008004100000245004300041210003900084260002200123100001700145856006500162 2002 eng d00aThe Development of STAR Early Literacy0 aDevelopment of STAR Early Literacy aDesert Springs CA1 aMcBride, J R uhttp://www.iacat.org/content/development-star-early-literacy00694nas a2200157 4500008003900000245010200039210006900141260011300210100001500323700001700338700001600355700001600371700001700387700001600404856011600420 2002 d00aA feasibility study of on-the-fly item generation in adaptive testing (GRE Board Report No 98-12)0 afeasibility study of onthefly item generation in adaptive testin aEducational Testing Service RR02-23. Princeton NJ: Educational Testing Service. 
1 aBejar, I I1 aLawless, R R1 aMorley, M E1 aWagner, M E1 aBennett, R E1 aRevuelta, J uhttp://www.iacat.org/content/feasibility-study-fly-item-generation-adaptive-testing-gre-board-report-no-98-12-000409nas a2200097 4500008004100000245007500041210006900116100001700185700001700202856009200219 2002 eng d00aMapping the Development of Pre-reading Skills with STAR Early Literacy0 aMapping the Development of Prereading Skills with STAR Early Lit1 aMcBride, J R1 aTardrew, S P uhttp://www.iacat.org/content/mapping-development-pre-reading-skills-star-early-literacy01631nas a2200241 4500008004100000245005900041210005800100300001200158490000700170520087100177653002101048653003401069653002801103653002001131653003201151653002501183653001501208653002701223653002201250653001601272100001601288856008501304 2002 eng d00aOutlier detection in high-stakes certification testing0 aOutlier detection in highstakes certification testing a219-2330 v393 aDiscusses recent developments of person-fit analysis in computerized adaptive testing (CAT). Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in CAT. Most person-fit research in CAT is restricted to simulated data. In this study, empirical data from a certification test were used. Alternatives are discussed to generate norms so that bounds can be determined to classify an item score pattern as fitting or misfitting. Using bounds determined from a sample of a high-stakes certification test, the empirical analysis showed that different types of misfit can be distinguished. Further applications using statistical process control methods to detect misfitting item score patterns are discussed. 
(PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10acomputerized adaptive testing10aEducational Measurement10aGoodness of Fit10aItem Analysis (Statistical)10aItem Response Theory10aperson Fit10aStatistical Estimation10aStatistical Power10aTest Scores1 aMeijer, R R uhttp://www.iacat.org/content/outlier-detection-high-stakes-certification-testing00568nas a2200133 4500008004100000245012500041210006900166260001900235100001400254700001800268700001600286700001200302856012000314 2002 eng d00aThe robustness of the unidimensional 3PL IRT model when applied to two-dimensional data in computerized adaptive testing0 arobustness of the unidimensional 3PL IRT model when applied to t aNew Orleans LA1 aZhao, J C1 aMcMorris, R F1 aPruzek, R M1 aChen, R uhttp://www.iacat.org/content/robustness-unidimensional-3pl-irt-model-when-applied-two-dimensional-data-computerized00508nas a2200097 4500008004100000245008000041210006900121260010600190100001600296856009800312 2001 eng d00aApplication of data mining to response data in a computerized adaptive test0 aApplication of data mining to response data in a computerized ad aPaper presented at the Annual Meeting of the National Council on Measurement in Education, Seattle WA1 aMendez, F A uhttp://www.iacat.org/content/application-data-mining-response-data-computerized-adaptive-test02224nas a2200145 4500008004100000245011600041210006900157300001100226490000600237520165200243653003401895100001301929700001601942856012001958 2001 eng d00aAssessment in the twenty-first century: A role of computerised adaptive testing in national curriculum subjects0 aAssessment in the twentyfirst century A role of computerised ada a241-570 v53 aWith the investment of large sums of money in new technologies for schools and education authorities and the subsequent training of teachers to integrate Information and Communications Technology (ICT) into their teaching strategies, it is remarkable that the old outdated models of assessment still remain. This article highlights the current problems associated with pen-and-paper testing and offers suggestions for an innovative and new approach to assessment for the twenty-first century. Based on the principle of the 'wise examiner', a computerised adaptive testing system which measures pupils' ability against the levels of the United Kingdom National Curriculum has been developed for use in mathematics. Using constructed response items, pupils are administered a test tailored to their ability with a reliability index of 0.99. Since the software administers maximally informative questions matched to each pupil's current ability estimate, no two pupils will receive the same set of items in the same order, therefore removing opportunities for plagiarism and teaching to the test. All marking is automated and a journal recording the outcome of the test and highlighting the areas of difficulty for each pupil is available for printing by the teacher. The current prototype of the system can be used on a school's network; however, the authors envisage a day when Examination Boards or the Qualifications and Assessment Authority (QCA) will administer Government tests from a central server to all United Kingdom schools or testing centres. 
Results will be issued at the time of testing and opportunities for resits will become more widespr10acomputerized adaptive testing1 aCowan, P1 aMorrison, H uhttp://www.iacat.org/content/assessment-twenty-first-century-role-computerised-adaptive-testing-national-curriculum01039nas a2200145 4500008004100000245021400041210006900255300001100324490000600335520038100341100001600722700001600738700001700754856012200771 2001 eng d00aConcerns with computerized adaptive oral proficiency assessment. A commentary on "Comparing examinee attitudes Toward computer-assisted and other oral proficient assessments": Response to the Norris Commentary0 aConcerns with computerized adaptive oral proficiency assessment a95-1080 v53 aResponds to an article on computerized adaptive second language (L2) testing, expressing concerns about the appropriateness of such tests for informing language educators about the language skills of L2 learners and users and fulfilling the intended purposes and achieving the desired consequences of language test use.The authors of the original article respond. (Author/VWL)1 aNorris, J M1 aKenyon, D M1 aMalabonga, V uhttp://www.iacat.org/content/concerns-computerized-adaptive-oral-proficiency-assessment-commentary-comparing-examinee00425nas a2200121 4500008004100000245005900041210005700100300001200157490000700169100002700176700001600203856008400219 2001 eng d00aCUSUM-based person-fit statistics for adaptive testing0 aCUSUMbased personfit statistics for adaptive testing a199-2180 v261 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/cusum-based-person-fit-statistics-adaptive-testing02101nas a2200337 4500008004100000245014400041210006900185300001200254490000700266520096600273653002501239653003601264653002501300653001001325653003001335653001101365653001001376653000901386653003101395653003201426653003601458653003401494653002001528100001601548700001401564700001601578700001901594700001301613700001501626856012201641 2001 eng d00aAn examination of the comparative reliability, validity, and accuracy of performance ratings made using computerized adaptive rating scales0 aexamination of the comparative reliability validity and accuracy a965-9730 v863 aThis laboratory research compared the reliability, validity, and accuracy of a computerized adaptive rating scale (CARS) format and 2 relatively common and representative rating formats. The CARS is a paired-comparison rating task that uses adaptive testing principles to present pairs of scaled behavioral statements to the rater to iteratively estimate a ratee's effectiveness on 3 dimensions of contextual performance. Videotaped vignettes of 6 office workers were prepared, depicting prescripted levels of contextual performance, and 112 subjects rated these vignettes using the CARS format and one or the other competing format. Results showed 23%-37% lower standard errors of measurement for the CARS format. In addition, validity was significantly higher for the CARS format (d = .18), and Cronbach's accuracy coefficients showed significantly higher accuracy, with a median effect size of .08. The discussion focuses on possible reasons for the results.10a*Computer Simulation10a*Employee Performance Appraisal10a*Personnel Selection10aAdult10aAutomatic Data Processing10aFemale10aHuman10aMale10aReproducibility of Results10aSensitivity and Specificity10aSupport, U.S. 
Gov't, Non-P.H.S.10aTask Performance and Analysis10aVideo Recording1 aBorman, W C1 aBuck, D E1 aHanson, M A1 aMotowidlo, S J1 aStark, S1 aDrasgow, F uhttp://www.iacat.org/content/examination-comparative-reliability-validity-and-accuracy-performance-ratings-made-using00401nas a2200097 4500008004100000245007100041210006900112260001500181100001600196856009100212 2001 eng d00aMethods to test invariant ability across subgroups of items in CAT0 aMethods to test invariant ability across subgroups of items in C aSeattle WA1 aMeijer, R R uhttp://www.iacat.org/content/methods-test-invariant-ability-across-subgroups-items-cat01859nas a2200193 4500008004100000245012400041210007100165300001200236490000700248520110700255653002101362653002601383653002201409653001401431653005901445100001601504700001701520856012801537 2001 eng d00aNouveaux développements dans le domaine du testing informatisé [New developments in the area of computerized testing]0 aNouveaux développements dans le domaine du testing informatisé N a221-2300 v463 aL'usage de l'évaluation assistée par ordinateur s'est fortement développé depuis la première formulation de ses principes de base dans les années soixante et soixante-dix. Cet article offre une introduction aux derniers développements dans le domaine de l'évaluation assistée par ordinateur, en particulier celui du testing adaptative informatisée (TAI). L'estimation de l'aptitude, la sélection des items et le développement d'une base d'items dans le cas du TAI sont discutés. De plus, des exemples d'utilisations innovantes de l'ordinateur dans des systèmes intégrés de testing et de testing via Internet sont présentés. L'article se termine par quelques illustrations de nouvelles applications du testing informatisé et des suggestions pour des recherches futures.Discusses the latest developments in computerized psychological assessment, with emphasis on computerized adaptive testing (CAT). Ability estimation, item selection, and item pool development in CAT are described. Examples of some innovative approaches to CAT are presented. 
(PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10aComputer Applications10aComputer Assisted10aDiagnosis10aPsychological Assessment computerized adaptive testing1 aMeijer, R R1 aGrégoire, J uhttp://www.iacat.org/content/nouveaux-d%C3%A9veloppements-dans-le-domaine-du-testing-informatis%C3%A9-new-developments-area00722nas a2200145 4500008004100000245020100041210006900242260005700311100001700368700001700385700001400402700002000416700001600436856012400452 2001 eng d00aTesting via the Internet: A literature review and analysis of issues for Department of Defense Internet testing of the Armed Services Vocational Aptitude Battery (ASVAB) in high schools (FR-01-12)0 aTesting via the Internet A literature review and analysis of iss aAlexandria VA: Human Resources Research Organization1 aMcBride, J R1 aPaddock, A F1 aWise, L L1 aStrickland, W J1 aWaters, B K uhttp://www.iacat.org/content/testing-internet-literature-review-and-analysis-issues-department-defense-internet-testing01591nas a2200205 4500008004100000245016500041210006900206300001200275490000700287520076900294653002101063653002601084653003001110653002501140653005101165100001301216700001701229700002101246856011801267 2001 eng d00aToepassing van een computergestuurde adaptieve testprocedure op persoonlijkheidsdata [Application of a computerised adaptive test procedure on personality data]0 aToepassing van een computergestuurde adaptieve testprocedure op a119-1330 v563 aStudied the applicability of a computerized adaptive testing procedure to an existing personality questionnaire within the framework of item response theory. The procedure was applied to the scores of 1,143 male and female university students (mean age 21.8 yrs) in the Netherlands on the Neuroticism scale of the Amsterdam Biographical Questionnaire (G. J. Wilde, 1963). The graded response model (F. Samejima, 1969) was used. The quality of the adaptive test scores was measured based on their correlation with test scores for the entire item bank and on their correlation with scores on other scales from the personality test. The results indicate that computerized adaptive testing can be applied to personality scales. (PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10aComputer Applications10aComputer Assisted Testing10aPersonality Measures10aTest Reliability computerized adaptive testing1 aHol, A M1 aVorst, H C M1 aMellenbergh, G J uhttp://www.iacat.org/content/toepassing-van-een-computergestuurde-adaptieve-testprocedure-op-persoonlijkheidsdata00587nam a2200181 4500008004100000245005800041210005500099260005100154100001100205700001400216700001600230700001600246700001400262700001500276700001700291700001500308856008200323 2000 eng d00aComputerized adaptive testing: A primer (2nd edition)0 aComputerized adaptive testing A primer 2nd edition aHillsdale, N. J. : Lawrence Erlbaum Associates1 aWainer1 aDorans, N1 aEignor, D R1 aFlaugher, R1 aGreen, BF1 aMislevy, R1 aSteinberg, L1 aThissen, D uhttp://www.iacat.org/content/computerized-adaptive-testing-primer-2nd-edition00575nas a2200133 4500008004100000245009300041210006900134260004900203300001200252653001500264100002700279700001600306856011900322 2000 eng d00aDetecting person misfit in adaptive testing using statistical process control techniques0 aDetecting person misfit in adaptive testing using statistical pr aDordrecht, The NetherlandsbKluwer Academic. 
a201-21910aperson Fit1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/detecting-person-misfit-adaptive-testing-using-statistical-process-control-techniques00604nas a2200109 4500008004100000245009300041210006900134260012700203100002700330700001600357856012100373 2000 eng d00aDetecting person misfit in adaptive testing using statistical process control techniques0 aDetecting person misfit in adaptive testing using statistical pr aW. J. van der Linden, and C. A. W. Glas (Editors). Computerized Adaptive Testing: Theory and Practice. Norwell MA: Kluwer.1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/detecting-person-misfit-adaptive-testing-using-statistical-process-control-techniques-000518nas a2200109 4500008004100000245012100041210006900162260001900231100001700250700001600267856012500283 2000 eng d00aDetecting test-takers who have memorized items in computerized-adaptive testing and muti-stage testing: A comparison0 aDetecting testtakers who have memorized items in computerizedada aNew Orleans LA1 aPatsula, L N1 aMcLeod, L D uhttp://www.iacat.org/content/detecting-test-takers-who-have-memorized-items-computerized-adaptive-testing-and-muti-stage00643nas a2200109 4500008004100000245011000041210006900151260014400220100002700364700001600391856012600407 2000 eng d00aDetection of person misfit in computerized adaptive testing with polytomous items (Research Report 00-01)0 aDetection of person misfit in computerized adaptive testing with aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/detection-person-misfit-computerized-adaptive-testing-polytomous-items-research-report-00-0100407nas a2200121 4500008004100000245005400041210005300095300001200148490000700160100001700167700001900184856008200203 2000 eng d00aDoes adaptive testing violate local independence?0 aDoes adaptive testing violate local independence a149-1560 v651 aMislevy, R J1 aChang, Hua-Hua uhttp://www.iacat.org/content/does-adaptive-testing-violate-local-independence00512nas a2200109 4500008004100000245005500041210005000096260014700146100001500293700001500308856007900323 2000 eng d00aThe GRE computer adaptive test: Operational issues0 aGRE computer adaptive test Operational issues aW. J. van der Linden and C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 75-99). Dordrecht, Netherlands: Kluwer.1 aMills, C N1 aSteffen, M uhttp://www.iacat.org/content/gre-computer-adaptive-test-operational-issues01774nas a2200289 4500008004100000245007700041210006900118300001400187490000700201520080000208653002501008653003101033653003701064653003801101653001901139653001001158653002701168653004601195653002001241653002801261653003201289653001801321100001401339700001701353700001501370856009901385 2000 eng d00aItem response theory and health outcomes measurement in the 21st century0 aItem response theory and health outcomes measurement in the 21st aII28-II420 v383 aItem response theory (IRT) has a number of potential advantages over classical test theory in assessing self-reported health outcomes. IRT models yield invariant item and latent trait estimates (within a linear transformation), standard errors conditional on trait level, and trait estimates anchored to item content. 
IRT also facilitates evaluation of differential item functioning, inclusion of items with different response formats in the same scale, and assessment of person fit and is ideally suited for implementing computer adaptive testing. Finally, IRT methods can be helpful in developing better health outcome measures and in assessing change over time. These issues are reviewed, along with a discussion of some of the methodological and practical challenges in applying IRT methods.10a*Models, Statistical10aActivities of Daily Living10aData Interpretation, Statistical10aHealth Services Research/*methods10aHealth Surveys10aHuman10aMathematical Computing10aOutcome Assessment (Health Care)/*methods10aResearch Design10aSupport, Non-U.S. Gov't10aSupport, U.S. Gov't, P.H.S.10aUnited States1 aHays, R D1 aMorales, L S1 aReise, S P uhttp://www.iacat.org/content/item-response-theory-and-health-outcomes-measurement-21st-century00486nas a2200121 4500008004100000245008700041210006900128300001200197490000700209100002700216700001600243856010500259 2000 eng d00aThe null distribution of person-fit statistics for conventional and adaptive tests0 anull distribution of personfit statistics for conventional and a a327-3450 v231 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/null-distribution-person-fit-statistics-conventional-and-adaptive-tests00546nas a2200133 4500008004100000245005900041210005900100260009900159100001400258700001400272700002700286700001400313856008500327 2000 eng d00aUsing Bayesian Networks in Computerized Adaptive Tests0 aUsing Bayesian Networks in Computerized Adaptive Tests aM. Ortega and J. Bravo (Eds.),Computers and Education in the 21st Century. Kluwer, pp. 217228.1 aMillan, E1 aTrella, M1 aPerez-de-la-Cruz, J -L1 aConejo, R uhttp://www.iacat.org/content/using-bayesian-networks-computerized-adaptive-tests00441nas a2200109 4500008004100000245007500041210006900116260002100185100001100206700001300217856010100230 1999 eng d00aAdjusting computer adaptive test starting points to conserve item pool0 aAdjusting computer adaptive test starting points to conserve ite aMontreal, Canada1 aZhu, D1 aM., Fan. uhttp://www.iacat.org/content/adjusting-computer-adaptive-test-starting-points-conserve-item-pool00512nas a2200121 4500008004100000245009600041210006900137300000900206490000700215653003400222100002300256856011100279 1999 eng d00aAlternative methods for the detection of item preknowledge in computerized adaptive testing0 aAlternative methods for the detection of item preknowledge in co a37650 v5910acomputerized adaptive testing1 aMcLeod, Lori Davis uhttp://www.iacat.org/content/alternative-methods-detection-item-preknowledge-computerized-adaptive-testing00915nas a2200145 4500008004100000245006100041210006000102300001100162490000700173520043400180653003400614100001600648700001600664856008900680 1999 eng d00aComputerized Adaptive Testing: Overview and Introduction0 aComputerized Adaptive Testing Overview and Introduction a187-940 v233 aUse of computerized adaptive testing (CAT) has increased substantially since it was first formulated in the 1970s. This paper provides an overview of CAT and introduces the contributions to this Special Issue. The elements of CAT discussed here include item selection procedures, estimation of the latent trait, item exposure, measurement precision, and item bank development. Some topics for future research are also presented. 
10acomputerized adaptive testing1 aMeijer, R R1 aNering, M L uhttp://www.iacat.org/content/computerized-adaptive-testing-overview-and-introduction00426nas a2200121 4500008004100000245006100041210006000102300001200162490000700174100001600181700001600197856009100213 1999 eng d00aComputerized adaptive testing: Overview and introduction0 aComputerized adaptive testing Overview and introduction a187-1940 v231 aMeijer, R R1 aNering, M L uhttp://www.iacat.org/content/computerized-adaptive-testing-overview-and-introduction-000427nas a2200121 4500008004100000245005900041210005700100300001200157490000700169100002700176700001600203856008600219 1999 eng d00aCUSUM-based person-fit statistics for adaptive testing0 aCUSUMbased personfit statistics for adaptive testing a199-2180 v261 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/cusum-based-person-fit-statistics-adaptive-testing-000596nas a2200109 4500008004100000245008300041210006900124260014400193100002700337700001600364856010600380 1999 eng d00aCUSUM-based person-fit statistics for adaptive testing (Research Report 99-05)0 aCUSUMbased personfit statistics for adaptive testing Research Re aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/cusum-based-person-fit-statistics-adaptive-testing-research-report-99-0500399nas a2200121 4500008004100000245005500041210005500096300001200151490000700163100001700170700001300187856007700200 1999 eng d00aDetecting item memorization in the CAT environment0 aDetecting item memorization in the CAT environment a147-1600 v231 aMcLeod L. D.1 aLewis, C uhttp://www.iacat.org/content/detecting-item-memorization-cat-environment00427nas a2200109 4500008004100000245006800041210006800109260002100177100001600198700001800214856008500232 1999 eng d00aDetecting items that have been memorized in the CAT environment0 aDetecting items that have been memorized in the CAT environment aMontreal, Canada1 aMcLeod, L D1 aSchinpke, D L uhttp://www.iacat.org/content/detecting-items-have-been-memorized-cat-environment00563nas a2200097 4500008004100000245009700041210006900138260012200207100001500329856012100344 1999 eng d00aDevelopment and introduction of a computer adaptive Graduate Record Examination General Test0 aDevelopment and introduction of a computer adaptive Graduate Rec aF. Drasgow and J .B. Olson-Buchanan (Eds.). Innovations in computerized assessment (pp. 117-135). Mahwah NJ: Erlbaum.1 aMills, C N uhttp://www.iacat.org/content/development-and-introduction-computer-adaptive-graduate-record-examination-general-test00594nas a2200109 4500008004100000245011100041210006900152260010500221100001600326700001600342856012600358 1999 eng d00aDevelopment of the computerized adaptive testing version of the Armed Services Vocational Aptitude Battery0 aDevelopment of the computerized adaptive testing version of the aF. Drasgow and J. Olson-Buchanan (Eds.). Innovations in computerized assessment. 
Mahwah NJ: Erlbaum.1 aSegall, D O1 aMoreno, K E uhttp://www.iacat.org/content/development-computerized-adaptive-testing-version-armed-services-vocational-aptitude-battery00754nas a2200145 4500008004100000245005500041210005500096300001100151490000700162520028800169653003400457100001600491700001700507856008400524 1999 eng d00aGraphical models and computerized adaptive testing0 aGraphical models and computerized adaptive testing a223-370 v233 aConsiders computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)10acomputerized adaptive testing1 aAlmond, R G1 aMislevy, R J uhttp://www.iacat.org/content/graphical-models-and-computerized-adaptive-testing00488nas a2200121 4500008004100000245008700041210006900128300001200197490000700209100002700216700001600243856010700259 1999 eng d00aThe null distribution of person-fit statistics for conventional and adaptive tests0 anull distribution of personfit statistics for conventional and a a327-3450 v231 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/null-distribution-person-fit-statistics-conventional-and-adaptive-tests-000367nas a2200109 4500008004100000245004800041210004800089260002000137100001400157700001300171856007300184 1999 eng d00aPrinciples for administering adaptive tests0 aPrinciples for administering adaptive tests aMontreal Canada1 aMiller, T1 aDavey, T uhttp://www.iacat.org/content/principles-administering-adaptive-tests00412nas a2200109 4500008004100000245006700041210006500108260001700173100001600190700001300206856008300219 1998 eng d00aA Bayesian approach to detection of item preknowledge in a CAT0 aBayesian approach to detection of item preknowledge in a CAT aSan Diego CA1 aMcLeod, L D1 aLewis, C uhttp://www.iacat.org/content/bayesian-approach-detection-item-preknowledge-cat00557nas a2200157 4500008004100000245007600041210006900117260001400186100001600200700001600216700002000232700001500252700001400267700001800281856010000299 1998 eng d00aComputerized adaptive rating scales that measure contextual performance0 aComputerized adaptive rating scales that measure contextual perf aDallas TX1 aBorman, W C1 aHanson, M A1 aMontowidlo, S J1 aDrasgow, F1 aFoster, L1 aKubisiak, U C uhttp://www.iacat.org/content/computerized-adaptive-rating-scales-measure-contextual-performance00484nas a2200109 4500008004100000245007800041210006900119260004600188100001700234700001900251856010400270 1998 eng d00aDoes adaptive testing violate local independence? (Research Report 98-33)0 aDoes adaptive testing violate local independence Research Report aPrinceton NJ: Educational Testing Service1 aMislevy, R J1 aChang, Hua-Hua uhttp://www.iacat.org/content/does-adaptive-testing-violate-local-independence-research-report-98-3300553nas a2200097 4500008004100000245008000041210006900121260014400190100001700334856010400351 1998 eng d00aInnovations in computer-based ability testing: Promise, problems and perils0 aInnovations in computerbased ability testing Promise problems an aIn Hakel, M.D. (Ed.) Beyond multiple choice: Alternatives to traditional testing for selection. 
Hillsdale, NJ: Lawrence Erlbaum Associates.1 aMcBride, J R uhttp://www.iacat.org/content/innovations-computer-based-ability-testing-promise-problems-and-perils00395nas a2200121 4500008004100000245005100041210005100092300001200143490000700155100001100162700002000173856008000193 1998 eng d00aMeasuring change conventionally and adaptively0 aMeasuring change conventionally and adaptively a882-8970 v581 aMay, K1 aNicewander, W A uhttp://www.iacat.org/content/measuring-change-conventionally-and-adaptively00459nas a2200109 4500008004100000245009100041210006900132260001500201100001600216700001300232856010400245 1998 eng d00aA new approach for the detection of item preknowledge in computerized adaptive testing0 anew approach for the detection of item preknowledge in computeri aUrbana, IL1 aMcLeod, L D1 aLewis, C uhttp://www.iacat.org/content/new-approach-detection-item-preknowledge-computerized-adaptive-testing00644nas a2200109 4500008004100000245011100041210006900152260014400221100002700365700001600392856012600408 1998 eng d00aPerson fit based on statistical process control in an adaptive testing environment (Research Report 98-13)0 aPerson fit based on statistical process control in an adaptive t aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aKrimpen-Stoop, E M L A1 aMeijer, R R uhttp://www.iacat.org/content/person-fit-based-statistical-process-control-adaptive-testing-environment-research-report-9800654nas a2200109 4500008004100000245012200041210006900163260014400232100001600376700002700392856012500419 1998 eng d00aSimulating the null distribution of person-fit statistics for conventional and adaptive tests (Research Report 98-02)0 aSimulating the null distribution of personfit statistics for con aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aMeijer, R R1 aKrimpen-Stoop, E M L A uhttp://www.iacat.org/content/simulating-null-distribution-person-fit-statistics-conventional-and-adaptive-tests-research00650nas a2200121 4500008004100000245009700041210006900138260014500207100001600352700001600368700002700384856011700411 1998 eng d00aStatistical tests for person misfit in computerized adaptive testing (Research Report 98-01)0 aStatistical tests for person misfit in computerized adaptive tes aEnschede, The Netherlands : University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aGlas, C A W1 aMeijer, R R1 aKrimpen-Stoop, E M L A uhttp://www.iacat.org/content/statistical-tests-person-misfit-computerized-adaptive-testing-research-report-98-0100596nas a2200145 4500008004100000020001000041245007300051210006900124260010000193300000700293100001600300700001600316700002300332856009500355 1998 eng d a98-0100aStatistical tests for person misfit in computerized adaptive testing0 aStatistical tests for person misfit in computerized adaptive tes aEnschede, The NetherlandsbFaculty of Educational Science and Technology, Univeersity of Twente a281 aGlas, C A W1 aMeijer, R R1 aKrimpen-Stoop, E M uhttp://www.iacat.org/content/statistical-tests-person-misfit-computerized-adaptive-testing00523nas a2200121 4500008004100000245012000041210006900161300001200230490000600242100001600248700001700264856012000281 1998 eng d00aSwedish Enlistment Battery: Construct validity and latent variable estimation of cognitive abilities by the CAT-SEB0 aSwedish Enlistment Battery Construct validity 
and latent variabl a107-1140 v61 aMardberg, B1 aCarlstedt, B uhttp://www.iacat.org/content/swedish-enlistment-battery-construct-validity-and-latent-variable-estimation-cognitive00584nas a2200121 4500008004100000245014200041210006900183260004700252100001700299700001400316700001400330856011800344 1998 eng d00aThree response types for broadening the conception of mathematical problem solving in computerized-adaptive tests (Research Report 98-45)0 aThree response types for broadening the conception of mathematic aPrinceton NJ : Educational Testing Service1 aBennett, R E1 aMorley, M1 aQuardt, D uhttp://www.iacat.org/content/three-response-types-broadening-conception-mathematical-problem-solving-computerized00414nas a2200109 4500008004100000245003600041210003500077260009900112100001700211700001600228856006000244 1997 eng d00aCAST 5 for Windows users' guide0 aCAST 5 for Windows users guide aContract No. "MDA903-93-D-0032, DO 0054. Alexandria, VA: Human Resources Research Organization1 aMcBride, J R1 aCooper, R R uhttp://www.iacat.org/content/cast-5-windows-users-guide00535nas a2200121 4500008004100000245004000041210003900081260017700120100001400297700001600311700001700327856006900344 1997 eng d00aCAT-ASVAB cost and benefit analyses0 aCATASVAB cost and benefit analyses aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.), Computer adaptive testing: From inquiry to operation (pp. 227-236). Washington, DC: American Psychological Association.1 aWise, L L1 aCurran, L T1 aMcBride, J R uhttp://www.iacat.org/content/cat-asvab-cost-and-benefit-analyses00500nas a2200097 4500008004100000245004600041210004500087260017900132100001600311856007500327 1997 eng d00aCAT-ASVAB operational test and evaluation0 aCATASVAB operational test and evaluation aW. A. Sands, B. K. Waters, and . R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation (pp. 199-205). Washington DC: American Psychological Association.1 aMoreno, K E uhttp://www.iacat.org/content/cat-asvab-operational-test-and-evaluation01686nam a2200145 4500008004100000245006100041210006000102260006200162520115300224653003401377100001501411700001601426700001701442856008101459 1997 eng d00aComputerized adaptive testing: From inquiry to operation0 aComputerized adaptive testing From inquiry to operation aWashington, D.C., USAbAmerican Psychological Association3 a(from the cover) This book traces the development of computerized adaptive testing (CAT) from its origins in the 1960s to its integration with the Armed Services Vocational Aptitude Battery (ASVAB) in the 1990s. A paper-and-pencil version of the battery (P&P-ASVAB) has been used by the Defense Department since the 1970s to measure the abilities of applicants for military service. The test scores are used both for initial qualification and for classification into entry-level training opportunities. /// This volume provides the developmental history of the CAT-ASVAB through its various stages in the Joint-Service arena. Although the majority of the book concerns the myriad technical issues that were identified and resolved, information is provided on various political and funding support challenges that were successfully overcome in developing, testing, and implementing the battery into one of the nation's largest testing programs. The book provides useful information to professionals in the testing community and everyone interested in personnel assessment and evaluation. 
(PsycINFO Database Record (c) 2004 APA, all rights reserved).10acomputerized adaptive testing1 aSands, W A1 aWaters, B K1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing-inquiry-operation02024nas a2200289 4500008004100000245012700041210006900168300001300237490000800250520105800258653003401316653003301350653002301383653001501406653001001421653002601431653001001457653001501467653001801482653003101500100001501531700001601546700001401562700001701576700001601593856012501609 1997 eng d00aA computerized adaptive testing system for speech discrimination measurement: The Speech Sound Pattern Discrimination Test0 acomputerized adaptive testing system for speech discrimination m a2289-2980 v1013 aA computerized, adaptive test-delivery system for the measurement of speech discrimination, the Speech Sound Pattern Discrimination Test, is described and evaluated. Using a modified discrimination task, the testing system draws on a pool of 130 items spanning a broad range of difficulty to estimate an examinee's location along an underlying continuum of speech processing ability, yet does not require the examinee to possess a high level of English language proficiency. The system is driven by a mathematical measurement model which selects only test items which are appropriate in difficulty level for a given examinee, thereby individualizing the testing experience. Test items were administered to a sample of young deaf adults, and the adaptive testing system evaluated in terms of respondents' sensory and perceptual capabilities, acoustic and phonetic dimensions of speech, and theories of speech perception. Data obtained in this study support the validity, reliability, and efficiency of this test as a measure of speech processing ability.10a*Diagnosis, Computer-Assisted10a*Speech Discrimination Tests10a*Speech Perception10aAdolescent10aAdult10aAudiometry, Pure-Tone10aHuman10aMiddle Age10aPsychometrics10aReproducibility of Results1 aBochner, J1 aGarrison, W1 aPalmer, L1 aMacKenzie, D1 aBraveman, A uhttp://www.iacat.org/content/computerized-adaptive-testing-system-speech-discrimination-measurement-speech-sound-pattern00468nas a2200133 4500008004100000245006100041210006100102260002400163100001700187700001500204700001300219700001400232856008800246 1997 eng d00aComputerized adaptive testing through the World Wide Web0 aComputerized adaptive testing through the World Wide Web a(ERIC No. ED414536)1 aShermis, M D1 aMzumara, H1 aBrown, M1 aLillig, C uhttp://www.iacat.org/content/computerized-adaptive-testing-through-world-wide-web-000500nas a2200121 4500008004100000245008900041210006900130260001500199100001700214700001500231700001500246856011700261 1997 eng d00aControlling test and computer anxiety: Test performance under CAT and SAT conditions0 aControlling test and computer anxiety Test performance under CAT aChicago IL1 aShermis, M D1 aMzumara, H1 aBublitz, S uhttp://www.iacat.org/content/controlling-test-and-computer-anxiety-test-performance-under-cat-and-sat-conditions00493nas a2200109 4500008004100000245003400041210003400075260017900109100001600288700001600304856006300320 1997 eng d00aCurrent and future challenges0 aCurrent and future challenges aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.). Computerized adaptive testing: From inquiry to operation (pp 257-269). 
Washington DC: American Psychological Association.1 aSegall, D O1 aMoreno, K E uhttp://www.iacat.org/content/current-and-future-challenges00475nas a2200133 4500008004100000245007300041210006900114300001000183490000700193100001300200700001100213700001800224856009900242 1997 eng d00aDeveloping and scoring an innovative computerized writing assessment0 aDeveloping and scoring an innovative computerized writing assess a21-410 v341 aDavey, T1 aGodwin1 aMittelholz, D uhttp://www.iacat.org/content/developing-and-scoring-innovative-computerized-writing-assessment00578nas a2200133 4500008003900000245013100039210006900170100001700239700001500256700001700271700001400288700001700302856012500319 1997 d00aEvaluating an automatically scorable, open-ended response type for measuring mathematical reasoning in computer-adaptive tests0 aEvaluating an automatically scorable openended response type for1 aBennett, R E1 aSteffen, M1 aSingley, M K1 aMorley, M1 aJacquemin, D uhttp://www.iacat.org/content/evaluating-automatically-scorable-open-ended-response-type-measuring-mathematical-reasoning00922nas a2200145 4500008004100000245003900041210003800080260006100118300001200179520047000191100001600661700001700677700001700694856006500711 1997 eng d00aItem exposure control in CAT-ASVAB0 aItem exposure control in CATASVAB aWashington D.C., USAbAmerican Psychological Association a141-1443 aDescribes the method used to control item exposure in computerized adaptive testing-Armed Services Vocational Aptitude Battery (CAT-ASVAB). The method described was developed specifically to ensure that CAT-ASVAB items were exposed no more often than the items in the printed ASVAB's alternate forms, ensuring that CAT-ASVAB is no more vulnerable than printed ASVAB forms to compromise from item exposure. (PsycINFO Database Record (c) 2010 APA, all rights reserved)1 aHetter, R D1 aSympson, J B1 aMcBride, J R uhttp://www.iacat.org/content/item-exposure-control-cat-asvab00543nas a2200121 4500008004100000245004100041210004100082260018000123100001600303700001600319700001600335856007000351 1997 eng d00aItem pool development and evaluation0 aItem pool development and evaluation aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation (pp. 117-130). Washington DC: American Psychological Association.1 aSegall, D O1 aMoreno, K E1 aHetter, D H uhttp://www.iacat.org/content/item-pool-development-and-evaluation00630nas a2200109 4500008004100000245011500041210006900156260013700225100001700362700001600379856012500395 1997 eng d00aModification of the Computerized Adaptive Screening Test (CAST) for use by recruiters in all military services0 aModification of the Computerized Adaptive Screening Test CAST fo aFinal Technical Report FR-WATSD-97-24, Contract No. MDA903-93-D-0032, DO 0054. Alexandria VA: Human Resources Research Organization.1 aMcBride, J R1 aCooper, R R uhttp://www.iacat.org/content/modification-computerized-adaptive-screening-test-cast-use-recruiters-all-military-services00514nas a2200109 4500008004100000245004600041210004600087260016400133100001600297700001600313856007500329 1997 eng d00aPolicy and program management perspective0 aPolicy and program management perspective aW.A. Sands, B.K. Waters, and J.R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation. 
Washington, DC: American Psychological Association.1 aMartin, C J1 aHoshaw, C R uhttp://www.iacat.org/content/policy-and-program-management-perspective00664nas a2200121 4500008004100000245009200041210006900133260017800202100001700380700001600397700001600413856011300429 1997 eng d00aPreliminary psychometric research for CAT-ASVAB: Selecting an adaptive testing strategy0 aPreliminary psychometric research for CATASVAB Selecting an adap aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation (pp. 83-95). Washington DC: American Psychological Association.1 aMcBride, J R1 aWetzel, C D1 aHetter, R D uhttp://www.iacat.org/content/preliminary-psychometric-research-cat-asvab-selecting-adaptive-testing-strategy00613nas a2200133 4500008004100000245005600041210005500097260018200152100001600334700001600350700001600366700001600382856008100398 1997 eng d00aPsychometric procedures for administering CAT-ASVAB0 aPsychometric procedures for administering CATASVAB aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation (pp. 131-140). Washington D.C.: American Psychological Association.1 aSegall, D O1 aMoreno, K E1 aBloxom, B M1 aHetter, R D uhttp://www.iacat.org/content/psychometric-procedures-administering-cat-asvab00544nas a2200109 4500008004100000245005200041210005100093260018000144100001600324700001600340856007800356 1997 eng d00aReliability and construct validity of CAT-ASVAB0 aReliability and construct validity of CATASVAB aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.). Computerized adaptive testing: From inquiry to operation (pp. 169-179). Washington DC: American Psychological Association.1 aMoreno, K E1 aSegall, O D uhttp://www.iacat.org/content/reliability-and-construct-validity-cat-asvab01814nas a2200169 4500008004100000245005300041210005300094250001000147260006000157300001000217520125400227653003401481100001701515700001601532700001701548856007901565 1997 eng d00aResearch antecedents of applied adaptive testing0 aResearch antecedents of applied adaptive testing axviii aWashington D.C. USAbAmerican Psychological Association a47-573 a(from the chapter) This chapter sets the stage for the entire computerized adaptive testing Armed Services Vocational Aptitude Battery (CAT-ASVAB) development program by describing the state of the art immediately preceding its inception. By the mid-l970s, a great deal of research had been conducted that provided the technical underpinnings needed to develop adaptive tests, but little research had been done to corroborate empirically the promising results of theoretical analyses and computer simulation studies. In this chapter, the author summarizes much of the important theoretical and simulation research prior to 1977. In doing so, he describes a variety of approaches to adaptive testing, and shows that while many methods for adaptive testing had been proposed, few practical attempts had been made to implement it. Furthermore, the few instances of adaptive testing were based primarily on traditional test theory, and were developed in laboratory settings for purposes of basic research. The most promising approaches, those based on item response theory and evaluated analytically or by means of computer simulations, remained to be proven in the crucible of live testing. 
(PsycINFO Database Record (c) 2004 APA, all rights reserved).10acomputerized adaptive testing1 aMcBride, J R1 aWaters, B K1 aMcBride, J R uhttp://www.iacat.org/content/research-antecedents-applied-adaptive-testing00442nas a2200097 4500008004100000245002600041210002600067260017900093100001700272856005500289 1997 eng d00aTechnical perspective0 aTechnical perspective aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation (pp. 29-44). Washington, DC: American Psychological Association.1 aMcBride, J R uhttp://www.iacat.org/content/technical-perspective00617nas a2200145 4500008004100000245005200041210005100093260016700144100001600311700001600327700002100343700001600364700001700380856007400397 1997 eng d00aValidation of the experimental CAT-ASVAB system0 aValidation of the experimental CATASVAB system aW. A. Sands, B. K. Waters, and J. R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation. Washington, DC: American Psychological Association.1 aSegall, D O1 aMoreno, K E1 aKieckhaefer, W F1 aVicino, F L1 aMcBride, J R uhttp://www.iacat.org/content/validation-experimental-cat-asvab-system00450nas a2200085 4500008004100000245011100041210006900152100001700221856012600238 1996 eng d00aCurrent research in computer-based testing for personnel selection and classification in the United States0 aCurrent research in computerbased testing for personnel selectio1 aMcBride, J R uhttp://www.iacat.org/content/current-research-computer-based-testing-personnel-selection-and-classification-united-states01528nas a2200229 4500008004100000020004100041245010500082210006900187250001500256260001200271300000900283490000700292520069800299653002500997653002501022653004301047653003701090653001101127100001601138700001801154856012601172 1996 eng d a0363-3624 (Print)0363-3624 (Linking)00aMethodologic trends in the healthcare professions: computer adaptive and computer simulation testing0 aMethodologic trends in the healthcare professions computer adapt a1996/07/01 cJul-Aug a13-40 v213 aAssessing knowledge and performance on computer is rapidly becoming a common phenomenon in testing and measurement. Computer adaptive testing presents an individualized test format in accordance with the examinee's ability level. The efficiency of the testing process enables a more precise estimate of performance, often with fewer items than traditional paper-and-pencil testing methodologies. Computer simulation testing involves performance-based, or authentic, assessment of the examinee's clinical decision-making abilities. 
The authors discuss the trends in assessing performance through computerized means and the application of these methodologies to community-based nursing practice.10a*Clinical Competence10a*Computer Simulation10aComputer-Assisted Instruction/*methods10aEducational Measurement/*methods10aHumans1 aForker, J E1 aMcDonald, M E uhttp://www.iacat.org/content/methodologic-trends-healthcare-professions-computer-adaptive-and-computer-simulation-testing00550nas a2200109 4500008004100000245013000041210006900171260004600240100001700286700001300303856012400316 1996 eng d00aMissing responses and IRT ability estimation: Omits, choice, time limits, and adaptive testing (Research Report RR-96-30-ONR)0 aMissing responses and IRT ability estimation Omits choice time l aPrinceton NJ: Educational Testing Service1 aMislevy, R J1 aWu, P -K uhttp://www.iacat.org/content/missing-responses-and-irt-ability-estimation-omits-choice-time-limits-and-adaptive-testing00399nas a2200109 4500008004100000245006100041210006000102260001300162100001600175700001300191856008500204 1996 eng d00aPerson-fit indices and their role in the CAT environment0 aPersonfit indices and their role in the CAT environment aNew York1 aMcLeod, L D1 aLewis, C uhttp://www.iacat.org/content/person-fit-indices-and-their-role-cat-environment-000432nas a2200121 4500008004100000245006600041210006500107300001200172490000600184100001000190700001800200856009200218 1996 eng d00aPractical issues in large-scale computerized adaptive testing0 aPractical issues in largescale computerized adaptive testing a287-3040 v91 aMills1 aStocking, M L uhttp://www.iacat.org/content/practical-issues-large-scale-computerized-adaptive-testing00570nas a2200121 4500008004100000245009500041210006900136260008100205100001500286700001200301700001700313856011800330 1996 eng d00aPreliminary cost-effectiveness analysis of alternative ASVAB testing concepts at MET sites0 aPreliminary costeffectiveness analysis of alternative ASVAB test aInterim report to Defense Manpower Data Center. 
Fairfax, VA: Lewin-VHI, Inc.1 aHogan, P F1 aDall, T1 aMcBride, J R uhttp://www.iacat.org/content/preliminary-cost-effectiveness-analysis-alternative-asvab-testing-concepts-met-sites00542nas a2200121 4500008004100000245011400041210006900155260002100224100001500245700001800260700001800278856012400296 1995 eng d00aA comparison of classification agreement between adaptive and full-length test under the 1-PL and 2-PL models0 acomparison of classification agreement between adaptive and full aSan Francisco CA1 aLewis, M J1 aSubhiyah, R G1 aMorrison, C A uhttp://www.iacat.org/content/comparison-classification-agreement-between-adaptive-and-full-length-test-under-1-pl-and-200535nas a2200109 4500008004100000245013700041210006900178260001800247100001800265700001900283856012300302 1995 eng d00aComputer adaptive testing in a medical licensure setting: A comparison of outcomes under the one- and two- parameter logistic models0 aComputer adaptive testing in a medical licensure setting A compa aSan Francisco1 aMorrison, C A1 aNungester, R J uhttp://www.iacat.org/content/computer-adaptive-testing-medical-licensure-setting-comparison-outcomes-under-one-and-two00414nas a2200097 4500008004100000245008200041210006900123260000700192100001700199856010000216 1995 eng d00aEquating the computerized adaptive edition of the Differential Aptitude Tests0 aEquating the computerized adaptive edition of the Differential A aCA1 aMcBride, J R uhttp://www.iacat.org/content/equating-computerized-adaptive-edition-differential-aptitude-tests00643nas a2200121 4500008004100000245013600041210006900177260010500246100001500351700001700366700001600383856012200399 1995 eng d00aAn evaluation of alternative concepts for administering the Armed Services Vocational Aptitude Battery to applicants for enlistment0 aevaluation of alternative concepts for administering the Armed S aDMDC Technical Report 95-013. 
Monterey, CA: Personnel Testing Division, Defense Manpower Data Center1 aHogan, P F1 aMcBride, J R1 aCurran, L T uhttp://www.iacat.org/content/evaluation-alternative-concepts-administering-armed-services-vocational-aptitude-battery00477nas a2200109 4500008004100000245009500041210006900136260002100205100001100226700001600237856011400253 1995 eng d00aThe influence of examinee test-taking behavior motivation in computerized adaptive testing0 ainfluence of examinee testtaking behavior motivation in computer aSan Francisco CA1 aKim, J1 aMcLean, J E uhttp://www.iacat.org/content/influence-examinee-test-taking-behavior-motivation-computerized-adaptive-testing00648nas a2200133 4500008004100000245017200041210006900213260004600282100001900328700001700347700001500364700001300379856012200392 1995 eng d00aThe introduction and comparability of the computer-adaptive GRE General Test (GRE Board Professional Report 88-08ap; Educational Testing Service Research Report 95-20)0 aintroduction and comparability of the computeradaptive GRE Gener aPrinceton NJ: Educational Testing Service1 aSchaeffer, G A1 aSteffen, M L1 aMills, C N1 aDurso, R uhttp://www.iacat.org/content/introduction-and-comparability-computer-adaptive-gre-general-test-gre-board-professional00506nas a2200121 4500008004100000245009100041210006900132260002100201100001600222700001300238700001700251856011600268 1995 eng d00aItem exposure rates for unconstrained and content-balanced computerized adaptive tests0 aItem exposure rates for unconstrained and contentbalanced comput aSan Francisco CA1 aMorrison, C1 aSubhiyah1 aNungester, R uhttp://www.iacat.org/content/item-exposure-rates-unconstrained-and-content-balanced-computerized-adaptive-tests00529nas a2200109 4500008004100000245010200041210006900143260004800212100001500260700001800275856012600293 1995 eng d00aPractical issues in large-scale high-stakes computerized adaptive testing (Research Report 95-23)0 aPractical issues in largescale highstakes computerized adaptive aPrinceton, NJ: Educational Testing Service.1 aMills, C N1 aStocking, M L uhttp://www.iacat.org/content/practical-issues-large-scale-high-stakes-computerized-adaptive-testing-research-report-95-2300550nas a2200133 4500008004100000245012000041210006900161300001400230490000700244100001600251700001500267700001800282856011600300 1995 eng d00aTheoretical results and item selection from multidimensional item bank in the Mokken IRT model for polytomous items0 aTheoretical results and item selection from multidimensional ite a337–3520 v191 aHemker, B T1 aSijtsma, K1 aMolenaar, I W uhttp://www.iacat.org/content/theoretical-results-and-item-selection-multidimensional-item-bank-mokken-irt-model00355nas a2200097 4500008004100000245005700041210005600098260000700154100001700161856007900178 1994 eng d00aEarly psychometric research in the CAT-ASVAB Project0 aEarly psychometric research in the CATASVAB Project aCA1 aMcBride, J R uhttp://www.iacat.org/content/early-psychometric-research-cat-asvab-project00501nas a2200121 4500008004100000245009700041210006900138300001400207490000700221653003400228100001200262856010500274 1993 eng d00aAn application of Computerized Adaptive Testing to the Test of English as a Foreign Language0 aapplication of Computerized Adaptive Testing to the Test of Engl a4257-42580 v5310acomputerized adaptive testing1 aMoon, O uhttp://www.iacat.org/content/application-computerized-adaptive-testing-test-english-foreign-language00645nas a2200145 
4500008004100000245014000041210006900181260004700250100001900297700001500316700001500331700001800346700001500364856012000379 1993 eng d00aField test of a computer-based GRE general test (GRE Board Technical Report 88-8; Educational Testing Service Research Rep No RR 93-07)0 aField test of a computerbased GRE general test GRE Board Technic aPrinceton NJ: Educational Testing Service.1 aSchaeffer, G A1 aReese, C M1 aSteffen, M1 aMcKinley, R L1 aMills, C N uhttp://www.iacat.org/content/field-test-computer-based-gre-general-test-gre-board-technical-report-88-8-educational00418nas a2200121 4500008004100000245006300041210005900104300001000163490000700173100001800180700001600198856008200214 1992 eng d00aThe application of latent class models in adaptive testing0 aapplication of latent class models in adaptive testing a71-880 v571 aMacready, G B1 aDayton, C M uhttp://www.iacat.org/content/application-latent-class-models-adaptive-testing00306nas a2200121 4500008004100000245002400041210002300065300001000088490000600098100001100104700001600115856005300131 1992 eng d00aCAT-ASVAB precision0 aCATASVAB precision a22-260 v11 aMoreno1 aSegall, D O uhttp://www.iacat.org/content/cat-asvab-precision00516nas a2200109 4500008004100000245005600041210005200097260014600149100001700295700001600312856007800328 1992 eng d00aThe development of alternative operational concepts0 adevelopment of alternative operational concepts aProceedings of the 34th Annual Conference of the Military Testing Association. San Diego, CA: Navy Personnel Research and Development Center.1 aMcBride, J R1 aCurran, L T uhttp://www.iacat.org/content/development-alternative-operational-concepts00508nas a2200109 4500008004100000245005100041210005100092260014600143100001700289700001500306856007700321 1992 eng d00aEvaluation of alternative operational concepts0 aEvaluation of alternative operational concepts aProceedings of the 34th Annual Conference of the Military Testing Association. San Diego, CA: Navy Personnel Research and Development Center.1 aMcBride, J R1 aHogan, P F uhttp://www.iacat.org/content/evaluation-alternative-operational-concepts00472nas a2200097 4500008004100000245011200041210006900153260002100222100001600243856011500259 1992 eng d00aPractical considerations for conducting studies of differential item functioning (DIF) in a CAT environment0 aPractical considerations for conducting studies of differential aSan Francisco CA1 aMiller, T R uhttp://www.iacat.org/content/practical-considerations-conducting-studies-differential-item-functioning-dif-cat00738nas a2200169 4500008004100000020000900041245012400050210006900174260008700243653003400330653001500364653001800379100001600397700001900413700001700432856011900449 1991 eng d aR-1100aPatterns of alcohol and drug use among federal offenders as assessed by the Computerized Lifestyle Screening Instrument0 aPatterns of alcohol and drug use among federal offenders as asse aOttawa, ON. CanadabResearch and Statistics Branch, Correctional Service of Canada10acomputerized adaptive testing10adrug abuse10asubstance use1 aRobinson, D1 aPorporino, F J1 aMillson, W A uhttp://www.iacat.org/content/patterns-alcohol-and-drug-use-among-federal-offenders-assessed-computerized-lifestyle00505nas a2200097 4500008004100000245008400041210006900125260008800194100001700282856010800299 1991 eng d00aWhat lies ahead? 
Computer technology and its implications for personnel testing0 aWhat lies ahead Computer technology and its implications for per aNATO Workshop on Computer-based Assessment of Military Personnel, Brussels, Belgium1 aMcBride, J R uhttp://www.iacat.org/content/what-lies-ahead-computer-technology-and-its-implications-personnel-testing00491nas a2200109 4500008004100000245009500041210006900136260002700205100001600232700001700248856011600265 1990 eng d00aA comparison of Rasch and three-parameter logistic models in computerized adaptive testing0 acomparison of Rasch and threeparameter logistic models in comput aUnpublished manuscript1 aParker, S B1 aMcBride, J R uhttp://www.iacat.org/content/comparison-rasch-and-three-parameter-logistic-models-computerized-adaptive-testing00515nam a2200169 4500008003900000245005100039210004700090260002600137100001100163700001600174700001600190700001400206700001700220700001700237700001500254856007600269 1990 d00aComputerized adaptive testing: A primer (Eds.)0 aComputerized adaptive testing A primer Eds aHillsdale NJ: Erlbaum1 aWainer1 aDorans, N J1 aFlaugher, R1 aGreen, BF1 aMislevy, R J1 aSteinberg, L1 aThissen, D uhttp://www.iacat.org/content/computerized-adaptive-testing-primer-eds-200483nas a2200157 4500008004100000245002200041210002200063260009900085100001100184700001600195700001400211700001700225700001700242700001500259856005100274 1990 eng d00aFuture challenges0 aFuture challenges aH. Wainer (Ed.), Computerized adaptive testing: A primer (pp. 233-272). Hillsdale NJ: Erlbaum.1 aWainer1 aDorans, N J1 aGreen, BF1 aMislevy, R J1 aSteinberg, L1 aThissen, D uhttp://www.iacat.org/content/future-challenges00515nas a2200109 4500008004100000245007100041210006900112260009800181100001100279700001700290856009800307 1990 eng d00aItem response theory, item calibration, and proficiency estimation0 aItem response theory item calibration and proficiency estimation aH. Wainer (Ed.), Computerized adaptive testing: A primer (pp. 65-102). Hillsdale NJ: Erlbaum.1 aWainer1 aMislevy, R J uhttp://www.iacat.org/content/item-response-theory-item-calibration-and-proficiency-estimation00378nas a2200109 4500008004100000245002300041210002300064260009900087100001100186700001700197856005400214 1990 eng d00aTesting algorithms0 aTesting algorithms aH. Wainer (Ed.), Computerized adaptive testing: A primer (pp. 103-135). Hillsdale NJ: Erlbaum.1 aWainer1 aMislevy, R J uhttp://www.iacat.org/content/testing-algorithms-000380nas a2200109 4500008004100000245002300041210002300064260009900087100001500186700001700201856005200218 1990 eng d00aTesting algorithms0 aTesting algorithms aH. Wainer (Ed.), Computerized adaptive testing: A primer (pp. 103-135). 
Hillsdale NJ: Erlbaum.1 aThissen, D1 aMislevy, R J uhttp://www.iacat.org/content/testing-algorithms00547nas a2200145 4500008004500000245009400045210006900139300001200208490000700220100001500227700001500242700001700257700001400274856011300288 1989 Engldsh 00aAdaptive and Conventional Versions of the DAT: The First Complete Test Battery Comparison0 aAdaptive and Conventional Versions of the DAT The First Complete a363-3710 v131 aHenly, S J1 aKlebe, K J1 aMcBride, J R1 aCudeck, R uhttp://www.iacat.org/content/adaptive-and-conventional-versions-dat-first-complete-test-battery-comparison-000541nas a2200145 4500008004100000245009400041210006900135300001200204490000700216100001500223700001500238700001700253700001400270856011100284 1989 eng d00aAdaptive and conventional versions of the DAT: The first complete test battery comparison0 aAdaptive and conventional versions of the DAT The first complete a363-3710 v131 aHenly, S J1 aKlebe, K J1 aMcBride, J R1 aCudeck, R uhttp://www.iacat.org/content/adaptive-and-conventional-versions-dat-first-complete-test-battery-comparison00386nas a2200097 4500008004100000245006100041210006100102260002100163100001700184856008700201 1989 eng d00aCommercial applications of computerized adaptive testing0 aCommercial applications of computerized adaptive testing aSan Antonio, TX1 aMcBride, J R uhttp://www.iacat.org/content/commercial-applications-computerized-adaptive-testing01130nas a2200157 4500008004100000245010500041210006900146300001200215490000600227520055900233100001500792700001600807700001500823700001000838856012400848 1989 eng d00aComparisons of paper-administered, computer-administered and computerized adaptive achievement tests0 aComparisons of paperadministered computeradministered and comput a311-3260 v53 aThis research study was designed to compare student achievement scores from three different testing methods: paper-administered testing, computer-administered testing, and computerized adaptive testing. The three testing formats were developed from the California Assessment Program (CAP) item banks for grades three and six. The paper-administered and the computer-administered tests were identical in item content, format, and sequence. The computerized adaptive test was a tailored or adaptive sequence of the items in the computer-administered test. 
1 aOlson, J B1 aMaynes, D D1 aSlawson, D1 aHo, K uhttp://www.iacat.org/content/comparisons-paper-administered-computer-administered-and-computerized-adaptive-achievement00365nas a2200097 4500008004100000245005500041210005300096260001900149100001700168856008200185 1989 eng d00aA computerized adaptive mathematics screening test0 acomputerized adaptive mathematics screening test aBurlingame, CA1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-mathematics-screening-test00482nas a2200133 4500008004100000245007600041210006900117300001200186490000700198100001500205700001700220700001600237856009500253 1989 eng d00aTrace lines for testlets: A use of multiple-categorical-response models0 aTrace lines for testlets A use of multiplecategoricalresponse mo a247-2600 v261 aThissen, D1 aSteinberg, L1 aMooney, J A uhttp://www.iacat.org/content/trace-lines-testlets-use-multiple-categorical-response-models00402nas a2200097 4500008004100000245007100041210006900112260001500181100001700196856009100213 1988 eng d00aA computerized adaptive version of the Differential Aptitude Tests0 acomputerized adaptive version of the Differential Aptitude Tests aAtlanta GA1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-version-differential-aptitude-tests00557nas a2200109 4500008004100000245013000041210006900171260005400240100001400294700001600308856012300324 1988 eng d00aThe equivalence of scores from automated and conventional educational and psychological tests (College Board Report No. 88-8)0 aequivalence of scores from automated and conventional educationa aNew York: The College Entrance Examination Board.1 aMazzeo, J1 aHarvey, A L uhttp://www.iacat.org/content/equivalence-scores-automated-and-conventional-educational-and-psychological-tests-college00456nas a2200133 4500008004100000245006600041210006600107300001200173490000700185100000900192700001400201700002200215856008500237 1988 eng d00aItem pool maintenance in the presence of item parameter drift0 aItem pool maintenance in the presence of item parameter drift a275-2850 v251 aBock1 aMuraki, E1 aPfeiffenberger, W uhttp://www.iacat.org/content/item-pool-maintenance-presence-item-parameter-drift00622nas a2200145 4500008004100000245011100041210006900152260005400221100001400275700001700289700001400306700001600320700001700336856012300353 1988 eng d00aRefinement of the Computerized Adaptive Screening Test (CAST) (Final Report, Contract No MDA203 06-C-0373)0 aRefinement of the Computerized Adaptive Screening Test CAST Fina aWashington, DC: American Institutes for Research.1 aWise, L L1 aMcHenry, J J1 aChia, W J1 aSzenas, P L1 aMcBride, J R uhttp://www.iacat.org/content/refinement-computerized-adaptive-screening-test-cast-final-report-contract-no-mda203-06-c00487nas a2200097 4500008004100000245011900041210006900160260002200229100001700251856012100268 1987 eng d00aComputerized adaptive testing made practical: The Computerized Adaptive Edition of the Differential Aptitude Tests0 aComputerized adaptive testing made practical The Computerized Ad aSan Francisco, CA1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing-made-practical-computerized-adaptive-edition-differential00473nas a2200121 4500008004100000245008200041210006900123260001300192100001700205700001500222700001200237856010200249 1987 eng d00aEquating the computerized adaptive edition of the Differential Aptitude Tests0 aEquating the computerized adaptive edition of the Differential A aNew York1 aMcBride, J R1 aCorpe, V A1 aWing, H 
uhttp://www.iacat.org/content/equating-computerized-adaptive-edition-differential-aptitude-tests-000566nas a2200133 4500008004100000245012100041210006900162260002100231100001500252700001600267700001500283700001000298856012400308 1986 eng d00aComparison and equating of paper-administered, computer-administered, and computerized adaptive tests of achievement0 aComparison and equating of paperadministered computeradministere aSan Francisco CA1 aOlsen, J B1 aMaynes, D D1 aSlawson, D1 aHo, K uhttp://www.iacat.org/content/comparison-and-equating-paper-administered-computer-administered-and-computerized-adaptive00404nas a2200109 4500008004100000245005900041210005800100260002100158100001700179700001300196856008500209 1986 eng d00aComputerized adaptive achievement testing: A prototype0 aComputerized adaptive achievement testing A prototype aSan Francisco CA1 aMcBride, J R1 aMoe, K C uhttp://www.iacat.org/content/computerized-adaptive-achievement-testing-prototype00405nas a2200097 4500008004100000245007100041210006900112260001800181100001700199856009100216 1986 eng d00aA computerized adaptive edition of the Differential Aptitude Tests0 acomputerized adaptive edition of the Differential Aptitude Tests aWashington DC1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-edition-differential-aptitude-tests00405nas a2200097 4500008004100000245007100041210006900112260001600181100001700197856009300214 1986 eng d00aA computerized adaptive edition of the Differential Aptitude Tests0 acomputerized adaptive edition of the Differential Aptitude Tests aBoulder, CO1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-edition-differential-aptitude-tests-000623nas a2200133 4500008004100000245012600041210006900167260006700236100001900303700001400322700001600336700001500352856012200367 1985 eng d00aArmed Services Vocational Aptitude Battery: Development of an adaptive item pool (AFHLR-TR-85-19; Technical Rep No 85-19)0 aArmed Services Vocational Aptitude Battery Development of an ada aBrooks Air Force Base TX: Air Force Human Resources Laboratory1 aPrestwood, J S1 aVale, C D1 aMassey, R H1 aWelsh, J R uhttp://www.iacat.org/content/armed-services-vocational-aptitude-battery-development-adaptive-item-pool-afhlr-tr-85-1900316nas a2200109 4500008004100000245003400041210003400075300001000109490000700119100001700126856006300143 1985 eng d00aComputerized adaptive testing0 aComputerized adaptive testing a25-280 v431 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing00379nas a2200097 4500008004100000245006200041210006100103260001600164100001700180856008400197 1985 eng d00aComputerized adaptive testing: An overview and an example0 aComputerized adaptive testing An overview and an example aBoulder, CO1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing-overview-and-example00484nas a2200109 4500008004100000245005900041210005900100260010100159100001600260700001700276856008100293 1985 eng d00aReducing the predictability of adaptive item sequences0 aReducing the predictability of adaptive item sequences aProceedings of the 27th Annual Conference of the Military Testing Association, San Diego, 43-48.1 aWetzel, C D1 aMcBride, J R uhttp://www.iacat.org/content/reducing-predictability-adaptive-item-sequences00535nas a2200109 4500008004100000245007200041210006900113260011600182300001200298100001800310856009700328 1985 eng d00aUnidimensional and multidimensional models for item response theory0 aUnidimensional and multidimensional models for item response the 
aMinneapolis, MN. USAbUniversity of Minnesota, Department of Psychology, Psychometrics Methods Programc06/1982 a127-1481 aMcDonald, R P uhttp://www.iacat.org/content/unidimensional-and-multidimensional-models-item-response-theory00383nas a2200097 4500008004100000245006400041210006300105100001700168700001600185856008400201 1985 eng d00aValidity of adaptive testing: A summary of research results0 aValidity of adaptive testing A summary of research results1 aSympson, J B1 aMoreno, K E uhttp://www.iacat.org/content/validity-adaptive-testing-summary-research-results00510nas a2200109 4500008004100000245011600041210006900157100001600226700001600242700002100258856012100279 1985 eng d00aA validity study of the computerized adaptive testing version of the Armed Services Vocational Aptitude Battery0 avalidity study of the computerized adaptive testing version of t1 aMoreno, K E1 aSegall, D O1 aKieckhaefer, W F uhttp://www.iacat.org/content/validity-study-computerized-adaptive-testing-version-armed-services-vocational-aptitude00406nas a2200121 4500008004500000245005400045210005400099300001200153490000600165100001400171700001700185856008200202 1984 Engldsh 00aBias and Information of Bayesian Adaptive Testing0 aBias and Information of Bayesian Adaptive Testing a273-2850 v81 aWeiss, DJ1 aMcBride, J R uhttp://www.iacat.org/content/bias-and-information-bayesian-adaptive-testing-000400nas a2200121 4500008004100000245005400041210005400095300001200149490000600161100001400167700001700181856008000198 1984 eng d00aBias and information of Bayesian adaptive testing0 aBias and information of Bayesian adaptive testing a273-2850 v81 aWeiss, DJ1 aMcBride, J R uhttp://www.iacat.org/content/bias-and-information-bayesian-adaptive-testing00352nas a2200121 4500008003900000245003600039210003600075300001200111490000700123100001800130700001700148856006500165 1984 d00aComputerized diagnostic testing0 aComputerized diagnostic testing a391-3970 v211 aMCArthur, D L1 aChoppin, B H uhttp://www.iacat.org/content/computerized-diagnostic-testing00427nas a2200097 4500008004100000245008500041210006900126260002000195100001700215856009700232 1984 eng d00aThe design of a computerized adaptive testing system for administering the ASVAB0 adesign of a computerized adaptive testing system for administeri aNew Orleans, LA1 aMcBride, J R uhttp://www.iacat.org/content/design-computerized-adaptive-testing-system-administering-asvab00529nas a2200133 4500008004100000245006100041210006100102260009000163100001700253700001400270700001700284700001400301856008000315 1984 eng d00aEvaluation of computerized adaptive testing of the ASVAB0 aEvaluation of computerized adaptive testing of the ASVAB aSan Diego, CA: Navy Personnel Research and Development Center, unpublished manuscript1 aHardwicke, S1 aVicino, F1 aMcBride, J R1 aNemeth, C uhttp://www.iacat.org/content/evaluation-computerized-adaptive-testing-asvab00318nas a2200121 4500008004100000245002700041210002700068300001200095490000600107100001500113700001500128856005300143 1984 eng d00aIssues in item banking0 aIssues in item banking a315-3300 v11 aMillman, J1 aArter, J A uhttp://www.iacat.org/content/issues-item-banking01570nas a2200169 4500008004100000245013900041210006900180300001200249490000600261520091500267653003401182100001601216700001601232700001701248700001401265856012101279 1984 eng d00aRelationship between corresponding Armed Services Vocational Aptitude Battery (ASVAB) and computerized adaptive testing (CAT) subtests0 aRelationship between corresponding Armed Services Vocational Apt a155-1630 
v83 aInvestigated the relationships between selected subtests from the Armed Services Vocational Aptitude Battery (ASVAB) and corresponding subtests administered as computerized adaptive tests (CATs), using 270 17-26 yr old Marine recruits as Ss. Ss were administered the ASVAB before enlisting and approximately 2 wks after entering active duty, and the CAT tests were administered to Ss approximately 24 hrs after arriving at the recruit depot. Results indicate that 3 adaptive subtests correlated as well with ASVAB as did the 2nd administration of the ASVAB, although CAT subtests contained only half the number of items. Factor analysis showed CAT subtests to load on the same factors as the corresponding ASVAB subtests, indicating that the same abilities were being measured. It is concluded that CAT can achieve the same measurement precision as a conventional test, with half the number of items. (16 ref) 10acomputerized adaptive testing1 aMoreno, K E1 aWetzel, C D1 aMcBride, J R1 aWeiss, DJ uhttp://www.iacat.org/content/relationship-between-corresponding-armed-services-vocational-aptitude-battery-asvab-and00603nas a2200145 4500008004500000245013900045210006900184300001200253490000600265100001600271700001600287700001700303700001400320856012300334 1984 Engldsh 00aRelationship Between Corresponding Armed Services Vocational Aptitude Battery (ASVAB) and Computerized Adaptive Testing (CAT) Subtests0 aRelationship Between Corresponding Armed Services Vocational Apt a155-1630 v81 aMoreno, K E1 aWetzel, C D1 aMcBride, J R1 aWeiss, DJ uhttp://www.iacat.org/content/relationship-between-corresponding-armed-services-vocational-aptitude-battery-asvab-and-100419nas a2200109 4500008004100000245007300041210006900114300001000183490000900193100001500202856009200217 1984 eng d00aUsing microcomputers to administer tests: An alternate point of view0 aUsing microcomputers to administer tests An alternate point of v a20-210 v3(2)1 aMillman, J uhttp://www.iacat.org/content/using-microcomputers-administer-tests-alternate-point-view00538nas a2200109 4500008004100000245007700041210006900118260010900187100001400296700001700310856010100327 1983 eng d00aBias and information of Bayesian adaptive testing (Research Report 83-2)0 aBias and information of Bayesian adaptive testing Research Repor aMinneapolis: University of Minnesota, Department of Psychology, Computerized Adaptive Testing Laboratory1 aWeiss, DJ1 aMcBride, J R uhttp://www.iacat.org/content/bias-and-information-bayesian-adaptive-testing-research-report-83-200526nam a2200097 4500008004100000245011300041210006900154260006300223100001700286856012500303 1983 eng d00aEffects of item parameter error and other factors on trait estimation in latent trait based adaptive testing0 aEffects of item parameter error and other factors on trait estim aUnpublished doctoral dissertation, University of Minnesota1 aMattson, J D uhttp://www.iacat.org/content/effects-item-parameter-error-and-other-factors-trait-estimation-latent-trait-based-adaptive00571nas a2200109 4500008004100000245013900041210006900180260005100249100001800300700001700318856012600335 1983 eng d00aAn evaluation of one- and three-parameter logistic tailored testing procedures for use with small item pools (Research Report ONR83-1)0 aevaluation of one and threeparameter logistic tailored testing p aIowa City IA: American College Testing Program1 aMcKinley, R L1 aReckase, M D uhttp://www.iacat.org/content/evaluation-one-and-three-parameter-logistic-tailored-testing-procedures-use-small-item-pools00547nas a2200109 
4500008004100000245010400041210006900145260006600214100001600280700001700296856012400313 1983 eng d00aInfluence of fallible item parameters on test information during adaptive testing (Tech Rep 83-15).0 aInfluence of fallible item parameters on test information during aSan Diego CA: Navy Personnel Research and Development Center.1 aWetzel, C D1 aMcBride, J R uhttp://www.iacat.org/content/influence-fallible-item-parameters-test-information-during-adaptive-testing-tech-rep-83-1500645nas a2200133 4500008004100000245015000041210006900191260006500260100001600325700001600341700001700357700001400374856012300388 1983 eng d00aRelationship between corresponding Armed Services Vocational Aptitude Battery (ASVAB) and computerized adaptive testing (CAT) subtests (TR 83-27)0 aRelationship between corresponding Armed Services Vocational Apt aSan Diego CA: Navy Personnel Research and Development Center1 aMoreno, K E1 aWetzel, D C1 aMcBride, J R1 aWeiss, DJ uhttp://www.iacat.org/content/relationship-between-corresponding-armed-services-vocational-aptitude-battery-asvab-and-000620nas a2200121 4500008004100000245011100041210006900152260010500221100001700326700001600343700001400359856012500373 1983 eng d00aReliability and validity of adaptive ability tests in a military recruit population (Research Report 83-1)0 aReliability and validity of adaptive ability tests in a military aMinneapolis: Department of Psychology, Psychometric Methods Program, Computerized Testing Laboratory1 aMcBride, J R1 aMartin, J T1 aWeiss, DJ uhttp://www.iacat.org/content/reliability-and-validity-adaptive-ability-tests-military-recruit-population-research-report00576nas a2200109 4500008004100000245007700041210006900118260014800187100001700335700001600352856009800368 1983 eng d00aReliability and validity of adaptive ability tests in a military setting0 aReliability and validity of adaptive ability tests in a military aD. J. Weiss (Ed.), New horizons in testing: Latent trait test theory and computerized adaptive testing (pp. 224-236). New York: Academic Press.1 aMcBride, J R1 aMartin, J T uhttp://www.iacat.org/content/reliability-and-validity-adaptive-ability-tests-military-setting00668nas a2200121 4500008004100000245012300041210006900164260014000233100001600373700001700389700001400406856012600420 1983 eng d00aReliability and validity of adaptive vs. conventional tests in a military recruit population (Research Rep. No. 83-1).0 aReliability and validity of adaptive vs conventional tests in a aMinneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory.1 aMartin, J T1 aMcBride, J R1 aWeiss, DJ uhttp://www.iacat.org/content/reliability-and-validity-adaptive-vs-conventional-tests-military-recruit-population-research00437nas a2200121 4500008004100000245007000041210006900111300001200180490000600192100000900198700001700207856009100224 1982 eng d00aAdaptive EAP estimation of ability in a microcomputer environment0 aAdaptive EAP estimation of ability in a microcomputer environmen a431-4440 v61 aBock1 aMislevy, R J uhttp://www.iacat.org/content/adaptive-eap-estimation-ability-microcomputer-environment00509nas a2200097 4500008004100000245008900041210006900130260008000199100001700279856011500296 1982 eng d00aComputerized adaptive testing project: Objectives and requirements (Tech Note 82-22)0 aComputerized adaptive testing project Objectives and requirement aSan Diego CA: Navy Personnel Research and Development Center. 
(AD A118 447)1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing-project-objectives-and-requirements-tech-note-82-2200631nas a2200097 4500008004100000245007700041210006900118260022400187100001700411856010500428 1982 eng d00aComputerized Adaptive Testing system development and project management.0 aComputerized Adaptive Testing system development and project man aMinutes of the ASVAB (Armed Services Vocational Aptitude Battery) Steering Committee. Washington, DC: Office of the Assistant Secretary of Defense (Manpower, Reserve Affairs and Logistics), Accession Policy Directorate.1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing-system-development-and-project-management00593nas a2200109 4500008004100000245006500041210006100106260019200167100001700359700001700376856009000393 1982 eng d00aThe computerized adaptive testing system development project0 acomputerized adaptive testing system development project aD. J. Weiss (Ed.), Proceedings of the 1982 Item Response Theory and Computerized Adaptive Testing Conference (pp. 342-349). Minneapolis: University of Minnesota, Department of Psychology.1 aMcBride, J R1 aSympson, J B uhttp://www.iacat.org/content/computerized-adaptive-testing-system-development-project00446nas a2200097 4500008004100000245009100041210006900132260001900201100001700220856011100237 1982 eng d00aDevelopment of a computerized adaptive testing system for enlisted personnel selection0 aDevelopment of a computerized adaptive testing system for enlist aWashington, DC1 aMcBride, J R uhttp://www.iacat.org/content/development-computerized-adaptive-testing-system-enlisted-personnel-selection00552nas a2200109 4500008004100000245008300041210006900124260011600193100001300309700001700322856010300339 1981 eng d00aA comparison of a Bayesian and a maximum likelihood tailored testing procedure0 acomparison of a Bayesian and a maximum likelihood tailored testi aColumbia MObUniversity of Missouri, Department of Educational Psychology, Tailored Testing Research Laboratory1 aMcKinley1 aReckase, M D uhttp://www.iacat.org/content/comparison-bayesian-and-maximum-likelihood-tailored-testing-procedure00647nas a2200109 4500008004100000245013300041210006900174260013900243100001800382700001400400856012300414 1981 eng d00aFactors influencing the psychometric characteristics of an adaptive testing strategy for test batteries (Research Rep. No. 81-4)0 aFactors influencing the psychometric characteristics of an adapt aMinneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory1 aMaurelli, V A1 aWeiss, DJ uhttp://www.iacat.org/content/factors-influencing-psychometric-characteristics-adaptive-testing-strategy-test-batteries00593nas a2200097 4500008004100000245005800041210005800099260023900157100001700396856008200413 1980 eng d00aAdaptive verbal ability testing in a military setting0 aAdaptive verbal ability testing in a military setting aD. J. Weiss (Ed.), Proceedings of the 1979 Computerized Adaptive Testing Conference (pp. 4-15). 
Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory.1 aMcBride, J R uhttp://www.iacat.org/content/adaptive-verbal-ability-testing-military-setting00373nas a2200121 4500008004100000245004500041210004500086300001200131490000700143100001300150700001700163856007100180 1980 eng d00aComputer applications to ability testing0 aComputer applications to ability testing a193-2030 v131 aMcKinley1 aReckase, M D uhttp://www.iacat.org/content/computer-applications-ability-testing00477nas a2200133 4500008004500000245007200045210006900117300001200186490000600198100001400204700001900218700001300237856009300250 1980 Engldsh 00aImplied Orders Tailored Testing: Simulation with the Stanford-Binet0 aImplied Orders Tailored Testing Simulation with the StanfordBine a157-1630 v41 aCudeck, R1 aMcCormick, D J1 aCliff, N uhttp://www.iacat.org/content/implied-orders-tailored-testing-simulation-stanford-binet-000471nas a2200133 4500008004100000245007200041210006900113300001200182490000600194100001400200700001700214700001500231856009100246 1980 eng d00aImplied orders tailored testing: Simulation with the Stanford-Binet0 aImplied orders tailored testing Simulation with the StanfordBine a157-1630 v41 aCudeck, R1 aMcCormick, D1 aCliff, N A uhttp://www.iacat.org/content/implied-orders-tailored-testing-simulation-stanford-binet00591nas a2200109 4500008004100000245010700041210006900148260010300217100001800320700001700338856012600355 1980 eng d00aA successful application of latent trait theory to tailored achievement testing (Research Report 80-1)0 asuccessful application of latent trait theory to tailored achiev aUniversity of Missouri, Department of Educational Psychology, Tailored Testing Research Laboratory1 aMcKinley, R L1 aReckase, M D uhttp://www.iacat.org/content/successful-application-latent-trait-theory-tailored-achievement-testing-research-report-80-100476nas a2200097 4500008004100000245007300041210006900114260008800183100001700271856009000288 1979 eng d00aAdaptive mental testing: The state of the art (Technical Report 423)0 aAdaptive mental testing The state of the art Technical Report 42 aAlexandria VA: U.S. Army Research Institute for the Behavioral and Social Sciences.1 aMcBride, J R uhttp://www.iacat.org/content/adaptive-mental-testing-state-art-technical-report-423-000551nas a2200097 4500008004100000245006400041210006300105260018000168100001700348856008800365 1979 eng d00aAdaptive tests' usefulness for military personnel screening0 aAdaptive tests usefulness for military personnel screening aIn M. Wiskoff, Chair, Military Applications of Computerized Adaptive Testing. Symposium presented at the Annual Convention of the American Psychological Association, New York.1 aMcBride, J R uhttp://www.iacat.org/content/adaptive-tests-usefulness-military-personnel-screening00495nas a2200097 4500008004100000245008300041210006900124260008900193100001700282856009800299 1979 eng d00aComputerized adaptive testing: The state of the art (ARI Technical Report 423)0 aComputerized adaptive testing The state of the art ARI Technical aAlexandria, VA: U.S. 
Army Research Institute for the Behavioral and Social Sciences.1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing-state-art-ari-technical-report-42300463nas a2200097 4500008004100000245005100041210004800092260013400140100001700274856007400291 1979 eng d00aAn evaluation of computerized adaptive testing0 aevaluation of computerized adaptive testing aIn Proceedings of the 21st Military Testing Association Conference. SanDiego, CA: Navy Personnel Research and Development Center.1 aMcBride, J R uhttp://www.iacat.org/content/evaluation-computerized-adaptive-testing00498nas a2200133 4500008004500000245008600045210006900131300001200200490000600212100001300218700001400231700001900245856010000264 1979 Engldsh 00aEvaluation of Implied Orders as a Basis for Tailored Testing with Simulation Data0 aEvaluation of Implied Orders as a Basis for Tailored Testing wit a495-5140 v31 aCliff, N1 aCudeck, R1 aMcCormick, D J uhttp://www.iacat.org/content/evaluation-implied-orders-basis-tailored-testing-simulation-data-000466nas a2200121 4500008004100000245008600041210006900127300001200196490000600208100001500214700001700229856009800246 1979 eng d00aEvaluation of implied orders as a basis for tailored testing with simulation data0 aEvaluation of implied orders as a basis for tailored testing wit a495-5140 v31 aCliff, N A1 aMcCormick, D uhttp://www.iacat.org/content/evaluation-implied-orders-basis-tailored-testing-simulation-data00479nas a2200133 4500008004100000245007700041210006900118300001000187490000600197100001400203700001900217700001500236856009400251 1979 eng d00aMonte carlo evaluation of implied orders as a basis for tailored testing0 aMonte carlo evaluation of implied orders as a basis for tailored a65-740 v31 aCudeck, R1 aMcCormick, D J1 aCliff, N A uhttp://www.iacat.org/content/monte-carlo-evaluation-implied-orders-basis-tailored-testing00481nas a2200133 4500008004500000245007700045210006900122300001000191490000600201100001400207700001700221700001300238856009600251 1979 Engldsh 00aMonte Carlo Evaluation of Implied Orders As a Basis for Tailored Testing0 aMonte Carlo Evaluation of Implied Orders As a Basis for Tailored a65-740 v31 aCudeck, R1 aMcCormick, D1 aCliff, N uhttp://www.iacat.org/content/monte-carlo-evaluation-implied-orders-basis-tailored-testing-000372nas a2200097 4500008004100000245005900041210005400100260002200154100001700176856008100193 1978 eng d00aAn adaptive test designed for paper-and-pencil testing0 aadaptive test designed for paperandpencil testing aSan Francisco, CA1 aMcBride, J R uhttp://www.iacat.org/content/adaptive-test-designed-paper-and-pencil-testing00522nas a2200097 4500008004100000245007200041210006900113260013000182100001700312856009500329 1978 eng d00aApplications of latent trait theory to criterion-referenced testing0 aApplications of latent trait theory to criterionreferenced testi aD.J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. Minneapolis, MN: University of Minnesota.1 aMcBride, J R uhttp://www.iacat.org/content/applications-latent-trait-theory-criterion-referenced-testing00510nam a2200097 4500008004100000245009200041210006900133260008100202100001800283856011100301 1978 eng d00aA comparison of Bayesian and maximum likelihood scoring in a simulated stradaptive test0 acomparison of Bayesian and maximum likelihood scoring in a simul aUnpublished Masters thesis, St. 
Mary’s University of Texas, San Antonio TX1 aMaurelli, V A uhttp://www.iacat.org/content/comparison-bayesian-and-maximum-likelihood-scoring-simulated-stradaptive-test00591nas a2200121 4500008004100000245010900041210006900150260008100219100001500300700001400315700001700329856012300346 1978 eng d00aEvaluations of implied orders as a basis for tailored testing using simulations (Technical Report No. 4)0 aEvaluations of implied orders as a basis for tailored testing us aLos Angeles CA: University of Southern California, Department of Psychology.1 aCliff, N A1 aCudeck, R1 aMcCormick, D uhttp://www.iacat.org/content/evaluations-implied-orders-basis-tailored-testing-using-simulations-technical-report-no-400528nas a2200121 4500008004100000245007600041210006900117260008100186100001500267700001400282700001700296856009300313 1978 eng d00aImplied orders as a basis for tailored testing (Technical Report No. 6)0 aImplied orders as a basis for tailored testing Technical Report aLos Angeles CA: University of Southern California, Department of Psychology.1 aCliff, N A1 aCudeck, R1 aMcCormick, D uhttp://www.iacat.org/content/implied-orders-basis-tailored-testing-technical-report-no-600407nas a2200097 4500008004100000245004500041210004200086260009600128100001700224856006800241 1977 eng d00aAn adaptive test of arithmetic reasoning0 aadaptive test of arithmetic reasoning athe Proceedings of the Nineteenth Military Testing Association conference, San Antonio, TX.1 aMcBride, J R uhttp://www.iacat.org/content/adaptive-test-arithmetic-reasoning00478nas a2200097 4500008004100000245004100041210003900082260017700121100001700298856006500315 1977 eng d00aA brief overview of adaptive testing0 abrief overview of adaptive testing aD. J. Weiss (Ed.), Applications of computerized testing (Research Report 77-1). Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aMcBride, J R uhttp://www.iacat.org/content/brief-overview-adaptive-testing00525nas a2200097 4500008004100000245005900041210005900100260016300159100001700322856008800339 1977 eng d00aComputerized Adaptive Testing research and development0 aComputerized Adaptive Testing research and development aH. Taylor, Proceedings of the Second Training and Personnel Technology Conference. Washington, DC: Office of the Director of Defense Research and Engineering.1 aMcBride, J R uhttp://www.iacat.org/content/computerized-adaptive-testing-research-and-development00634nas a2200121 4500008004100000245007800041210006900119260018600188100001500374700001400389700001700403856009200420 1977 eng d00aAn empirical evaluation of implied orders as a basis for tailored testing0 aempirical evaluation of implied orders as a basis for tailored t aD. J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aCliff, N A1 aCudeck, R1 aMcCormick, D uhttp://www.iacat.org/content/empirical-evaluation-implied-orders-basis-tailored-testing00570nas a2200097 4500008003900000245007100039210006900110260018500179100001800364856009000382 1977 d00aImplementation of Tailored Testing at the Civil Service Commission0 aImplementation of Tailored Testing at the Civil Service Commissi aD. J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. 
Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aMcKillip, R H uhttp://www.iacat.org/content/implementation-tailored-testing-civil-service-commission00414nas a2200109 4500008004100000245006800041210006800109300001200177490000600189100001700195856009200212 1977 eng d00aSome properties of a Bayesian adaptive ability testing strategy0 aSome properties of a Bayesian adaptive ability testing strategy a121-1400 v11 aMcBride, J R uhttp://www.iacat.org/content/some-properties-bayesian-adaptive-ability-testing-strategy00416nas a2200109 4500008004100000245006800041210006800109300001200177490000600189100001700195856009400212 1977 En d00aSome Properties of a Bayesian Adaptive Ability Testing Strategy0 aSome Properties of a Bayesian Adaptive Ability Testing Strategy a121-1400 v11 aMcBride, J R uhttp://www.iacat.org/content/some-properties-bayesian-adaptive-ability-testing-strategy-000464nas a2200121 4500008004100000245008000041210006900121300001200190490000700202100001700209700001500226856010100241 1977 eng d00aTAILOR-APL: An interactive computer program for individual tailored testing0 aTAILORAPL An interactive computer program for individual tailore a771-7740 v371 aMcCormick, D1 aCliff, N A uhttp://www.iacat.org/content/tailor-apl-interactive-computer-program-individual-tailored-testing00474nas a2200097 4500008004100000245007300041210006900114260008800183100001700271856008800288 1976 eng d00aAdaptive mental testing: The state of the art (Technical Report 423)0 aAdaptive mental testing The state of the art Technical Report 42 aWashington DC: U.S. Army Research Institute for the Social and Behavioral Sciences.1 aMcBride, J R uhttp://www.iacat.org/content/adaptive-mental-testing-state-art-technical-report-42300617nas a2200097 4500008004100000245011800041210006900159260015300228100001700381856012100398 1976 eng d00aAdaptive testing research at Minnesota: Some properties of a Bayesian sequential adaptive mental testing strategy0 aAdaptive testing research at Minnesota Some properties of a Baye aC. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 36-53). Washington DC: U.S. Government Printing Office.1 aMcBride, J R uhttp://www.iacat.org/content/adaptive-testing-research-minnesota-some-properties-bayesian-sequential-adaptive-mental00457nas a2200097 4500008004100000245004400041210004200085260014400127100001700271856007100288 1976 eng d00aBandwidth, fidelity, and adaptive tests0 aBandwidth fidelity and adaptive tests aT. J. McConnell, Jr. (Ed.), CAT/C 2 1975: The second conference on computer-assisted test construction. Atlanta GA: Atlanta Public Schools.1 aMcBride, J R uhttp://www.iacat.org/content/bandwidth-fidelity-and-adaptive-tests00576nas a2200109 4500008004100000245007700041210006900118260015300187100001800340700001400358856009400372 1976 eng d00aComputer-assisted testing: An orderly transition from theory to practice0 aComputerassisted testing An orderly transition from theory to pr aC. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 95-96). Washington DC: U.S. Government Printing Office.1 aMcKillip, R H1 aUrry, V W uhttp://www.iacat.org/content/computer-assisted-testing-orderly-transition-theory-practice00587nas a2200133 4500008004100000245009400041210006900135260007200204100001600276700001500292700001800307700001900325856010900344 1976 eng d00aMonte carlo results from a computer program for tailored testing (Technical Report No. 
2)0 aMonte carlo results from a computer program for tailored testing aLos Angeles CA: University of California, Department of Psychology.1 aCudeck, R A1 aCliff, N A1 aReynolds, T J1 aMcCormick, D J uhttp://www.iacat.org/content/monte-carlo-results-computer-program-tailored-testing-technical-report-no-200435nas a2200097 4500008004100000245007100041210006900112260005200181100001700233856008700250 1976 eng d00aResearch on adaptive testing 1973-1976: A review of the literature0 aResearch on adaptive testing 19731976 A review of the literature aUnpublished manuscript, University of Minnesota1 aMcBride, J R uhttp://www.iacat.org/content/research-adaptive-testing-1973-1976-review-literature00465nam a2200097 4500008004100000245006900041210006800110260008000178100001700258856009200275 1976 eng d00aSimulation studies of adaptive testing: A comparative evaluation0 aSimulation studies of adaptive testing A comparative evaluation aUnpublished doctoral dissertation, University of Minnesota, Minneapolis, MN1 aMcBride, J R uhttp://www.iacat.org/content/simulation-studies-adaptive-testing-comparative-evaluation00542nas a2200109 4500008004100000245009100041210006900132260008700201100001700288700001400305856011300319 1976 eng d00aSome properties of a Bayesian adaptive ability testing strategy (Research Report 76-1)0 aSome properties of a Bayesian adaptive ability testing strategy aMinneapolis MN: Department of Psychology, Computerized Adaptive Testing Laboratory1 aMcBride, J R1 aWeiss, DJ uhttp://www.iacat.org/content/some-properties-bayesian-adaptive-ability-testing-strategy-research-report-76-100485nas a2200097 4500008004100000245002700041210002700068260021900095100001700314856005600331 1975 eng d00aScoring adaptive tests0 aScoring adaptive tests aD. J. Weiss (Ed.), Computerized adaptive trait measurement: Problems and Prospects (Research Report 75-5), pp. 17-25. Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aMcBride, J R uhttp://www.iacat.org/content/scoring-adaptive-tests00585nas a2200109 4500008004100000245006900041210006900110260017300179100001700352700001400369856009200383 1974 eng d00aRecent and projected developments in ability testing by computer0 aRecent and projected developments in ability testing by computer aEarl Jones (Ed.), Symposium Proceedings: Occupational Research and the Navy–Prospectus 1980 (TR-74-14). 
San Diego, CA: Navy Personnel Research and Development Center.1 aMcBride, J R1 aWeiss, DJ uhttp://www.iacat.org/content/recent-and-projected-developments-ability-testing-computer00533nas a2200109 4500008004100000245008700041210006900128260008700197100001700284700001400301856010800315 1974 eng d00aA word knowledge item pool for adaptive ability measurement (Research Report 74-2)0 aword knowledge item pool for adaptive ability measurement Resear aMinneapolis MN: Department of Psychology, Computerized Adaptive Testing Laboratory1 aMcBride, J R1 aWeiss, DJ uhttp://www.iacat.org/content/word-knowledge-item-pool-adaptive-ability-measurement-research-report-74-200406nam a2200097 4500008004100000245005600041210005200097260006100149100001600210856008200226 1972 eng d00aA modification to Lord’s model for tailored tests0 amodification to Lord s model for tailored tests aUnpublished doctoral dissertation, University of Toronto1 aMussio, J J uhttp://www.iacat.org/content/modification-lord%E2%80%99s-model-tailored-tests00451nas a2200121 4500008004100000245004800041210004800089260007000137100001600207700001600223700001800239856007200257 1962 eng d00aExploratory study of a sequential item test0 aExploratory study of a sequential item test aU.S. Army Personnel Research Office, Technical Research Note 129.1 aSeeley, L C1 aMorton, M A1 aAnderson, A A uhttp://www.iacat.org/content/exploratory-study-sequential-item-test00478nas a2200109 4500008004100000245010500041210006900146300001200215490000700227100001600234856011800250 1950 eng d00aSome empirical aspects of the sequential analysis technique as applied to an achievement examination0 aSome empirical aspects of the sequential analysis technique as a a195-2070 v181 aMoonan, W J uhttp://www.iacat.org/content/some-empirical-aspects-sequential-analysis-technique-applied-achievement-examination