TY - JOUR T1 - The Influence of Computerized Adaptive Testing on Psychometric Theory and Practice JF - Journal of Computerized Adaptive Testing Y1 - 2024 A1 - Reckase, Mark D. KW - computerized adaptive testing KW - Item Response Theory KW - paradigm shift KW - scaling theory KW - test design AB -

The major premise of this article is that part of the stimulus for the evolution of psychometric theory since the 1950s was the introduction of the concept of computerized adaptive testing (CAT) or its earlier non-CAT variations. The conceptual underpinning of CAT that had the most influence on psychometric theory was the shift of emphasis from the test (or test score) as the focus of analysis to the test item (or item score). The change in focus allowed a change in the way that test results are conceived of as measurements. It also resolved the conflict among a number of ideas that were present in the early work on psychometric theory. Some of the conflicting ideas are summarized below to show how work on the development of CAT resolved those conflicts.


VL - 11 UR - https://jcatpub.net/index.php/jcat/issue/view/34/9 IS - 1 ER - TY - JOUR T1 - How Do Trait Change Patterns Affect the Performance of Adaptive Measurement of Change? JF - Journal of Computerized Adaptive Testing Y1 - 2023 A1 - Ming Him Tai A1 - Allison W. Cooperman A1 - Joseph N. DeWeese A1 - David J. Weiss KW - adaptive measurement of change KW - computerized adaptive testing KW - longitudinal measurement KW - trait change patterns VL - 10 IS - 3 ER - TY - JOUR T1 - The (non)Impact of Misfitting Items in Computerized Adaptive Testing JF - Journal of Computerized Adaptive Testing Y1 - 2022 A1 - Christine E. DeMars KW - computerized adaptive testing KW - item fit KW - three-parameter logistic model VL - 9 UR - https://jcatpub.net/index.php/jcat/issue/view/26 IS - 2 ER - TY - JOUR T1 - Time-Efficient Adaptive Measurement of Change JF - Journal of Computerized Adaptive Testing Y1 - 2019 A1 - Matthew Finkelman A1 - Chun Wang KW - adaptive measurement of change KW - computerized adaptive testing KW - Fisher information KW - item selection KW - response-time modeling AB -

The adaptive measurement of change (AMC) refers to the use of computerized adaptive testing (CAT) at multiple occasions to efficiently assess a respondent’s improvement, decline, or sameness from occasion to occasion. Whereas previous AMC research focused on administering the most informative item to a respondent at each stage of testing, the current research proposes the use of Fisher information per time unit as an item selection procedure for AMC. The latter procedure incorporates not only the amount of information provided by a given item but also the expected amount of time required to complete it. In a simulation study, the use of Fisher information per time unit item selection resulted in a lower false positive rate in the majority of conditions studied, and a higher true positive rate in all conditions studied, compared to item selection via Fisher information without accounting for the expected time taken. Future directions of research are suggested.

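To make the selection rule concrete, the following is a minimal Python sketch of information-per-time-unit item selection for 2PL items. The per-item expected completion times and all parameter values are illustrative assumptions, not quantities from the article.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_item_per_time(theta_hat, a, b, exp_time, administered):
    """Pick the unused item maximizing information per expected time unit."""
    ratio = info_2pl(theta_hat, a, b) / exp_time
    used = np.fromiter(administered, dtype=int, count=len(administered))
    ratio[used] = -np.inf  # exclude items already given
    return int(np.argmax(ratio))

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 100)           # discriminations (simulated bank)
b = rng.normal(0.0, 1.0, 100)            # difficulties
exp_time = rng.uniform(20.0, 90.0, 100)  # assumed expected seconds per item
print(select_item_per_time(0.0, a, b, exp_time, administered=[3, 17]))
```

Dividing information by expected time favors items that buy precision cheaply in testing time, which is the intuition behind the lower false positive and higher true positive rates reported above.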
VL - 7 UR - http://iacat.org/jcat/index.php/jcat/article/view/73/35 IS - 2 ER - TY - JOUR T1 - catR: An R Package for Computerized Adaptive Testing JF - Applied Psychological Measurement Y1 - 2011 A1 - Magis, D. A1 - Raîche, G. KW - computer program KW - computerized adaptive testing KW - Estimation KW - Item Response Theory AB -

Computerized adaptive testing (CAT) is an active current research field in psychometrics and educational measurement. However, there is very little software available to handle such adaptive tasks. The R package catR was developed to perform adaptive testing with as much flexibility as possible, in an attempt to provide a developmental and testing platform to the interested user. Several item-selection rules and ability estimators are implemented. The item bank can be provided by the user or randomly generated from parent distributions of item parameters. Three stopping rules are available. The output can be graphically displayed.

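catR itself is an R package; purely as an illustration of the adaptive loop that such software automates (maximum-information item selection, interim ability estimation, a stopping rule), here is a hedged Python sketch under a 2PL model. None of the names below are catR's API.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 200)      # simulated 2PL item bank
b = rng.normal(0.0, 1.0, 200)
grid = np.linspace(-4.0, 4.0, 161)  # quadrature grid for EAP
prior = np.exp(-0.5 * grid**2)      # N(0, 1) prior, up to a constant

true_theta, theta_hat = 0.7, 0.0
used, like = [], np.ones_like(grid)
for _ in range(20):                 # fixed-length stopping rule
    p = p_2pl(theta_hat, a, b)
    info = a**2 * p * (1.0 - p)     # 2PL information at current estimate
    info[np.array(used, dtype=int)] = -np.inf
    j = int(np.argmax(info))        # maximum-information selection
    u = rng.random() < p_2pl(true_theta, a[j], b[j])  # simulated response
    used.append(j)
    pj = p_2pl(grid, a[j], b[j])
    like *= pj if u else 1.0 - pj   # update likelihood on the grid
    post = like * prior
    theta_hat = float(np.sum(grid * post) / np.sum(post))  # EAP estimate
print(f"final EAP estimate: {theta_hat:.2f}")
```

Swapping the selection rule, the estimator, or the stopping condition in this loop corresponds to the options catR exposes for each component.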
ER - TY - JOUR T1 - Bayesian item selection in constrained adaptive testing JF - Psicologica Y1 - 2010 A1 - Veldkamp, B. P. KW - computerized adaptive testing AB - Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item selection process. The Shadow Test Approach is a general purpose algorithm for administering constrained CAT. In this paper it is shown how the approach can be slightly modified to handle Bayesian item selection criteria. No differences in performance were found between the shadow test approach and the modified approach. In a simulation study of the LSAT, the effects of Bayesian item selection criteria are illustrated. The results are compared to item selection based on Fisher Information. General recommendations about the use of Bayesian item selection criteria are provided. VL - 31 ER - TY - JOUR T1 - Detection of aberrant item score patterns in computerized adaptive testing: An empirical example using the CUSUM JF - Personality and Individual Differences Y1 - 2010 A1 - Egberink, I. J. L. A1 - Meijer, R. R. A1 - Veldkamp, B. P. A1 - Schakel, L. A1 - Smid, N. G. KW - CAT KW - computerized adaptive testing KW - CUSUM approach KW - person Fit AB - The scalability of individual trait scores on a computerized adaptive test (CAT) was assessed through investigating the consistency of individual item score patterns. A sample of N = 428 persons completed a personality CAT as part of a career development procedure. To detect inconsistent item score patterns, we used a cumulative sum (CUSUM) procedure. Combined information from the CUSUM, other personality measures, and interviews showed that similar estimated trait values may have a different interpretation. Implications for computer-based assessment are discussed. VL - 48 SN - 0191-8869 ER - TY - JOUR T1 - Item Selection and Hypothesis Testing for the Adaptive Measurement of Change JF - Applied Psychological Measurement Y1 - 2010 A1 - Finkelman, M. D. A1 - Weiss, D. J. A1 - Kim-Kang, G. KW - change KW - computerized adaptive testing KW - individual change KW - Kullback–Leibler information KW - likelihood ratio KW - measuring change AB -

Assessing individual change is an important topic in both psychological and educational measurement. An adaptive measurement of change (AMC) method had previously been shown to exhibit greater efficiency in detecting change than conventional nonadaptive methods. However, little work had been done to compare different procedures within the AMC framework. This study introduced a new item selection criterion and two new test statistics for detecting change with AMC that were specifically designed for the paradigm of hypothesis testing. In two simulation sets, the new methods for detecting significant change improved on existing procedures by demonstrating better adherence to Type I error rates and substantially better power for detecting relatively small change. 

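The article develops its own item selection criterion and test statistics; purely as a generic illustration of the hypothesis-testing framing, here is a hedged sketch of a simple two-occasion Z test for change computed from interim ability estimates and their standard errors. All values are illustrative, and this is not one of the article's proposed statistics.

```python
import math

def z_test_for_change(theta1, se1, theta2, se2, z_crit=1.96):
    """Two-sided normal-theory test of H0: no change between occasions,
    using each occasion's ability estimate and standard error."""
    z = (theta2 - theta1) / math.sqrt(se1**2 + se2**2)
    return z, abs(z) > z_crit

z, changed = z_test_for_change(theta1=-0.40, se1=0.32, theta2=0.35, se2=0.30)
print(f"z = {z:.2f}, significant change detected: {changed}")
```

Adherence to the nominal Type I error rate then amounts to the rejection rule flagging change in about 5% of cases when the two true trait levels are equal.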
VL - 34 IS - 4 ER - TY - JOUR T1 - The D-optimality item selection criterion in the early stage of CAT: A study with the graded response model JF - Journal of Educational and Behavioral Statistics Y1 - 2008 A1 - Passos, V. L. A1 - Berger, M. P. F. A1 - Tan, F. E. S. KW - computerized adaptive testing KW - D optimality KW - item selection AB - During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher’s information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the Nominal Response Model and is shown herein to be reproducible for the Graded Response Model. For comparative purposes, the A-optimality and the global information criteria are also applied. Their item selection is investigated as a function of test progression and item bank composition. The results indicate how the selection of specific item parameters underlies each criterion's performance, evaluated via accuracy and precision of estimation. In addition, the criteria's item exposure rates are compared, without the use of any exposure-control measure. On account of stability, precision, accuracy, numerical simplicity, and, less evidently, item exposure rate, the D-optimality criterion can be recommended for CAT. VL - 33 ER - TY - JOUR T1 - ICAT: An adaptive testing procedure for the identification of idiosyncratic knowledge patterns JF - Zeitschrift für Psychologie Y1 - 2008 A1 - Kingsbury, G. G. A1 - Houser, R.L. KW - computerized adaptive testing AB -

Traditional adaptive tests provide an efficient method for estimating student achievement levels, by adjusting the characteristics of the test questions to match the performance of each student. These traditional adaptive tests are not designed to identify idiosyncratic knowledge patterns. As students move through their education, they learn content in any number of different ways related to their learning style and cognitive development. This may result in a student having different achievement levels from one content area to another within a domain of content. This study investigates whether such idiosyncratic knowledge patterns exist. It discusses the differences between idiosyncratic knowledge patterns and multidimensionality. Finally, it proposes an adaptive testing procedure that can be used to identify a student’s areas of strength and weakness more efficiently than current adaptive testing approaches. The findings of the study indicate that a fairly large number of students may have test results that are influenced by their idiosyncratic knowledge patterns. The findings suggest that these patterns persist across time for a large number of students, and that the differences in student performance between content areas within a subject domain are large enough to allow them to be useful in instruction. Given the existence of idiosyncratic patterns of knowledge, the proposed testing procedure may enable us to provide more useful information to teachers. It should also allow us to differentiate between idiosyncratic patterns of knowledge and important multidimensionality in the testing data.

VL - 216 ER - TY - JOUR T1 - Some new developments in adaptive testing technology JF - Zeitschrift für Psychologie Y1 - 2008 A1 - van der Linden, W. J. KW - computerized adaptive testing AB -

In an ironic twist of history, modern psychological testing has returned to an adaptive format quite common when testing was not yet standardized. Important stimuli to the renewed interest in adaptive testing have been the development of item-response theory in psychometrics, which models the responses on test items using separate parameters for the items and test takers, and the use of computers in test administration, which enables us to estimate the parameter for a test taker and select the items in real time. This article reviews a selection from the latest developments in the technology of adaptive testing, such as constrained adaptive item selection, adaptive testing using rule-based item generation, multidimensional adaptive testing, adaptive use of test batteries, and the use of response times in adaptive testing.

VL - 216 ER - TY - Generic T1 - Computerized classification testing with composite hypotheses T2 - GMAC Conference on Computerized Adaptive Testing Y1 - 2007 A1 - Thompson, N. A. A1 - Ro, S. KW - computerized adaptive testing JF - GMAC Conference on Computerized Adaptive Testing PB - Graduate Management Admissions Council CY - St. Paul, MN N1 - Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing. Retrieved [date] from www.psych.umn.edu/psylabs/CATCentral ER - TY - JOUR T1 - Methods for restricting maximum exposure rate in computerized adaptative testing JF - Methodology: European Journal of Research Methods for the Behavioral and Social Sciences Y1 - 2007 A1 - Barrada, J A1 - Olea, J. A1 - Ponsoda, V. KW - computerized adaptive testing KW - item bank security KW - item exposure control KW - overlap rate KW - Sympson-Hetter method AB - The Sympson-Hetter (1985) method provides a means of controlling the maximum exposure rate of items in computerized adaptive testing. Through a series of simulations, control parameters are set that mark the probability of administration of an item on being selected. This method presents two main problems: it requires a long computation time for calculating the parameters, and the maximum exposure rate is slightly above the fixed limit. Van der Linden (2003) presented two alternatives which appear to solve both of these problems. The impact of these methods on measurement accuracy has not yet been tested. We show how these methods over-restrict the exposure of some highly discriminating items and, thus, the accuracy is decreased. It is also shown that, when the desired maximum exposure rate is near the minimum possible value, these methods offer an empirical maximum exposure rate clearly above the goal. A new method, based on the initial estimation of the probability of administration and the probability of selection of the items with the restricted method (Revuelta & Ponsoda, 1998), is presented in this paper. It can be used with the Sympson-Hetter method and with the two van der Linden methods. This option, when used with Sympson-Hetter, speeds the convergence of the control parameters without decreasing the accuracy. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Hogrefe & Huber Publishers GmbH: Germany VL - 3 SN - 1614-1881 (Print); 1614-2241 (Electronic) ER - TY - Generic T1 - A practitioner's guide to variable-length computerized classification testing Y1 - 2007 A1 - Thompson, N. A. KW - CAT KW - classification KW - computer adaptive testing KW - computerized adaptive testing KW - Computerized classification testing AB - Variable-length computerized classification tests (CCTs; Lin & Spray, 2000; Thompson, 2006) are a powerful and efficient approach to testing for the purpose of classifying examinees into groups. CCTs are designed by the specification of at least five technical components: psychometric model, calibrated item bank, starting point, item selection algorithm, and termination criterion. Several options exist for each of these CCT components, creating a myriad of possible designs. Confusion among designs is exacerbated by the lack of a standardized nomenclature. This article outlines the components of a CCT, common options for each component, and the interaction of options for different components, so that practitioners may more efficiently design CCTs. It also offers a suggestion of nomenclature.
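Of the five CCT components listed above, the termination criterion is the most algorithmic; one common variable-length choice is Wald's sequential probability ratio test (SPRT) around a cut score. The sketch below uses 2PL items and illustrative parameter values; it is a textbook option, not the article's prescription.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def sprt_decision(responses, a, b, theta_c, delta=0.3, alpha=0.05, beta=0.05):
    """Classify via the log likelihood ratio of theta_c + delta versus
    theta_c - delta on the items answered so far."""
    u = np.asarray(responses, dtype=float)
    p_hi, p_lo = p_2pl(theta_c + delta, a, b), p_2pl(theta_c - delta, a, b)
    llr = np.sum(u * np.log(p_hi / p_lo)
                 + (1 - u) * np.log((1 - p_hi) / (1 - p_lo)))
    upper = np.log((1 - beta) / alpha)   # Wald's decision bounds
    lower = np.log(beta / (1 - alpha))
    if llr >= upper:
        return "pass"
    if llr <= lower:
        return "fail"
    return "continue"  # administer another item

a = np.array([1.2, 0.9, 1.5, 1.1])
b = np.array([0.1, -0.3, 0.4, 0.0])
print(sprt_decision([1, 1, 1, 0], a, b, theta_c=0.0))
```

The other components (bank, starting point, selection rule) plug into the same loop, which is why the nomenclature for each component matters for comparing designs.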
JF - Practical Assessment, Research and Evaluation VL - 12 ER - TY - JOUR T1 - Test design optimization in CAT early stage with the nominal response model JF - Applied Psychological Measurement Y1 - 2007 A1 - Passos, V. L. A1 - Berger, M. P. F. A1 - Tan, F. E. KW - computerized adaptive testing KW - nominal response model KW - robust performance KW - test design optimization AB - The early stage of computerized adaptive testing (CAT) refers to the phase of the trait estimation during the administration of only a few items. This phase can be characterized by bias and instability of estimation. In this study, an item selection criterion is introduced in an attempt to lessen this instability: the D-optimality criterion. A polytomous unconstrained CAT simulation is carried out to evaluate this criterion's performance under different test premises. The simulation shows that the extent of early stage instability depends primarily on the quality of the item pool information and its size and secondarily on the item selection criteria. The efficiency of the D-optimality criterion is similar to the efficiency of other known item selection criteria. Yet, it often yields estimates that, at the beginning of CAT, display a more robust performance against instability. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Sage Publications: US VL - 31 SN - 0146-6216 (Print) ER - TY - JOUR T1 - Adaptive success control in computerized adaptive testing JF - Psychology Science Y1 - 2006 A1 - Häusler, Joachim KW - adaptive success control KW - computerized adaptive testing KW - Psychometrics AB - In computerized adaptive testing (CAT) procedures within the framework of probabilistic test theory the difficulty of an item is adjusted to the ability of the respondent, with the aim of maximizing the amount of information generated per item, thereby also increasing test economy and test reasonableness. However, earlier research indicates that respondents might feel over-challenged by a constant success probability of p = 0.5 and therefore cannot come to a sufficiently high answer certainty within a reasonable timeframe. Consequently response time per item increases, which -- depending on the test material -- can outweigh the benefit of administering optimally informative items. Instead of a benefit, the result of using CAT procedures could be a loss of test economy. Based on this problem, an adaptive success control algorithm was designed and tested, adapting the success probability to the working style of the respondent. Persons who need higher answer certainty in order to come to a decision are detected and receive a higher success probability, in order to minimize the test duration (not the number of items as in classical CAT). The method is validated on the re-analysis of data from the Adaptive Matrices Test (AMT, Hornke, Etzel & Rettig, 1999) and by the comparison between an AMT version using classical CAT and an experimental version using Adaptive Success Control. The results are discussed in the light of psychometric and psychological aspects of test quality. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Pabst Science Publishers: Germany VL - 48 SN - 0033-3018 (Print) ER - TY - JOUR T1 - Assembling a computerized adaptive testing item pool as a set of linear tests JF - Journal of Educational and Behavioral Statistics Y1 - 2006 A1 - van der Linden, W. J. A1 - Ariel, A. A1 - Veldkamp, B. P. 
KW - Algorithms KW - computerized adaptive testing KW - item pool KW - linear tests KW - mathematical models KW - statistics KW - Test Construction KW - Test Items AB - Test-item writing efforts typically result in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less than optimal information, that violate the content constraints, and/or have unfavorable exposure rates. Although at first sight somewhat counterintuitive, it is shown that if the CAT pool is assembled as a set of linear test forms, undesirable correlations can be broken down effectively. It is proposed to assemble such pools using a mixed integer programming model with constraints that guarantee that each test meets all content specifications and an objective function that requires them to have maximal information at a well-chosen set of ability values. An empirical example with a previous master pool from the Law School Admission Test (LSAT) yielded a CAT with nearly uniform bias and mean-squared error functions for the ability estimator and item-exposure rates that satisfied the target for all items in the pool. PB - Sage Publications: US VL - 31 SN - 1076-9986 (Print) ER - TY - JOUR T1 - Comparing methods of assessing differential item functioning in a computerized adaptive testing environment JF - Journal of Educational Measurement Y1 - 2006 A1 - Lei, P-W. A1 - Chen, S-Y. A1 - Yu, L. KW - computerized adaptive testing KW - educational testing KW - item response theory likelihood ratio test KW - logistic regression KW - trait estimation KW - unidirectional & non-unidirectional differential item functioning AB - Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional and non-unidirectional DIF, to the CAT environment in which pretest items are assumed to be seeded in CATs but not used for trait estimation. The proposed adaptation methods were evaluated with simulated data under different sample size ratios and impact conditions in terms of Type I error, power, and specificity in identifying the form of DIF. The adapted LR and IRT-LRT procedures are more powerful than the CAT version of SIBTEST for non-unidirectional DIF detection. The good Type I error control provided by IRT-LRT under extremely unequal sample sizes and large impact is encouraging. Implications of these and other findings are discussed. PB - Blackwell Publishing: United Kingdom VL - 43 SN - 0022-0655 (Print) ER - TY - JOUR T1 - The comparison among item selection strategies of CAT with multiple-choice items JF - Acta Psychologica Sinica Y1 - 2006 A1 - Hai-qi, D. A1 - De-zhi, C. A1 - Shuliang, D. A1 - Taiping, D. KW - CAT KW - computerized adaptive testing KW - graded response model KW - item selection strategies KW - multiple choice items AB - The initial purpose of comparing item selection strategies for CAT was to increase the efficiency of tests. As studies continued, however, it was found that increasing the efficiency of item bank usage was also an important goal of comparing item selection strategies. These two goals often conflicted.
The key solution was to find a strategy with which both goals could be accomplished. The item selection strategies for the graded response model in this study included: the average of the difficulty orders matching the ability; the median of the difficulty orders matching the ability; maximum information; a-stratified (average); and a-stratified (median). The evaluation indexes used for comparison included: the bias of the ability estimates relative to the true values; the standard error of the ability estimates; the average number of items administered to the examinees; the standard deviation of the frequency of items selected; and the weighted sum of these indices. Using the Monte Carlo simulation method, data were generated and the simulation was iterated 20 times each under conditions in which the item difficulty parameters followed a normal distribution or a uniform distribution. The results indicated that, whether the difficulty parameters followed a normal or a uniform distribution, every item selection strategy designed in this research had its strong and weak points. In the overall evaluation, under the condition that items were stratified appropriately, a-stratified (median) (ASM) had the best effect. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Science Press: China VL - 38 SN - 0439-755X (Print) ER - TY - JOUR T1 - Equating scores from adaptive to linear tests JF - Applied Psychological Measurement Y1 - 2006 A1 - van der Linden, W. J. KW - computerized adaptive testing KW - equipercentile equating KW - local equating KW - score reporting KW - test characteristic function AB - Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test for a population of test takers. The two local methods were generally best. Surprisingly, the TCF method performed slightly worse than the equipercentile method. Both methods showed strong bias and uniformly large inaccuracy, but the TCF method suffered from extra error due to the lower asymptote of the test characteristic function. It is argued that the worse performances of the two methods are a consequence of the fact that they use a single equating transformation for an entire population of test takers and therefore have to compromise between the individual score distributions. PB - Sage Publications: US VL - 30 SN - 0146-6216 (Print) ER - TY - JOUR T1 - Expansion of a physical function item bank and development of an abbreviated form for clinical research JF - Journal of Applied Measurement Y1 - 2006 A1 - Bode, R. K. A1 - Lai, J-S. A1 - Dineen, K. A1 - Heinemann, A. W. A1 - Shevrin, D. A1 - Von Roenn, J. A1 - Cella, D. KW - clinical research KW - computerized adaptive testing KW - performance levels KW - physical function item bank KW - Psychometrics KW - test reliability KW - Test Validity AB - We expanded an existing 33-item physical function (PF) item bank with a sufficient number of items to enable computerized adaptive testing (CAT). Ten items were written to expand the bank and the new item pool was administered to 295 people with cancer. For this analysis of the new pool, seven poorly performing items were identified for further examination.
This resulted in a bank with items that define an essentially unidimensional PF construct, cover a wide range of that construct, reliably measure the PF of persons with cancer, and distinguish differences in self-reported functional performance levels. We also developed a 5-item (static) assessment form ("BriefPF") that can be used in clinical research to express scores on the same metric as the overall bank. The BriefPF was compared to the PF-10 from the Medical Outcomes Study SF-36. Both short forms significantly differentiated persons across functional performance levels. While the entire bank was more precise across the PF continuum than either short form, there were differences in the area of the continuum in which each short form was more precise: the BriefPF was more precise than the PF-10 at the lower functional levels and the PF-10 was more precise than the BriefPF at the higher levels. Future research on this bank will include the development of a CAT version, the PF-CAT. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Richard M Smith: US VL - 7 SN - 1529-7713 (Print) ER - TY - JOUR T1 - [Item Selection Strategies of Computerized Adaptive Testing based on Graded Response Model.] JF - Acta Psychologica Sinica Y1 - 2006 A1 - Ping, Chen A1 - Shuliang, Ding A1 - Haijing, Lin A1 - Jie, Zhou KW - computerized adaptive testing KW - item selection strategy AB - Item selection strategy (ISS) is an important component of Computerized Adaptive Testing (CAT). Its performance directly affects the security, efficiency, and precision of the test. Thus, the ISS has become one of the central issues in CATs based on the Graded Response Model (GRM). It is well known that the goal of the ISS is to administer the next unused item remaining in the item bank that best fits the examinee's current ability estimate. In dichotomous IRT models, every item has only one difficulty parameter, and the item whose difficulty matches the examinee's current ability estimate is considered to be the best-fitting item. However, in the GRM, each item has more than two ordered categories and no single value to represent the item difficulty. Consequently, some researchers have employed the average or the median difficulty value across categories as the difficulty estimate for the item. Using the average value and the median value in effect introduced two corresponding ISSs. In this study, we used computer simulation to compare four ISSs based on the GRM. We also discussed the effect of a "shadow pool" on the uniformity of pool usage as well as the influence of different item parameter distributions and different ability estimation methods on the evaluation criteria of CAT. In the simulation process, the Monte Carlo method was adopted to simulate the entire CAT process; 1,000 examinees drawn from a standard normal distribution and four 1,000-item pools with different item parameter distributions were also simulated. The simulation assumed that each polytomous item is composed of six ordered categories. In addition, ability estimates were derived using two methods: expected a posteriori Bayesian (EAP) and maximum likelihood estimation (MLE). In MLE, the Newton-Raphson iteration method and the Fisher scoring iteration method were employed, respectively, to solve the likelihood equation. Moreover, the CAT process was simulated 30 times for each examinee to eliminate random error.
The ISSs were evaluated by four indices commonly used in CAT, covering four aspects: the accuracy of ability estimation, the stability of the ISS, the usage of the item pool, and the test efficiency. Simulation results showed an adequate evaluation for the ISS that matched the estimate of an examinee's current trait level with the difficulty values across categories. Setting a "shadow pool" in the ISS improved the uniformity of pool utilization. Finally, different distributions of the item parameters and different ability estimation methods affected the evaluation indices of CAT. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Science Press: China VL - 38 SN - 0439-755X (Print) ER - TY - JOUR T1 - Simulated computerized adaptive test for patients with lumbar spine impairments was efficient and produced valid measures of function JF - Journal of Clinical Epidemiology Y1 - 2006 A1 - Hart, D. L. A1 - Mioduski, J. E. A1 - Werneke, M. W. A1 - Stratford, P. W. KW - Back Pain Functional Scale KW - computerized adaptive testing KW - Item Response Theory KW - Lumbar spine KW - Rehabilitation KW - True-score equating AB - Objective: To equate physical functioning (PF) items with Back Pain Functional Scale (BPFS) items, develop a computerized adaptive test (CAT) designed to assess lumbar spine functional status (LFS) in people with lumbar spine impairments, and compare discriminant validity of LFS measures (θIRT) generated using all items analyzed with a rating scale Item Response Theory model (RSM) and measures generated using the simulated CAT (θCAT). Methods: We performed a secondary analysis of retrospective intake rehabilitation data. Results: Unidimensionality and local independence of 25 BPFS and PF items were supported. Differential item functioning was negligible for levels of symptom acuity, gender, age, and surgical history. The RSM fit the data well. A lumbar spine specific CAT was developed that was 72% more efficient than using all 25 items to estimate LFS measures. θIRT and θCAT measures did not discriminate patients by symptom acuity, age, or gender, but discriminated patients by surgical history in similar clinically logical ways. θCAT measures were as precise as θIRT measures. Conclusion: A body part specific simulated CAT developed from an LFS item bank was efficient and produced precise measures of LFS without eroding discriminant validity. VL - 59 ER - TY - JOUR T1 - Simulated computerized adaptive test for patients with shoulder impairments was efficient and produced valid measures of function JF - Journal of Clinical Epidemiology Y1 - 2006 A1 - Hart, D. L. A1 - Cook, K. F. A1 - Mioduski, J. E. A1 - Teal, C. R. A1 - Crane, P. K. KW - computerized adaptive testing KW - Flexilevel Scale of Shoulder Function KW - Item Response Theory KW - Rehabilitation AB -

Background and Objective: To test unidimensionality and local independence of a set of shoulder functional status (SFS) items, develop a computerized adaptive test (CAT) of the items using a rating scale item response theory model (RSM), and compare discriminant validity of measures generated using all items (θIRT) and measures generated using the simulated CAT (θCAT).
Study Design and Setting: We performed a secondary analysis of data collected prospectively during rehabilitation of 400 patients with shoulder impairments who completed 60 SFS items.
Results: Factor analytic techniques supported that 42 of the SFS items formed a unidimensional scale and were locally independent. Except for five items, which were deleted, the RSM fit the data well. The remaining 37 SFS items were used to generate the CAT. On average, 6 items were needed to estimate precise measures of function using the SFS CAT, compared with all 37 SFS items. The θIRT and θCAT measures were highly correlated (r = .96) and resulted in similar classifications of patients.
Conclusion: The simulated SFS CAT was efficient and produced precise, clinically relevant measures of functional status with good discriminating ability.

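The efficiency reported above (about 6 of 37 items) comes from stopping as soon as the provisional estimate is precise enough. Below is a hedged sketch of the standard precision-based stopping check, using the asymptotic result SE(theta) ~ 1/sqrt(test information); for brevity it uses 2PL items rather than the rating scale model, and the threshold and parameters are illustrative.

```python
import numpy as np

def should_stop(theta_hat, a_adm, b_adm, se_target=0.30):
    """Stop when the asymptotic standard error at the current estimate,
    1/sqrt(sum of item information), falls below the target."""
    p = 1.0 / (1.0 + np.exp(-a_adm * (theta_hat - b_adm)))
    test_info = np.sum(a_adm**2 * p * (1.0 - p))
    return (1.0 / np.sqrt(test_info)) <= se_target

# Six highly discriminating PRO-style items administered so far (illustrative)
a_adm = np.array([3.0, 2.8, 3.1, 2.9, 3.2, 2.7])
b_adm = np.array([-0.2, 0.1, 0.4, -0.5, 0.0, 0.3])
print(should_stop(0.1, a_adm, b_adm))
```

With highly informative clinical items, a handful of well-targeted administrations can reach the same precision target as the full fixed form, which is the pattern the abstract describes.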
VL - 59 IS - 3 ER - TY - JOUR T1 - Increasing the homogeneity of CAT's item-exposure rates by minimizing or maximizing varied target functions while assembling shadow tests JF - Journal of Educational Measurement Y1 - 2005 A1 - Li, Y. H. A1 - Schafer, W. D. KW - algorithm KW - computerized adaptive testing KW - item exposure rate KW - shadow test KW - varied target function AB - A computerized adaptive testing (CAT) algorithm that has the potential to increase the homogeneity of CAT's item-exposure rates without significantly sacrificing the precision of ability estimates was proposed and assessed in the shadow-test (van der Linden & Reese, 1998) CAT context. This CAT algorithm was formed by a combination of maximizing or minimizing varied target functions while assembling shadow tests. There were four target functions, used separately in the first, second, third, and fourth quarters of the CAT. The elements to be used in the four functions were associated with (a) a random number assigned to each item, (b) the absolute difference between an examinee's current ability estimate and an item difficulty, (c) the absolute difference between an examinee's current ability estimate and an optimum item difficulty, and (d) item information. The results indicated that this combined CAT fully utilized all the items in the pool, reduced the maximum exposure rates, and achieved more homogeneous exposure rates. Moreover, its precision in recovering ability estimates was similar to that of the maximum item-information method. The combined CAT method resulted in the best overall results compared with the other individual CAT item-selection methods. The findings from the combined CAT are encouraging. Future uses are discussed. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Blackwell Publishing: United Kingdom VL - 42 SN - 0022-0655 (Print) ER - TY - JOUR T1 - An item response theory-based pain item bank can enhance measurement precision JF - Journal of Pain and Symptom Management Y1 - 2005 A1 - Lai, J-S. A1 - Dineen, K. A1 - Reeve, B. B. A1 - Von Roenn, J. A1 - Shervin, D. A1 - McGuire, M. A1 - Bode, R. K. A1 - Paice, J. A1 - Cella, D. KW - computerized adaptive testing AB - Cancer-related pain is often under-recognized and undertreated. This is partly due to the lack of appropriate assessments, which need to be comprehensive and precise yet easily integrated into clinics. Computerized adaptive testing (CAT) can enable precise-yet-brief assessments by only selecting the most informative items from a calibrated item bank. The purpose of this study was to create such a bank. The sample included 400 cancer patients who were asked to complete 61 pain-related items. Data were analyzed using factor analysis and the Rasch model. The final bank consisted of 43 items which satisfied the measurement requirement of factor analysis and the Rasch model, demonstrated high internal consistency and reasonable item-total correlations, and discriminated patients with differing degrees of pain. We conclude that this bank demonstrates good psychometric properties, is sensitive to pain reported by patients, and can be used as the foundation for a CAT pain-testing platform for use in clinical practice. VL - 30 N1 - 0885-3924 Journal Article ER - TY - JOUR T1 - The promise of PROMIS: using item response theory to improve assessment of patient-reported outcomes JF - Clinical and Experimental Rheumatology Y1 - 2005 A1 - Fries, J.F. A1 - Bruce, B. A1 - Cella, D.
KW - computerized adaptive testing AB - PROMIS (Patient-Reported Outcomes Measurement Information System) is an NIH Roadmap network project intended to improve the reliability, validity, and precision of PROs and to provide definitive new instruments that will exceed the capabilities of classic instruments and enable improved outcome measurement for clinical research across all NIH institutes. Item response theory (IRT) measurement models now permit us to transition conventional health status assessment into an era of item banking and computerized adaptive testing (CAT). Item banking uses IRT measurement models and methods to develop item banks from large pools of items from many available questionnaires. IRT allows the reduction and improvement of items and assembles domains of items which are unidimensional and not excessively redundant. CAT provides a model-driven algorithm and software to iteratively select the most informative remaining item in a domain until a desired degree of precision is obtained. Through these approaches the number of patients required for a clinical trial may be reduced while holding statistical power constant. PROMIS tools, expected to improve precision and enable assessment at the individual patient level, which should broaden the appeal of PROs, will begin to be available to the general medical community in 2008. VL - 23 ER - TY - JOUR T1 - Computerized adaptive testing with multiple-form structures JF - Applied Psychological Measurement Y1 - 2004 A1 - Armstrong, R. D. A1 - Jones, D. H. A1 - Koppel, N. B. A1 - Pashley, P. J. KW - computerized adaptive testing KW - Law School Admission Test KW - multiple-form structure KW - testlets AB - A multiple-form structure (MFS) is an ordered collection or network of testlets (i.e., sets of items). An examinee's progression through the network of testlets is dictated by the correctness of the examinee's answers, thereby adapting the test to his or her trait level. The collection of paths through the network yields the set of all possible test forms, allowing test specialists the opportunity to review them before they are administered. Also, limiting the exposure of an individual MFS to a specific period of time can enhance test security. This article provides an overview of methods that have been developed to generate parallel MFSs. The approach is applied to the assembly of an experimental computerized Law School Admission Test (LSAT). (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Sage Publications: US VL - 28 SN - 0146-6216 (Print) ER - TY - JOUR T1 - Constraining item exposure in computerized adaptive testing with shadow tests JF - Journal of Educational and Behavioral Statistics Y1 - 2004 A1 - van der Linden, W. J. A1 - Veldkamp, B. P. KW - computerized adaptive testing KW - item exposure control KW - item ineligibility constraints KW - Probability KW - shadow tests AB - Item-exposure control in computerized adaptive testing is implemented by imposing item-ineligibility constraints on the assembly process of the shadow tests. The method resembles Sympson and Hetter’s (1985) method of item-exposure control in that the decisions to impose the constraints are probabilistic. The method does not, however, require time-consuming simulation studies to set values for control parameters before the operational use of the test. Instead, it can set the probabilities of item ineligibility adaptively during the test using the actual item-exposure rates.
An empirical study using an item pool from the Law School Admission Test showed that application of the method yielded perfect control of the item-exposure rates and had negligible impact on the bias and mean-squared error functions of the ability estimator. PB - American Educational Research Assn: US VL - 29 SN - 1076-9986 (Print) ER - TY - JOUR T1 - The development and evaluation of a software prototype for computer-adaptive testing JF - Computers and Education Y1 - 2004 A1 - Lilley, M A1 - Barker, T A1 - Britton, C KW - computerized adaptive testing VL - 43 ER - TY - JOUR T1 - Effects of practical constraints on item selection rules at the early stages of computerized adaptive testing JF - Journal of Educational Measurement Y1 - 2004 A1 - Chen, S-Y. A1 - Ankenmann, R. D. KW - computerized adaptive testing KW - item selection rules KW - practical constraints AB - The purpose of this study was to compare the effects of four item selection rules--(1) Fisher information (F), (2) Fisher information with a posterior distribution (FP), (3) Kullback-Leibler information with a posterior distribution (KP), and (4) completely randomized item selection (RN)--with respect to the precision of trait estimation and the extent of item usage at the early stages of computerized adaptive testing. The comparison of the four item selection rules was carried out under three conditions: (1) using only the item information function as the item selection criterion; (2) using both the item information function and content balancing; and (3) using the item information function, content balancing, and item exposure control. When test length was less than 10 items, FP and KP tended to outperform F at extreme trait levels in Condition 1. However, in more realistic settings, it could not be concluded that FP and KP outperformed F, especially when item exposure control was imposed. When test length was greater than 10 items, the three nonrandom item selection procedures performed similarly no matter what the condition was, while F had slightly higher item usage. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Blackwell Publishing: United Kingdom VL - 41 SN - 0022-0655 (Print) ER - TY - Generic T1 - An investigation of two combination procedures of SPRT for three-category classification decisions in computerized classification test T2 - annual meeting of the American Educational Research Association Y1 - 2004 A1 - Jiao, H. A1 - Wang, S A1 - Lau, CA KW - computerized adaptive testing KW - Computerized classification testing KW - sequential probability ratio testing JF - annual meeting of the American Educational Research Association CY - San Antonio, Texas N1 - annual meeting of the American Educational Research Association, San Antonio ER - TY - JOUR T1 - Siette: a web-based tool for adaptive testing JF - International Journal of Artificial Intelligence in Education Y1 - 2004 A1 - Conejo, R A1 - Guzmán, E A1 - Millán, E A1 - Trella, M A1 - Pérez-De-La-Cruz, JL A1 - Ríos, A KW - computerized adaptive testing VL - 14 ER - TY - JOUR T1 - Strategies for controlling item exposure in computerized adaptive testing with the generalized partial credit model JF - Applied Psychological Measurement Y1 - 2004 A1 - Davis, L. L. KW - computerized adaptive testing KW - generalized partial credit model KW - item exposure AB - Choosing a strategy for controlling item exposure has become an integral part of test development for computerized adaptive testing (CAT). 
This study investigated the performance of six procedures for controlling item exposure in a series of simulated CATs under the generalized partial credit model. In addition to a no-exposure control baseline condition, the randomesque, modified-within-.10-logits, Sympson-Hetter, conditional Sympson-Hetter, a-stratified with multiple-stratification, and enhanced a-stratified with multiple-stratification procedures were implemented to control exposure rates. Two variations of the randomesque and modified-within-.10-logits procedures were examined, which varied the size of the item group from which the next item to be administered was randomly selected. The results indicate that although the conditional Sympson-Hetter provides somewhat lower maximum exposure rates, the randomesque and modified-within-.10-logits procedures with the six-item group variation have great utility for controlling overlap rates and increasing pool utilization and should be given further consideration. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Sage Publications: US VL - 28 SN - 0146-6216 (Print) ER - TY - JOUR T1 - Computerized adaptive testing with item cloning JF - Applied Psychological Measurement Y1 - 2003 A1 - Glas, C. A. W. A1 - van der Linden, W. J. KW - computerized adaptive testing AB - (from the journal abstract) To increase the number of items available for adaptive testing and reduce the cost of item writing, the use of techniques of item cloning has been proposed. An important consequence of item cloning is possible variability between the item parameters. To deal with this variability, a multilevel item response theory (IRT) model is presented which allows for differences between the distributions of item parameters of families of item clones. A marginal maximum likelihood and a Bayesian procedure for estimating the hyperparameters are presented. In addition, an item-selection procedure for computerized adaptive testing with item cloning is presented which has the following two stages: First, a family of item clones is selected to be optimal at the estimate of the person parameter. Second, an item is randomly selected from the family for administration. Results from simulation studies based on an item pool from the Law School Admission Test (LSAT) illustrate the accuracy of these item pool calibration and adaptive testing procedures. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 27 N1 - References. Sage Publications, US ER - TY - JOUR T1 - Incorporation of Content Balancing Requirements in Stratification Designs for Computerized Adaptive Testing JF - Educational and Psychological Measurement Y1 - 2003 A1 - Leung, C-K. A1 - Chang, Hua-Hua A1 - Hau, K-T. KW - computerized adaptive testing AB - Studied three stratification designs for computerized adaptive testing in conjunction with three well-developed content balancing methods. Simulation study results show substantial differences in item overlap rate and pool utilization among different methods. Recommends an optimal combination of stratification design and content balancing method. (SLD) VL - 63 ER - TY - CHAP T1 - Item selection in polytomous CAT T2 - New developments in psychometrics Y1 - 2003 A1 - Veldkamp, B. P. ED - A. Okada ED - K. Shigenasu ED - Y. Kano ED - J.
Meulman KW - computerized adaptive testing JF - New developments in psychometrics PB - Psychometric Society, Springer CY - Tokyo, Japan ER - TY - JOUR T1 - Using response times to detect aberrant responses in computerized adaptive testing JF - Psychometrika Y1 - 2003 A1 - van der Linden, W. J. A1 - van Krimpen-Stoop, E. M. L. A. KW - Adaptive Testing KW - Behavior KW - Computer Assisted Testing KW - computerized adaptive testing KW - Models KW - person Fit KW - Prediction KW - Reaction Time AB - A lognormal model for response times is used to check response times for aberrances in examinee behavior on computerized adaptive tests. Both classical procedures and Bayesian posterior predictive checks are presented. For a fixed examinee, responses and response times are independent; checks based on response times thus offer information independent of the results of checks on response patterns. Empirical examples of the use of classical and Bayesian checks for detecting two different types of aberrances in response times are presented. The detection rates for the Bayesian checks outperformed those for the classical checks, but at the cost of higher false-alarm rates. A guideline for the choice between the two types of checks is offered. VL - 68 ER - TY - JOUR T1 - Computer adaptive testing: The impact of test characteristics on perceived performance and test takers' reactions JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 2002 A1 - Tonidandel, S. KW - computerized adaptive testing AB - This study examined the relationship between characteristics of adaptive testing and test takers' subsequent reactions to the test. Participants took a computer adaptive test in which two features, the difficulty of the initial item and the difficulty of subsequent items, were manipulated. These two features of adaptive testing determined the number of items answered correctly by examinees and their subsequent reactions to the test. The data show that the relationship between test characteristics and reactions was fully mediated by perceived performance on the test. In addition, the impact of feedback on reactions to adaptive testing was also evaluated. In general, feedback that was consistent with perceptions of performance had a positive impact on reactions to the test. Implications for adaptive test design concerning maximizing test takers' reactions are discussed. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 62 ER - TY - JOUR T1 - Computerised adaptive testing JF - British Journal of Educational Technology Y1 - 2002 A1 - Latu, E. A1 - Chapman, E. KW - computerized adaptive testing AB - Considers the potential of computer adaptive testing (CAT). Discusses the use of CAT instead of traditional paper and pencil tests, identifies decisions that impact the efficacy of CAT, and concludes that CAT is beneficial when used to its full potential on certain types of tests. (LRW) VL - 33 ER - TY - JOUR T1 - The effect of test characteristics on aberrant response patterns in computer adaptive testing JF - Dissertation Abstracts International Section A: Humanities & Social Sciences Y1 - 2002 A1 - Rizavi, S. M. KW - computerized adaptive testing AB - The advantages that computer adaptive testing offers over linear tests have been well documented. The Computer Adaptive Test (CAT) design is more efficient than the linear test design, as fewer items are needed to estimate an examinee's proficiency to a desired level of precision.
In the ideal situation, a CAT will result in examinees answering different numbers of items according to the stopping rule employed. Unfortunately, the realities of testing conditions have necessitated the imposition of time and minimum test length limits on CATs. Such constraints might place a burden on the CAT test taker, resulting in aberrant response behaviors by some examinees. Occurrence of such response patterns results in inaccurate estimation of examinee proficiency levels. This study examined the effects of test lengths, time limits, and the interaction of these factors with the examinee proficiency levels on the occurrence of aberrant response patterns. The focus of the study was on the aberrant behaviors caused by rushed guessing due to restrictive time limits. Four different testing scenarios were examined: fixed length performance tests with and without content constraints, fixed length mastery tests, and variable length mastery tests without content constraints. For each of these testing scenarios, the effects of two test lengths, five different timing conditions, and the interaction between these factors with three ability levels on ability estimation were examined. For fixed and variable length mastery tests, decision accuracy was also examined in addition to estimation accuracy. Several indices were used to evaluate the estimation and decision accuracy for different testing conditions. The results showed that changing time limits had a significant impact on the occurrence of aberrant response patterns conditional on ability. Increasing test length had a negligible, if not negative, effect on ability estimation when rushed guessing occurred. In performance testing, high ability examinees suffered the most, while in classification testing, middle ability examinees did. The decision accuracy was considerably affected in the case of variable length classification tests. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 62 ER - TY - CONF T1 - An empirical comparison of achievement level estimates from adaptive tests and paper-and-pencil tests T2 - annual meeting of the American Educational Research Association Y1 - 2002 A1 - Kingsbury, G. G. KW - computerized adaptive testing JF - annual meeting of the American Educational Research Association CY - New Orleans, LA. USA ER - TY - JOUR T1 - Evaluation of selection procedures for computerized adaptive testing with polytomous items JF - Applied Psychological Measurement Y1 - 2002 A1 - van Rijn, P. W. A1 - Theo Eggen A1 - Hemker, B. T. A1 - Sanders, P. F. KW - computerized adaptive testing AB - In the present study, a procedure that has been used to select dichotomous items in computerized adaptive testing was applied to polytomous items. This procedure was designed to select the item with maximum weighted information. In a simulation study, the item information function was integrated over a fixed interval of ability values and the item with the maximum area was selected. This maximum interval information item selection procedure was compared to a maximum point information item selection procedure. Substantial differences between the two item selection procedures were not found when computerized adaptive tests were evaluated on bias and the root mean square of the ability estimate.
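A hedged sketch of the contrast described above: maximum point information at the current estimate versus integrating each item's information over a fixed interval of ability values and selecting the largest area. For brevity the items here are dichotomous 2PL rather than polytomous, and the interval bounds and parameters are illustrative.

```python
import numpy as np

def info_2pl(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_point(theta_hat, a, b):
    """Maximum point information at the current estimate."""
    return int(np.argmax(info_2pl(theta_hat, a, b)))

def select_interval(theta_hat, a, b, half_width=0.5, n_nodes=21):
    """Maximum area under the information function over a fixed interval."""
    grid = np.linspace(theta_hat - half_width, theta_hat + half_width, n_nodes)
    areas = np.trapz(info_2pl(grid[:, None], a, b), grid, axis=0)
    return int(np.argmax(areas))

rng = np.random.default_rng(2)
a = rng.uniform(0.6, 2.2, 50)
b = rng.normal(0.0, 1.0, 50)
print(select_point(0.0, a, b), select_interval(0.0, a, b))
```

Averaging information over an interval hedges against error in the provisional ability estimate, which matters most early in the test when that estimate is unstable.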
VL - 26 N1 - References. Sage Publications, US ER - TY - JOUR T1 - The implications of the use of non-optimal items in a Computer Adaptive Testing (CAT) environment JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 2002 A1 - Grodenchik, D. J. KW - computerized adaptive testing AB - This study describes the effects of manipulating item difficulty in a computer adaptive testing (CAT) environment. There are many potential benefits when using CATs as compared to traditional tests. These include increased security, shorter tests, and more precise measurement. According to IRT, the theory underlying CAT, as the computer continually recalculates ability, items that match that current estimate of ability are administered. Such items provide maximum information about examinees during the test. Herein, however, lies a potential problem. These optimal CAT items result in an examinee having only a 50% chance of a correct response. Some examinees may consider such items unduly challenging. Further, when test anxiety is a factor, it is possible that test scores may be negatively affected. This research was undertaken to determine the effects of administering easier CAT items on ability estimation and test length using computer simulations. Also considered was the administration of different numbers of initial items prior to the start of the adaptive portion of the test, using three different levels of measurement precision. Results indicate that regardless of the number of initial items administered, the level of precision employed, or the modifications made to item difficulty, the approximation of estimated ability to true ability is good in all cases. Additionally, the standard deviations of the ability estimates closely approximate the theoretical levels of precision used as stopping rules for the simulated CATs. Since optimal CAT items are not used, each item administered provides less information about examinees than optimal CAT items. This results in longer tests. Fortunately, using easier items that provide up to a 66.4% chance of a correct response results in tests that only modestly increase in length, across levels of precision. For larger standard errors, even easier items (up to a 73.5% chance of a correct response) result in only negligible to modest increases in test length. Examinees who find optimal CAT items difficult or examinees with test anxiety may find CATs that implement easier items enhance the already existing benefits of CAT. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 63 ER - TY - JOUR T1 - An item response model for characterizing test compromise JF - Journal of Educational and Behavioral Statistics Y1 - 2002 A1 - Segall, D. O. KW - computerized adaptive testing AB - This article presents an item response model for characterizing test-compromise that enables the estimation of item-preview and score-gain distributions observed in on-demand high-stakes testing programs. Model parameters and posterior distributions are estimated by Markov Chain Monte Carlo (MCMC) procedures. Results of a simulation study suggest that when at least some of the items taken by a small sample of test takers are known to be secure (uncompromised), the procedure can provide useful summaries of test-compromise and its impact on test scores. The article includes discussions of operational use of the proposed procedure, possible model violations and extensions, and application to computerized adaptive testing.
TY - JOUR T1 - Outlier detection in high-stakes certification testing JF - Journal of Educational Measurement Y1 - 2002 A1 - Meijer, R. R. KW - Adaptive Testing KW - computerized adaptive testing KW - Educational Measurement KW - Goodness of Fit KW - Item Analysis (Statistical) KW - Item Response Theory KW - person Fit KW - Statistical Estimation KW - Statistical Power KW - Test Scores AB - Discusses recent developments of person-fit analysis in computerized adaptive testing (CAT). Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in CAT. Most person-fit research in CAT is restricted to simulated data; in this study, empirical data from a certification test were used. Alternatives are discussed for generating norms so that bounds can be determined to classify an item score pattern as fitting or misfitting. Using bounds determined from a sample of a high-stakes certification test, the empirical analysis showed that different types of misfit can be distinguished. Further applications using statistical process control methods to detect misfitting item score patterns are discussed. (PsycINFO Database Record (c) 2005 APA). VL - 39 ER - TY - JOUR T1 - Assessment in the twenty-first century: A role of computerised adaptive testing in national curriculum subjects JF - Teacher Development Y1 - 2001 A1 - Cowan, P. A1 - Morrison, H. KW - computerized adaptive testing AB - With the investment of large sums of money in new technologies for schools and education authorities, and the subsequent training of teachers to integrate Information and Communications Technology (ICT) into their teaching strategies, it is remarkable that old, outdated models of assessment still remain. This article highlights the current problems associated with pen-and-paper testing and offers suggestions for an innovative new approach to assessment for the twenty-first century. Based on the principle of the 'wise examiner', a computerised adaptive testing system which measures pupils' ability against the levels of the United Kingdom National Curriculum has been developed for use in mathematics. Using constructed-response items, pupils are administered a test tailored to their ability, with a reliability index of 0.99. Since the software administers maximally informative questions matched to each pupil's current ability estimate, no two pupils will receive the same set of items in the same order, thereby removing opportunities for plagiarism and teaching to the test. All marking is automated, and a journal recording the outcome of the test and highlighting the areas of difficulty for each pupil is available for printing by the teacher. The current prototype of the system can be used on a school's network; however, the authors envisage a day when Examination Boards or the Qualifications and Curriculum Authority (QCA) will administer Government tests from a central server to all United Kingdom schools or testing centres. Results will be issued at the time of testing, and opportunities for resits will become more widespread. VL - 5 ER - TY - JOUR T1 - a-stratified multistage computerized adaptive testing with b blocking JF - Applied Psychological Measurement Y1 - 2001 A1 - Chang, Hua-Hua A1 - Qian, J. A1 - Yang, Z. KW - computerized adaptive testing AB - Proposed a refinement, based on the stratification of items developed by D.
Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD) VL - 25 ER - TY - JOUR T1 - Final answer? JF - American School Board Journal Y1 - 2001 A1 - Coyle, J. KW - computerized adaptive testing AB - The Northwest Evaluation Association helped an Indiana school district develop a computerized adaptive testing system that was aligned with its curriculum and geared toward measuring individual student growth. Now the district can obtain such information from semester to semester and year to year, get immediate results, and test students on demand. (MLH) VL - 188 ER - TY - JOUR T1 - Multidimensional adaptive testing using the weighted likelihood estimation JF - Dissertation Abstracts International Section A: Humanities & Social Sciences Y1 - 2001 A1 - Tseng, F-L. KW - computerized adaptive testing AB - This study extended Warm's (1989) weighted likelihood estimation (WLE) to a multidimensional computerized adaptive testing (MCAT) setting. WLE was compared with maximum likelihood estimation (MLE), expected a posteriori (EAP), and maximum a posteriori (MAP) estimation using a three-dimensional 3PL IRT model under a variety of computerized adaptive testing conditions. The dependent variables included bias, standard error of ability estimates (SE), square root of mean square error (RMSE), and test information. The independent variables were ability estimation method, intercorrelation level between dimensions, multidimensional structure, and ability combination. Simulation results were presented in terms of descriptive statistics, such as figures and tables. In addition, inferential procedures were used to analyze bias by conceptualizing this Monte Carlo study as a statistical sampling experiment. The results indicate that WLE and the other three estimation methods yield significantly more accurate ability estimates under an approximate simple structure with one dominant dimension and several secondary dimensions. All four estimation methods, especially WLE, yield very large SEs when a structure with three equally dominant dimensions is employed. Consistent with previous findings based on unidimensional IRT models, MLE and WLE are less biased at the extremes of the ability scale; MLE and WLE yield larger SEs than the Bayesian methods; and test-information-based SEs underestimate the actual SEs of the MLE and WLE estimators in MCAT situations, especially at shorter test lengths, similar to the findings of Warm (1989) in the unidimensional case. WLE also reduced the bias of MLE under the approximate simple structure. The MCAT simulations thus showed some advantages of WLE in reducing the bias of MLE under the approximate simple structure with a fixed test length of 50 items, consistent with previous research findings based on different unidimensional models. It is clear from the current results that all four methods perform very poorly when multidimensional structures with multiple dominant factors are employed. More research is needed to investigate systematically how different multidimensional structures affect the accuracy and reliability of ability estimation.
Based on the simulated results in this study, no significant effect of the intercorrelation between dimensions on ability estimation was found. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 61 ER - TY - JOUR T1 - Pasado, presente y futuro de los test adaptativos informatizados: Entrevista con Isaac I. Béjar [Past, present and future of computerized adaptive testing: Interview with Isaac I. Béjar] JF - Psicothema Y1 - 2001 A1 - Tejada, R. A1 - Antonio, J. KW - computerized adaptive testing AB - In this paper the results of an interview with Isaac I. Bejar are presented. Dr. Bejar is currently Principal Research Scientist and Director of the Center for Assessment Design and Scoring in the Research Division at Educational Testing Service (Princeton, NJ, USA). The aim of this interview was to review the past, present, and future of computerized adaptive tests. The beginnings of adaptive tests and computerized adaptive tests are reviewed, along with the latest advances developed at the Educational Testing Service for this type of test (generative response models, isomorphs, automated scoring of essay items…). The interview closes with a view of the future of computerized adaptive tests and their use in Spain. VL - 13 SN - 0214-9915 ER - TY - JOUR T1 - Algoritmo mixto mínima entropía-máxima información para la selección de ítems en un test adaptativo informatizado [A mixed minimum entropy-maximum information algorithm for item selection in computerized adaptive testing] JF - Psicothema Y1 - 2000 A1 - Dorronsoro, J. R. A1 - Santa-Cruz, C. A1 - Rubio Franco, V. J. A1 - Aguado García, D. KW - computerized adaptive testing AB - The aim of this study is to compare the efficacy of three item selection strategies in computerized adaptive testing (CAT): (a) one based on maximum information; (b) one based on minimum entropy; and (c) a mixed algorithm that uses minimum entropy for the initial items and maximum information for the rest, under the hypothesis that the mixed algorithm can make the CAT more efficient. CAT simulations were run on an emotional adjustment bank of 28 graded-response items with six categories, calibrated with Samejima's (1969) graded response model, using the original responses of the calibration sample as the CAT responses. Initial results show that the mixed criterion is more efficient than either of the other two taken independently. Its efficiency is maximized when the minimum-entropy algorithm is restricted to the selection of the first items of the CAT, since with the responses to those first items the estimate of theta starts to become relevant and the maximum-information algorithm can work optimally. VL - 12 ER - TY - JOUR T1 - Capitalization on item calibration error in adaptive testing JF - Applied Measurement in Education Y1 - 2000 A1 - van der Linden, W. J. A1 - Glas, C. A. W. KW - computerized adaptive testing AB - (from the journal abstract) In adaptive testing, item selection is sequentially optimized during the test. Because the optimization takes place over a pool of items calibrated with estimation error, capitalization on chance is likely to occur. How serious the consequences of this phenomenon are depends not only on the distribution of the estimation errors in the pool or the conditional ratio of the test length to the pool size given ability, but may also depend on the structure of the item selection criterion used. A simulation study demonstrated a dramatic impact of capitalization on estimation errors on ability estimation. Four different strategies to minimize the likelihood of capitalization on error in computerized adaptive testing are discussed. VL - 13 N1 - Lawrence Erlbaum, US ER - TY - JOUR T1 - A comparison of computerized adaptive testing and multistage testing JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 2000 A1 - Patsula, L N. KW - computerized adaptive testing AB - There is considerable evidence to show that computerized adaptive testing (CAT) and multistage testing (MST) are viable frameworks for testing. With many testing organizations looking to move towards CAT or MST, it is important to know which framework is superior in different situations and at what cost in terms of measurement. What was needed was a comparison of the different testing procedures under various realistic testing conditions. This dissertation addressed the important problem of the increase or decrease in accuracy of ability estimation in using MST rather than CAT. The purpose of this study was to compare the accuracy of ability estimates produced by MST and CAT while keeping some variables fixed and varying others. A simulation study was conducted to investigate the effects of several factors on the accuracy of ability estimation using different CAT and MST designs. The factors that were manipulated were the number of stages, the number of subtests per stage, and the number of items per subtest. Kept constant were test length, distribution of subtest information, method of determining cut-points on subtests, amount of overlap between subtests, and method of scoring the total test.
The primary question of interest was, given a fixed test length, how many stages and how many subtests per stage should there be to maximize measurement precision? Furthermore, how many items should there be in each subtest? Should there be more in the routing test, or more in the higher-stage tests? Results showed that, in general, increasing the number of stages from two to three decreased the error in ability estimation. Increasing the number of subtests from three to five increased the accuracy of ability estimates as well as the efficiency of the MST designs relative to the P&P and CAT designs at most ability levels (-.75 to 2.25). Finally, at most ability levels (-.75 to 2.25), varying the number of items per stage had little effect on either the resulting accuracy of ability estimates or the efficiency of the MST designs relative to the P&P and CAT designs. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 60 ER - TY - CHAP T1 - Computer-adaptive testing: A methodology whose time has come T2 - Development of Computerised Middle School Achievement Tests Y1 - 2000 A1 - Linacre, J. M. ED - Kang, U. ED - Jean, E. ED - Linacre, J. M. KW - computerized adaptive testing JF - Development of Computerised Middle School Achievement Tests PB - MESA CY - Chicago, IL. USA VL - 69 ER - TY - JOUR T1 - Computerized adaptive testing for classifying examinees into three categories JF - Educational and Psychological Measurement Y1 - 2000 A1 - Theo Eggen A1 - Straetmans, G. J. J. M. KW - computerized adaptive testing KW - Computerized classification testing AB - The objective of this study was to explore the possibilities for using computerized adaptive testing in situations in which examinees are to be classified into one of three categories. Testing algorithms with two different statistical computation procedures are described and evaluated. The first computation procedure is based on statistical testing and the other on statistical estimation. Item selection methods based on maximum information (MI), considering content and exposure control, are examined. The measurement quality of the proposed testing algorithms is reported. The results of the study are that a reduction of at least 22% in the mean number of items can be expected in a computerized adaptive test (CAT) compared to an existing paper-and-pencil placement test. Furthermore, statistical testing is a promising alternative to statistical estimation. Finally, it is concluded that imposing constraints on the MI selection strategy does not negatively affect the quality of the testing algorithms. VL - 60 ER - TY - JOUR T1 - Taylor approximations to logistic IRT models and their use in adaptive testing JF - Journal of Educational and Behavioral Statistics Y1 - 2000 A1 - Veerkamp, W. J. J. KW - computerized adaptive testing AB - Taylor approximation can be used to generate a linear approximation to a logistic ICC and a linear ability estimator. For a specific situation it will be shown to result in a special case of a Robbins-Monro item selection procedure for adaptive testing. The linear estimator can be used in the situation of zero and perfect scores, when maximum likelihood estimation fails to come up with a finite estimate. It is also possible to use this estimator to generate starting values for maximum likelihood and weighted likelihood estimation. Approximations to the expectation and variance of the linear estimator for a sequence of Robbins-Monro item selections can be determined analytically. VL - 25 ER -
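The Robbins-Monro procedure mentioned in the Veerkamp abstract just above admits a very compact illustration. The following sketch is a generic Robbins-Monro adaptive testing loop under the Rasch model, not Veerkamp's specific linear estimator; the starting value, gain constant, and decreasing step sequence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_rasch(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def robbins_monro_cat(theta_true, n_items=30, gain=2.0):
    """Each item is pitched at the current estimate (b = theta_hat),
    so the success probability at the estimate is 0.5; the estimate
    moves up after a correct response and down after an incorrect
    one, with steps that shrink as 1/k."""
    theta_hat = 0.0
    for k in range(1, n_items + 1):
        b = theta_hat                                # tailor item difficulty
        u = rng.random() < p_rasch(theta_true, b)    # simulated response
        theta_hat += (gain / k) * (int(u) - 0.5)     # Robbins-Monro update
    return theta_hat

print(robbins_monro_cat(theta_true=1.2))
```

Because the step size decays like 1/k, the sequence settles at the ability level where the response probability equals the 0.5 target, which is exactly the trait value being sought.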
TY - JOUR T1 - Alternative methods for the detection of item preknowledge in computerized adaptive testing JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 1999 A1 - McLeod, Lori Davis KW - computerized adaptive testing VL - 59 ER - TY - JOUR T1 - a-stratified multistage computerized adaptive testing JF - Applied Psychological Measurement Y1 - 1999 A1 - Chang, Hua-Hua A1 - Ying, Z. KW - computerized adaptive testing AB - For computerized adaptive tests (CAT) based on the three-parameter logistic model, it was found that administering items with low discrimination parameter (a) values early in the test and administering those with high a values later was advantageous; the skewness of item exposure distributions was reduced while efficiency was maintained in trait level estimation. Thus, a new multistage adaptive testing approach is proposed that factors a into the item selection process. In this approach, the items in the item bank are stratified into a number of levels based on their a values. The early stages of a test use items with lower a values and later stages use items with higher a values. At each stage, items are selected according to an optimization criterion from the corresponding level. Simulation studies were performed to compare a-stratified CATs with CATs based on the Sympson-Hetter method for controlling item exposure. Results indicated that this new strategy led to tests that were well balanced with respect to item exposure, and efficient. The a-stratified CATs achieved a lower average exposure rate than CATs based on Bayesian or information-based item selection and the Sympson-Hetter method. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 23 N1 - Sage Publications, US ER -
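A minimal sketch of the a-stratified design from the Chang and Ying (1999) record above: sort the bank by discrimination, reserve low-a items for early stages, and match on difficulty within the active stratum. The bank size, number of strata, and items per stage are illustrative assumptions, and the ability update is left as a stub.

```python
import numpy as np

def stratify_bank(a, n_strata):
    """Sort items by discrimination a and split into strata:
    low-a strata are used early in the test, high-a strata late."""
    return np.array_split(np.argsort(a), n_strata)

def select_item(theta_hat, b, stratum, used):
    """Within the active stratum, administer the unused item whose
    difficulty is closest to the current ability estimate."""
    candidates = [i for i in stratum if i not in used]
    return min(candidates, key=lambda i: abs(b[i] - theta_hat))

# Hypothetical 200-item bank, 4 strata, 10 items drawn per stratum
rng = np.random.default_rng(3)
a = rng.uniform(0.5, 2.5, 200)
b = rng.normal(0.0, 1.0, 200)
used, theta_hat = set(), 0.0
for stratum in stratify_bank(a, 4):
    for _ in range(10):
        item = select_item(theta_hat, b, stratum, used)
        used.add(item)
        # ... administer item, score the response, update theta_hat ...
```

Holding the high-a items back flattens the exposure distribution: early in the test, while the ability estimate is still noisy, the most informative items are not burned on poorly targeted selections.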
TY - CHAP T1 - CAT for certification and licensure T2 - Innovations in computerized assessment Y1 - 1999 A1 - Bergstrom, Betty A. A1 - Lunz, M. E. KW - computerized adaptive testing AB - (from the chapter) This chapter discusses implementing computerized adaptive testing (CAT) for high-stakes examinations that determine whether or not a particular candidate will be certified or licensed. The experience of several boards who have chosen to administer their licensure or certification examinations using the principles of CAT illustrates the process of moving into this mode of administration. Examples of the variety of options that can be utilized within a CAT administration are presented, the decisions that boards must make to implement CAT are discussed, and a timetable for completing the tasks that need to be accomplished is provided. In addition to the theoretical aspects of CAT, practical issues and problems are reviewed. (PsycINFO Database Record (c) 2002 APA, all rights reserved). JF - Innovations in computerized assessment PB - Lawrence Erlbaum Associates CY - Mahwah, N.J. N1 - In: Innovations in computerized assessment, pp. 67-91 ER - TY - JOUR T1 - Computerized Adaptive Testing: Overview and Introduction JF - Applied Psychological Measurement Y1 - 1999 A1 - Meijer, R. R. A1 - Nering, M. L. KW - computerized adaptive testing AB - Use of computerized adaptive testing (CAT) has increased substantially since it was first formulated in the 1970s. This paper provides an overview of CAT and introduces the contributions to this Special Issue. The elements of CAT discussed here include item selection procedures, estimation of the latent trait, item exposure, measurement precision, and item bank development. Some topics for future research are also presented. VL - 23 ER - TY - JOUR T1 - The effect of model misspecification on classification decisions made using a computerized test JF - Journal of Educational Measurement Y1 - 1999 A1 - Kalohn, J.C. A1 - Spray, J. A. KW - computerized adaptive testing AB - Many computerized testing algorithms require the fitting of some item response theory (IRT) model to examinees' responses to facilitate item selection, the determination of test stopping rules, and classification decisions. Some IRT models are thought to be particularly useful for small-volume certification programs that wish to make the transition to computerized adaptive testing (CAT). The 1-parameter logistic model (1-PLM) is usually assumed to require a smaller sample size than the 3-parameter logistic model (3-PLM) for item parameter calibration. This study examined the effects of model misspecification on the precision of the decisions made using the sequential probability ratio test. For this comparison, the 1-PLM was used to estimate item parameters, even though the items' characteristics were represented by a 3-PLM. Results demonstrate that the 1-PLM produced considerably more decision errors under simulation conditions similar to a real testing environment, compared to the true model and to a fixed-form standard reference set of items. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 36 N1 - National Council on Measurement in Education, US ER - TY - JOUR T1 - Graphical models and computerized adaptive testing JF - Applied Psychological Measurement Y1 - 1999 A1 - Almond, R. G. A1 - Mislevy, R. J. KW - computerized adaptive testing AB - Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD) VL - 23 ER - TY - BOOK T1 - Innovations in computerized assessment Y1 - 1999 A1 - F Drasgow A1 - Olson-Buchanan, J. B. KW - computerized adaptive testing AB - Chapters in this book present the challenges and dilemmas faced by researchers as they created new computerized assessments, focusing on issues addressed in developing, scoring, and administering the assessments. Chapters are: (1) "Beyond Bells and Whistles: An Introduction to Computerized Assessment" (Julie B. Olson-Buchanan and Fritz Drasgow); (2) "The Development of a Computerized Selection System for Computer Programmers in a Financial Services Company" (Michael J. Zickar, Randall C. Overton, L. Rogers Taylor, and Harvey J. Harms); (3) "Development of the Computerized Adaptive Testing Version of the Armed Services Vocational Aptitude Battery" (Daniel O. Segall and Kathleen E. Moreno); (4) "CAT for Certification and Licensure" (Betty A. Bergstrom and Mary E. Lunz); (5) "Developing Computerized Adaptive Tests for School Children" (G. Gage Kingsbury and Ronald L.
Houser); (6) "Development and Introduction of a Computer Adaptive Graduate Record Examinations General Test" (Craig N. Mills); (7) "Computer Assessment Using Visual Stimuli: A Test of Dermatological Skin Disorders" (Terry A. Ackerman, John Evans, Kwang-Seon Park, Claudia Tamassia, and Ronna Turner); (8) "Creating Computerized Adaptive Tests of Music Aptitude: Problems, Solutions, and Future Directions" (Walter P. Vispoel); (9) "Development of an Interactive Video Assessment: Trials and Tribulations" (Fritz Drasgow, Julie B. Olson-Buchanan, and Philip J. Moberg); (10) "Computerized Assessment of Skill for a Highly Technical Job" (Mary Ann Hanson, Walter C. Borman, Henry J. Mogilka, Carol Manning, and Jerry W. Hedge); (11) "Easing the Implementation of Behavioral Testing through Computerization" (Wayne A. Burroughs, Janet Murray, S. Scott Wesley, Debra R. Medina, Stacy L. Penn, Steven R. Gordon, and Michael Catello); and (12) "Blood, Sweat, and Tears: Some Final Comments on Computerized Assessment" (Fritz Drasgow and Julie B. Olson-Buchanan). Each chapter contains references. (Contains 17 tables and 21 figures.) (SLD) PB - Lawrence Erlbaum Associates, Inc. CY - Mahwah, N.J. N1 - EDRS Availability: None. Lawrence Erlbaum Associates, Inc., Publishers, 10 Industrial Avenue, Mahwah, New Jersey 07430-2262 (paperback: ISBN-0-8058-2877-X, $29.95; clothbound: ISBN-0-8058-2876-1, $59.95). Tel: 800-926-6579 (Toll Free). ER - TY - JOUR T1 - Multidimensional adaptive testing with a minimum error-variance criterion JF - Journal of Educational and Behavioral Statistics Y1 - 1999 A1 - van der Linden, W. J. KW - computerized adaptive testing AB - Adaptive testing under a multidimensional logistic response model is addressed. An algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood estimator of a linear combination of abilities of interest. The criterion results in a closed-form expression that is easy to evaluate. In addition, it is shown how the algorithm can be modified if the interest is in a test with a "simple ability structure". The statistical properties of the adaptive ML estimator are demonstrated for a two-dimensional item pool with several linear combinations of the abilities. VL - 24 ER - TY - JOUR T1 - Optimal design for item calibration in computerized adaptive testing JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 1999 A1 - Buyske, S. G. KW - computerized adaptive testing AB - Item Response Theory is the psychometric model used for standardized tests such as the Graduate Record Examination. A test-taker's response to an item is modelled as a binary response with success probability depending on parameters for both the test-taker and the item. Two popular models are the two-parameter logistic (2PL) model and the three-parameter logistic (3PL) model. For the 2PL model, the logit of the probability of a correct response equals a_i(theta_j - b_i), where a_i and b_i are item parameters, while theta_j is the test-taker's parameter, known as "proficiency." The 3PL model adds a nonzero left asymptote to model random response behavior by low-theta test-takers. Assigning scores to students requires accurate estimation of the thetas, while accurate estimation of the thetas requires accurate estimation of the item parameters. The operational implementation of Item Response Theory, particularly following the advent of computerized adaptive testing, generally involves handling these two estimation problems separately. This dissertation addresses the optimal design for item parameter estimation. Most current designs calibrate items with a sample drawn from the overall test-taking population. For 2PL models a sequential design based on the D-optimality criterion has been proposed, while no 3PL design is in the literature. In this dissertation, we design the calibration with the ultimate use of the items in mind, namely to estimate test-takers' proficiency parameters. For both the 2PL and 3PL models, this criterion leads to a locally L-optimal design criterion, named the Minimal Information Loss criterion. In turn, this criterion and the General Equivalence Theorem give a two-point design for the 2PL model and a three-point design for the 3PL model. A sequential implementation of this optimal design is presented. For the 2PL model, this design is almost 55% more efficient than the simple random sample approach, and 12% more efficient than the locally D-optimal design. For the 3PL model, the proposed design is 34% more efficient than the simple random sample approach. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 59 ER -
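The two formulas named in the Buyske abstract above translate directly into code. A minimal sketch, using the 2PL logit a_i(theta_j - b_i) and the 3PL's nonzero left asymptote c; the example parameter values are illustrative assumptions.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL: the logit of P(correct) equals a * (theta - b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """3PL: a nonzero left asymptote c models random responding
    (e.g., guessing) by low-theta test-takers."""
    return c + (1.0 - c) * p_2pl(theta, a, b)

print(p_2pl(0.0, a=1.2, b=0.0))          # 0.5 at theta == b
print(p_3pl(-3.0, a=1.2, b=0.0, c=0.2))  # approaches the floor c
```

The dependence of both curves on (a, b) and possibly c is what makes calibration design matter: how much a response reveals about the item parameters varies with where on the theta scale the calibration sample sits.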
TY - JOUR T1 - Using response-time constraints to control for differential speededness in computerized adaptive testing JF - Applied Psychological Measurement Y1 - 1999 A1 - van der Linden, W. J. A1 - Scrams, D. J. A1 - Schnipke, D. L. KW - computerized adaptive testing AB - An item-selection algorithm is proposed for neutralizing the differential effects of time limits on computerized adaptive test scores. The method is based on a statistical model for the distributions of examinees' response times on the items in a bank that is updated each time an item is administered. Predictions from the model are used as constraints in a 0-1 linear programming model for constrained adaptive testing that maximizes the accuracy of the trait estimator. The method is demonstrated empirically using an item bank from the Armed Services Vocational Aptitude Battery. VL - 23 N1 - Sage Publications, US ER - TY - JOUR T1 - Applications of network flows to computerized adaptive testing JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 1998 A1 - Claudio, M. J. C. KW - computerized adaptive testing AB - Recently, the concept of Computerized Adaptive Testing (CAT) has been receiving ever-growing attention from the academic community, for both practical and theoretical reasons. Its practical importance lies in the advantages of CAT over the traditional (perhaps outdated) paper-and-pencil test in terms of time, accuracy, and money. The theoretical interest is sparked by its natural relationship to Item Response Theory (IRT). This dissertation offers a mathematical programming model that generates a CAT addressing many questions concerning the test, such as feasibility, accuracy, time of testing, and item pool security. The CAT generated is designed to obtain the most information about a single test taker. Several methods for estimating the examinee's ability, based on the (dichotomous) responses to the items in the test, are also offered. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 59 ER - TY - JOUR T1 - Maintaining content validity in computerized adaptive testing JF - Advances in Health Sciences Education Y1 - 1998 A1 - Luecht, RM A1 - de Champlain, A. A1 - Nungester, R. J.
KW - computerized adaptive testing AB - The authors empirically demonstrate some of the trade-offs that can occur when content balancing is imposed in computerized adaptive testing (CAT) forms or, conversely, when it is ignored. The authors contend that the content validity of a CAT form can actually change across the score scale when content balancing is ignored. However, they caution that efficiency and score precision can be severely reduced by over-specifying content restrictions in a CAT form. The results from 2 simulation studies are presented as a means of highlighting some of the trade-offs that can occur between content and statistical considerations in CAT form assembly. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 3 N1 - Kluwer Academic Publishers, Netherlands ER - TY - JOUR T1 - A model for optimal constrained adaptive testing JF - Applied Psychological Measurement Y1 - 1998 A1 - van der Linden, W. J. A1 - Reese, L. M. KW - computerized adaptive testing AB - A model for constrained computerized adaptive testing is proposed in which the information in the test at the trait level (theta) estimate is maximized subject to a number of possible constraints on the content of the test. At each item-selection step, a full test is assembled to have maximum information at the current theta estimate, fixing the items already administered. Then the item with maximum information is selected. All test assembly is optimal because a linear programming (LP) model is used that automatically updates to allow for the attributes of the items already administered and the new value of the theta estimator. The LP model also guarantees that each adaptive test always meets the entire set of constraints. A simulation study using a bank of 753 items from the Law School Admission Test showed that the theta estimator for adaptive tests of realistic lengths did not suffer any loss of efficiency from the presence of 433 constraints on the item selection process. VL - 22 N1 - Sage Publications, US ER - TY - JOUR T1 - Simulating the use of disclosed items in computerized adaptive testing JF - Journal of Educational Measurement Y1 - 1998 A1 - Stocking, M. L. A1 - W. C. Ward A1 - Potenza, M. T. KW - computerized adaptive testing AB - Regular use of questions previously made available to the public (i.e., disclosed items) may provide one way to meet the requirement for large numbers of questions in a continuous testing environment, that is, an environment in which testing is offered at test-taker convenience throughout the year rather than on a few prespecified test dates. First it must be shown that such use has effects on test scores small enough to be acceptable. In this study, simulations are used to explore the use of disclosed items under a worst-case scenario which assumes that disclosed items are always answered correctly. Some item pool and test designs were identified in which the use of disclosed items produces effects on test scores that may be viewed as negligible. VL - 35 N1 - National Council on Measurement in Education, US ER - TY - JOUR T1 - A comparison of maximum likelihood estimation and expected a posteriori estimation in computerized adaptive testing using the generalized partial credit model JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 1997 A1 - Chen, S-K.
KW - computerized adaptive testing AB - A simulation study was conducted to investigate the application of expected a posteriori (EAP) trait estimation in computerized adaptive tests (CAT) based on the generalized partial credit model (Muraki, 1992), and to compare the performance of EAP with maximum likelihood trait estimation (MLE). The performance of EAP was evaluated under different conditions: the number of quadrature points (10, 20, and 30) and the type of prior distribution (normal, uniform, negatively skewed, and positively skewed). The relative performance of the MLE and EAP estimation methods was assessed under two distributional forms of the latent trait, one normal and the other negatively skewed. Both known and estimated item parameters were employed in the simulation study. Descriptive statistics, correlations, scattergrams, accuracy indices, and audit trails were used to compare the different methods of trait estimation in CAT. The results showed that, regardless of the latent trait distribution, MLE and EAP with a normal prior, a uniform prior, or the prior that matches the latent trait distribution, using either 20 or 30 quadrature points, provided relatively accurate estimation in CAT based on the generalized partial credit model. However, EAP using only 10 quadrature points did not work well in the generalized partial credit CAT. The study also found that increasing the number of quadrature points from 20 to 30 did not increase the accuracy of EAP estimation; it therefore appears that 20 or more quadrature points are sufficient for accurate EAP estimation. The results also showed that EAP with a negatively skewed or positively skewed prior performed poorly for the normal data set, and that EAP with a positively skewed prior did not provide accurate estimates for the negatively skewed data set. Furthermore, trait estimation in CAT using estimated item parameters produced results similar to those obtained using known item parameters. In general, when at least 20 quadrature points are used, EAP estimation with a normal prior, a uniform prior, or the prior that matches the latent trait distribution appears to be a good alternative to MLE in the application of polytomous CAT based on the generalized partial credit model. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 58 ER -
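EAP estimation of the kind compared in the Chen abstract above reduces to a weighted average over a quadrature grid. A minimal sketch, using the dichotomous 2PL for brevity (the study used the generalized partial credit model); the grid range, default normal prior, and item parameters are illustrative assumptions.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(responses, a, b, n_quad=20, prior_mean=0.0, prior_sd=1.0):
    """EAP: the posterior mean of theta on a fixed quadrature grid.
    The number of quadrature points and the prior's shape are the
    design choices studied in the abstract above."""
    nodes = np.linspace(-4.0, 4.0, n_quad)
    prior = np.exp(-0.5 * ((nodes - prior_mean) / prior_sd) ** 2)
    like = np.ones_like(nodes)
    for u, ai, bi in zip(responses, a, b):
        p = p_2pl(nodes, ai, bi)
        like *= p ** u * (1.0 - p) ** (1 - u)   # Bernoulli likelihood
    post = prior * like                          # unnormalized posterior
    return float(np.sum(nodes * post) / np.sum(post))

print(eap_estimate([1, 0, 1], a=[1.0, 1.5, 0.8], b=[-0.5, 0.2, 0.7]))
```

Unlike MLE, the posterior mean stays finite for all-correct or all-incorrect response patterns, which is one reason EAP is attractive early in an adaptive test.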
TY - BOOK T1 - Computerized adaptive testing: From inquiry to operation Y1 - 1997 A1 - Sands, W. A. A1 - B. K. Waters A1 - J. R. McBride KW - computerized adaptive testing AB - (from the cover) This book traces the development of computerized adaptive testing (CAT) from its origins in the 1960s to its integration with the Armed Services Vocational Aptitude Battery (ASVAB) in the 1990s. A paper-and-pencil version of the battery (P&P-ASVAB) has been used by the Defense Department since the 1970s to measure the abilities of applicants for military service. The test scores are used both for initial qualification and for classification into entry-level training opportunities. /// This volume provides the developmental history of the CAT-ASVAB through its various stages in the Joint-Service arena. Although the majority of the book concerns the myriad technical issues that were identified and resolved, information is provided on the various political and funding-support challenges that were successfully overcome in developing, testing, and implementing the battery in one of the nation's largest testing programs. The book provides useful information to professionals in the testing community and everyone interested in personnel assessment and evaluation. (PsycINFO Database Record (c) 2004 APA, all rights reserved). PB - American Psychological Association CY - Washington, D.C., USA ER - TY - JOUR T1 - The effect of population distribution and method of theta estimation on computerized adaptive testing (CAT) using the rating scale model JF - Educational & Psychological Measurement Y1 - 1997 A1 - Chen, S-K. A1 - Hou, L. Y. A1 - Fitzpatrick, S. J. A1 - Dodd, B. G. KW - computerized adaptive testing AB - Investigated the effect of population distribution on maximum likelihood estimation (MLE) and expected a posteriori estimation (EAP) in a simulation study of computerized adaptive testing (CAT) based on D. Andrich's (1978) rating scale model. Comparisons were made among MLE and EAP with a normal prior distribution and EAP with a uniform prior distribution within 2 data sets: one generated using a normal trait distribution and the other using a negatively skewed trait distribution. Descriptive statistics, correlations, scattergrams, and accuracy indices were used to compare the different methods of trait estimation. EAP estimation with a normal prior or a uniform prior yielded results similar to those obtained with MLE, even though the prior did not match the underlying trait distribution. An additional simulation study based on real data suggested that more work is needed to determine the optimal number of quadrature points for EAP in CAT based on the rating scale model. The choice between MLE and EAP for particular measurement situations is discussed. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 57 N1 - Sage Publications, US ER - TY - CHAP T1 - Research antecedents of applied adaptive testing T2 - Computerized adaptive testing: From inquiry to practice Y1 - 1997 A1 - J. R. McBride ED - B. K. Waters ED - J. R. McBride KW - computerized adaptive testing AB - (from the chapter) This chapter sets the stage for the entire computerized adaptive testing Armed Services Vocational Aptitude Battery (CAT-ASVAB) development program by describing the state of the art immediately preceding its inception. By the mid-1970s, a great deal of research had been conducted that provided the technical underpinnings needed to develop adaptive tests, but little research had been done to corroborate empirically the promising results of theoretical analyses and computer simulation studies. In this chapter, the author summarizes much of the important theoretical and simulation research prior to 1977. In doing so, he describes a variety of approaches to adaptive testing, and shows that while many methods for adaptive testing had been proposed, few practical attempts had been made to implement them. Furthermore, the few instances of adaptive testing were based primarily on traditional test theory, and were developed in laboratory settings for purposes of basic research. The most promising approaches, those based on item response theory and evaluated analytically or by means of computer simulations, remained to be proven in the crucible of live testing. (PsycINFO Database Record (c) 2004 APA, all rights reserved). JF - Computerized adaptive testing: From inquiry to practice PB - American Psychological Association CY - Washington D.C.
USA ER - TY - JOUR T1 - Revising item responses in computerized adaptive tests: A comparison of three models JF - Applied Psychological Measurement Y1 - 1997 A1 - Stocking, M. L. KW - computerized adaptive testing AB - Interest in the application of large-scale computerized adaptive testing has focused attention on issues that arise when theoretical advances are made operational. One such issue is the order in which examinees address questions within a test or separately timed test section. In linear testing, this order is entirely under the control of the examinee, who can look ahead at questions and return and revise answers to questions. Using simulation, this study investigated three models that permit restricted examinee control over revising previous answers in the context of adaptive testing. Even under a worst-case model of examinee revision behavior, two of the models permitting item revisions worked well in preserving test fairness and accuracy. One model studied may also preserve some cognitive processing styles developed by examinees for a linear testing environment. VL - 21 N1 - Sage Publications, US ER - TY - CONF T1 - Validation of CATSIB to investigate DIF of CAT data T2 - annual meeting of the American Educational Research Association Y1 - 1997 A1 - Nandakumar, R. A1 - Roussos, L. A. KW - computerized adaptive testing AB - This paper investigates the performance of CATSIB (a modified version of the SIBTEST computer program) in assessing differential item functioning (DIF) in the context of computerized adaptive testing (CAT). One of the distinguishing features of CATSIB is its theoretically built-in regression correction to control Type I error rates when the distributions of the reference and focal groups differ on the intended ability; this phenomenon is also called impact. The Type I error rate of CATSIB with the regression correction (WRC) was compared with that of CATSIB without the regression correction (WORC) to see if the regression correction was indeed effective. Also of interest was the power level of CATSIB after the regression correction. The subtest size was set at 25 items, and sample size, impact level, and amount of DIF were varied. Results show that the regression correction was very useful in controlling the Type I error rate; CATSIB WORC had inflated observed Type I errors, especially when impact levels were high, whereas CATSIB WRC had observed Type I error rates very close to the nominal level of 0.05. The power rates of CATSIB WRC were impressive. As expected, power increased as the sample size increased and as the amount of DIF increased. Even for small samples with high impact rates, power rates were 64% or higher for high DIF levels. For large samples, power rates were over 90% for high DIF levels. (Contains 12 tables and 7 references.) (Author/SLD) JF - annual meeting of the American Educational Research Association CY - Chicago, IL. USA ER - TY - CONF T1 - A comparison of the traditional maximum information method and the global information method in CAT item selection T2 - annual meeting of the National Council on Measurement in Education Y1 - 1996 A1 - Tang, K. L. KW - computerized adaptive testing KW - item selection JF - annual meeting of the National Council on Measurement in Education CY - New York, NY USA ER - TY - JOUR T1 - Dynamic scaling: An ipsative procedure using techniques from computer adaptive testing JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 1996 A1 - Berg, S. R.
KW - computerized adaptive testing AB - The purpose of this study was to create a prototype method for scaling items using computer adaptive testing techniques and to demonstrate the method with a working model program. The method can be used to scale items, rank individuals with respect to the scaled items, and re-scale the items with respect to the individuals' responses. When using this prototype method, the items to be scaled are part of a database that contains not only the items but also measures of how individuals respond to each item. After completion of all presented items, the individual is assigned an overall scale value, which is then compared with each item responded to, and an individual "error" term is stored with each item. After several individuals have responded to the items, the item error terms are used to revise the placement of the scaled items. This revision feature allows the natural adaptation of one general list to reflect subgroup differences, for example, differences among geographic areas or ethnic groups. It also provides easy revision and limited authoring of the scale items by the computer program administrator. This study addressed the methodology; the instrumentation needed to handle the scale-item administration, data recording, item error analysis, and scale-item database editing required by the method; and the behavior of a prototype vocabulary test in use. Analyses were made of item ordering, response profiles, item stability, reliability, and validity. Although slow, the movement of unordered words used as items in the prototype program was accurate, as determined by comparison with an expert word ranking. Person scores obtained by multiple administrations of the prototype test were reliable and correlated at .94 with a commercial paper-and-pencil vocabulary test, while holding a three-to-one speed advantage in administration. Although based upon self-report data, dynamic scaling instruments like the model vocabulary test could be very useful for self-assessment, for pre (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 56 ER - TY - JOUR T1 - The effect of individual differences variables on the assessment of ability for Computerized Adaptive Testing JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 1996 A1 - Gershon, R. C. KW - computerized adaptive testing AB - Computerized Adaptive Testing (CAT) continues to gain momentum as the accepted testing modality for a growing number of certification, licensure, education, government, and human resource applications. However, the developers of these tests have for the most part failed to adequately explore the impact of individual differences such as test anxiety on the adaptive testing process. It is widely accepted that non-cognitive individual differences variables interact with the assessment of ability when using written examinations; logic would dictate that individual differences variables equally affect CAT. Two studies were used to explore this premise. In the first study, 507 examinees were given a test anxiety survey prior to taking a high-stakes certification exam using CAT or a written format. All examinees had already completed their course of study, and the examination would be their last hurdle prior to being awarded certification. Highly test-anxious examinees performed worse than their low-anxious counterparts in both testing formats. The second study replicated the finding that anxiety depresses performance in CAT.
It also addressed the differential effect of anxiety on within-test performance. Examinees were candidates taking their final certification examination following a four-year college program. Ability measures were calculated for each successive part of the test for 923 subjects. Within-subject performance varied depending upon test position. Highly anxious examinees performed poorly at all points in the test, while the performance of low- and medium-anxious examinees peaked in the middle of the test. If test anxiety and performance measures were actually the same trait, then low-anxious individuals should have performed equally well throughout the test. The observed interaction of test anxiety and time on task serves as strong evidence that test anxiety has motivationally mediated as well as cognitively mediated effects. The results of the studies are discussed. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 57 ER - TY - JOUR T1 - Multidimensional computerized adaptive testing in a certification or licensure context JF - Applied Psychological Measurement Y1 - 1996 A1 - Luecht, RM KW - computerized adaptive testing AB - (from the journal abstract) Multidimensional item response theory (MIRT) computerized adaptive testing, building on recent work by D. O. Segall (1996), is applied in a licensing/certification context. An example of a medical licensure test is used to demonstrate situations in which complex, integrated content must be balanced at the total-test level for validity reasons, but items assigned to reportable subscore categories may be used under a MIRT adaptive paradigm to improve the reliability of the subscores. A heuristic optimization framework is outlined that generalizes to both univariate and multivariate statistical objective functions, with additional systems of constraints included to manage content balancing or other test specifications on adaptively constructed test forms. Simulation results suggested that a multivariate treatment of the problem, although somewhat complicating the objective function used and the estimation of traits, nonetheless produces advantages from a psychometric perspective. (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 20 N1 - Sage Publications, US ER - TY - JOUR T1 - Assessment of scaled score consistency in adaptive testing from a multidimensional item response theory perspective JF - Dissertation Abstracts International: Section B: the Sciences & Engineering Y1 - 1995 A1 - Fan, Miechu KW - computerized adaptive testing AB - The purpose of this study was twofold: (a) to examine whether unidimensional adaptive testing estimates are comparable for different ability levels of examinees when the true examinee-item interaction is correctly modeled using a compensatory multidimensional item response theory (MIRT) model; and (b) to investigate the effects on adaptive testing estimation when the item selection procedure of computerized adaptive testing (CAT) is controlled by either content balancing or selecting the most informative item in a user-specified direction at the current estimate of unidimensional ability. A series of Monte Carlo simulations was conducted. Deviation from the reference composite angle was used as an index of the theta1,theta2-composite consistency across the different levels of unidimensional CAT estimates.
In addition, the effects of the content-balancing item selection procedure and the fixed-direction item selection procedure were compared across the different ability levels. The characteristics of item selection, test information, and the relationship between unidimensional and multidimensional models were also investigated. In addition to employing statistical analysis to examine the robustness of the CAT procedure to violations of unidimensionality, this research also included graphical analyses to present the results. The results were summarized as follows: (a) the reference angles for the no-control item selection method were disparate across the unidimensional ability groups; (b) the unidimensional CAT estimates from the content-balancing item selection method did not offer much improvement; (c) the fixed-direction item selection method did provide greater consistency for the unidimensional CAT estimates across the different levels of ability; and (d) increasing the CAT test length did not provide greater score-scale consistency. Based on the results of this study, the following conclusions were drawn: (a) without any controlling (PsycINFO Database Record (c) 2003 APA, all rights reserved). VL - 55 ER - TY - CHAP T1 - The equivalence of Rasch item calibrations and ability estimates across modes of administration T2 - Objective measurement: Theory into practice Y1 - 1994 A1 - Bergstrom, Betty A. A1 - Lunz, M. E. KW - computerized adaptive testing JF - Objective measurement: Theory into practice PB - Ablex Publishing Co. CY - Norwood, N.J. USA VL - 2 ER - TY - JOUR T1 - Monte Carlo simulation comparison of two-stage testing and computerized adaptive testing JF - Dissertation Abstracts International Section A: Humanities & Social Sciences Y1 - 1994 A1 - Kim, H-O. KW - computerized adaptive testing VL - 54 ER - TY - JOUR T1 - An application of Computerized Adaptive Testing to the Test of English as a Foreign Language JF - Dissertation Abstracts International Y1 - 1993 A1 - Moon, O. KW - computerized adaptive testing VL - 53 ER - TY - JOUR T1 - Assessing the utility of item response models: computerized adaptive testing JF - Educational Measurement: Issues and Practice Y1 - 1993 A1 - Kingsbury, G. G. A1 - Houser, R.L. KW - computerized adaptive testing VL - 12 ER - TY - JOUR T1 - Comparability and validity of computerized adaptive testing with the MMPI-2 JF - Dissertation Abstracts International Y1 - 1993 A1 - Roper, B. L. KW - computerized adaptive testing VL - 53 ER - TY - JOUR T1 - Computer adaptive testing: A comparison of four item selection strategies when used with the golden section search strategy for estimating ability JF - Dissertation Abstracts International Y1 - 1993 A1 - Carlson, R. D. KW - computerized adaptive testing VL - 54 ER - TY - JOUR T1 - Altering the level of difficulty in computer adaptive testing JF - Applied Measurement in Education Y1 - 1992 A1 - Bergstrom, Betty A. A1 - Lunz, M. E. A1 - Gershon, R. C. KW - computerized adaptive testing AB - Examines the effect of altering test difficulty on examinee ability measures and test length in a computer adaptive test. The 225 Ss were randomly assigned to 3 test difficulty conditions and given a variable-length computer adaptive test. Examinees in the hard, medium, and easy test conditions took tests targeted at the 50%, 60%, or 70% probability of a correct response. The results show that altering the probability of a correct response does not affect estimation of examinee ability and that taking an easier computer adaptive test only slightly increases the number of items necessary to reach specified levels of precision. (PsycINFO Database Record (c) 2002 APA, all rights reserved). VL - 5 N1 - Lawrence Erlbaum, US ER -
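Targeting a higher success probability, as in the Bergstrom, Lunz, and Gershon record above, amounts to administering items slightly easier than the current ability estimate. A minimal sketch under the Rasch model; the theta value used in the example is an illustrative assumption.

```python
import math

def target_difficulty(theta_hat, p_target):
    """Under the Rasch model, P(correct) = 1 / (1 + exp(-(theta - b))),
    so an item answered correctly with probability p_target at the
    current estimate needs difficulty b = theta_hat - logit(p_target)."""
    return theta_hat - math.log(p_target / (1.0 - p_target))

# The three targeting conditions from the study, at theta_hat = 0:
for p in (0.50, 0.60, 0.70):
    print(p, round(target_difficulty(0.0, p), 3))
# 0.50 -> b =  0.0    (the maximum-information item)
# 0.60 -> b = -0.405  (easier)
# 0.70 -> b = -0.847  (easier still)
```

A Rasch item's information is p(1 - p), so the 70% target yields 0.21 per item versus the maximum 0.25, which is consistent with the modest test-length increases the study reports.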
TY - JOUR T1 - The development and evaluation of a system for computerized adaptive testing JF - Dissertation Abstracts International Y1 - 1992 A1 - de la Torre Sanchez, R. KW - computerized adaptive testing VL - 52 ER - TY - JOUR T1 - Test anxiety and test performance under computerized adaptive testing methods JF - Dissertation Abstracts International Y1 - 1992 A1 - Powell, Zen-Hsiu E. KW - computerized adaptive testing VL - 52 ER - TY - JOUR T1 - A comparison of paper-and-pencil, computer-administered, computerized feedback, and computerized adaptive testing methods for classroom achievement testing JF - Dissertation Abstracts International Y1 - 1991 A1 - Kuan, Tsung Hao KW - computerized adaptive testing VL - 52 ER - TY - JOUR T1 - Inter-subtest branching in computerized adaptive testing JF - Dissertation Abstracts International Y1 - 1991 A1 - Chang, S-H. KW - computerized adaptive testing VL - 52 ER - TY - ABST T1 - Patterns of alcohol and drug use among federal offenders as assessed by the Computerized Lifestyle Screening Instrument Y1 - 1991 A1 - Robinson, D. A1 - Porporino, F. J. A1 - Millson, W. A. KW - computerized adaptive testing KW - drug abuse KW - substance use PB - Research and Statistics Branch, Correctional Service of Canada CY - Ottawa, ON. Canada SN - R-11 ER - TY - JOUR T1 - A simulation and comparison of flexilevel and Bayesian computerized adaptive testing JF - Journal of Educational Measurement Y1 - 1990 A1 - De Ayala, R. J. A1 - Dodd, B. G. A1 - Koch, W. R. KW - computerized adaptive testing AB - Computerized adaptive testing (CAT) is a testing procedure that adapts an examination to an examinee's ability by administering only items of appropriate difficulty for the examinee. In this study, the authors compared Lord's flexilevel testing procedure (flexilevel CAT) with an item response theory-based CAT using Bayesian estimation of ability (Bayesian CAT). Three flexilevel CATs, which differed in test length (36, 18, and 11 items), and three Bayesian CATs were simulated; the Bayesian CATs differed from one another in the standard error of estimate (SEE) used for terminating the test (0.25, 0.10, and 0.05). Results showed that the flexilevel 36- and 18-item CATs produced ability estimates that may be considered as accurate as those of the Bayesian CAT with SEE = 0.10 and comparable to the Bayesian CAT with SEE = 0.05. The authors discuss the implications for classroom testing and for item response theory-based CAT. VL - 27 ER - TY - JOUR T1 - Adaptive testing: The evolution of a good idea JF - Educational Measurement: Issues and Practice Y1 - 1989 A1 - Reckase, M. D. KW - computerized adaptive testing VL - 8 SN - 1745-3992 ER - TY - JOUR T1 - Application of computerized adaptive testing to the University Entrance Exam of Taiwan, R.O.C JF - Dissertation Abstracts International Y1 - 1989 A1 - Hung, P-H. KW - computerized adaptive testing VL - 49 ER - TY - THES T1 - An applied study on computerized adaptive testing T2 - Faculty of Behavioural and Social Sciences Y1 - 1989 A1 - Schoonman, W.
TY - THES T1 - An applied study on computerized adaptive testing T2 - Faculty of Behavioural and Social Sciences Y1 - 1989 A1 - Schoonman, W. KW - computerized adaptive testing AB - (from the cover) The rapid development and falling prices of powerful personal computers, in combination with new test theories, will have a large impact on psychological testing. One of the new possibilities is computerized adaptive testing. During the test administration, each item is chosen to be appropriate for the person being tested. The test becomes tailor-made, resolving some of the problems of classical paper-and-pencil tests. In this way, individual differences can be measured with higher efficiency and reliability, and scores on other meaningful variables, such as response time, can be obtained easily using computers. In this book, a study on computerized adaptive testing is described. The study took place at Dutch Railways in an applied setting and served practical goals. Topics discussed include the construction of computerized tests, the use of response time, the choice of algorithms, and the implications of using a latent trait model. After a number of simulations had been run and the item banks calibrated, an experiment was carried out in which a pretest was administered to a sample of over 300 applicants, followed by an adaptive test. In addition, a survey concerning the attitudes of testees toward computerized testing formed part of the design. JF - Faculty of Behavioural and Social Sciences PB - University of Groningen CY - Groningen, The Netherlands ER - TY - JOUR T1 - A real-data simulation of computerized adaptive administration of the MMPI JF - Psychological Assessment Y1 - 1989 A1 - Ben-Porath, Y. S. A1 - Slutske, W. S. A1 - Butcher, J. N. KW - computerized adaptive testing AB - A real-data simulation of computerized adaptive administration of the MMPI was conducted with data obtained from two personnel-selection samples and two clinical samples. A modification of the countdown method was tested to determine the usefulness, in terms of item administration savings, of several different test administration procedures. Substantial item administration savings were achieved for all four samples, though the clinical samples required administration of more items to achieve accurate classification and/or full-scale scores than did the personnel-selection samples. The use of normative item endorsement frequencies was found to be as effective as sample-specific frequencies for the determination of item administration order. The role of computerized adaptive testing in the future of personality assessment is discussed. (C) 1989 by the American Psychological Association VL - 1 N1 - Article ER - TY - JOUR T1 - Computerized adaptive testing: A comparison of the nominal response model and the three parameter model JF - Dissertation Abstracts International Y1 - 1988 A1 - De Ayala, R. J. KW - computerized adaptive testing VL - 48 ER - TY - RPRT T1 - The effect of item parameter estimation error on decisions made using the sequential probability ratio test Y1 - 1987 A1 - Spray, J. A. A1 - Reckase, M. D. KW - computerized adaptive testing KW - Sequential probability ratio test JF - ACT Research Report Series PB - DTIC Document CY - Iowa City, IA, USA ER -
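The Ben-Porath, Slutske, and Butcher entry above rests on the countdown method, whose core shortcut is simple: present a scale's items in descending order of normative endorsement frequency and stop as soon as the elevated/not-elevated classification can no longer change. Below is a minimal sketch of that classification shortcut only; the function name and threshold are hypothetical, and the published method also includes variants that continue testing to obtain full-scale scores.

def countdown(responses, threshold):
    # Countdown classification shortcut: items arrive in descending order
    # of normative endorsement frequency; stop once the elevated/not-
    # elevated decision is settled. Returns (elevated, items_administered).
    endorsed = 0
    n = len(responses)
    for k, r in enumerate(responses):
        remaining = n - k
        if endorsed >= threshold:              # cutoff already reached
            return True, k
        if endorsed + remaining < threshold:   # cutoff no longer reachable
            return False, k
        endorsed += r
    return endorsed >= threshold, n

# A 10-item scale with a cutoff of 6 endorsements:
print(countdown([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], threshold=6))  # (True, 6)
print(countdown([0, 0, 0, 0, 0, 1, 0, 0, 0, 0], threshold=6))  # (False, 5)

Ordering items by endorsement frequency is what makes the shortcut pay off: frequently endorsed items settle the decision early, which is the "item administration savings" the abstract reports.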
TY - JOUR T1 - An application of computer adaptive testing with communication handicapped examinees JF - Educational and Psychological Measurement Y1 - 1986 A1 - Garrison, W. M. A1 - Baumgarten, B. S. KW - computerized adaptive testing AB - This study was conducted to evaluate a computerized adaptive testing procedure for the measurement of mathematical skills of entry-level deaf college students. The theoretical basis of the study was the Rasch model for person measurement. Sixty persons were tested using an Apple II Plus microcomputer. Ability estimates provided by the computerized procedure were compared for stability with those obtained six to eight weeks earlier from conventional (written) testing of the same subject matter. Students' attitudes toward their testing experiences also were measured. Substantial increases in measurement efficiency (by reducing test length) were realized through the adaptive testing procedure. Because the item pool used was not specifically designed for adaptive testing purposes, the psychometric quality of measurements resulting from the different testing methods was approximately equal. Attitudes toward computerized testing were favorable. VL - 46 SN - 0013-1644 N1 - doi:10.1177/0013164486461003 ER - TY - JOUR T1 - Adaptive self-referenced testing as a procedure for the measurement of individual change due to instruction: A comparison of the reliabilities of change estimates obtained from conventional and adaptive testing procedures JF - Dissertation Abstracts International Y1 - 1985 A1 - Kingsbury, G. G. KW - computerized adaptive testing VL - 45 ER - TY - JOUR T1 - Relationship between corresponding Armed Services Vocational Aptitude Battery (ASVAB) and computerized adaptive testing (CAT) subtests JF - Applied Psychological Measurement Y1 - 1984 A1 - Moreno, K. E. A1 - Wetzel, C. D. A1 - McBride, J. R. A1 - Weiss, D. J. KW - computerized adaptive testing AB - Investigated the relationships between selected subtests from the Armed Services Vocational Aptitude Battery (ASVAB) and corresponding subtests administered as computerized adaptive tests (CATs), using 270 17-26 yr old Marine recruits as Ss. Ss were administered the ASVAB before enlisting and approximately 2 wks after entering active duty, and the CAT tests were administered approximately 24 hrs after Ss arrived at the recruit depot. Results indicate that 3 adaptive subtests correlated as well with the ASVAB as did the 2nd administration of the ASVAB, although the CAT subtests contained only half the number of items. Factor analysis showed the CAT subtests to load on the same factors as the corresponding ASVAB subtests, indicating that the same abilities were being measured. It is concluded that CAT can achieve the same measurement precision as a conventional test with half the number of items. (16 ref) VL - 8 N1 - Sage Publications, US ER - TY - JOUR T1 - Technical guidelines for assessing computerized adaptive tests JF - Journal of Educational Measurement Y1 - 1984 A1 - Green, B. F. A1 - Bock, R. D. A1 - Humphreys, L. G. A1 - Linn, R. L. A1 - Reckase, M. D. KW - computerized adaptive testing KW - Mode effects KW - paper-and-pencil VL - 21 SN - 1745-3984 ER - TY - JOUR T1 - Ability measurement, test bias reduction, and psychological reactions to testing as a function of computer adaptive testing versus conventional testing JF - Dissertation Abstracts International Y1 - 1982 A1 - Orban, J. A. KW - computerized adaptive testing VL - 42 ER -
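Several of the oldest entries above, including Garrison and Baumgarten's procedure, rest on Rasch-model ability estimation. For completeness, here is a minimal Newton-Raphson sketch of that estimation step; it is illustrative only, as none of the cited studies describe their estimation code.

import numpy as np

def rasch_mle(responses, b, iters=25):
    # Newton-Raphson ML estimate of theta under the Rasch model. Requires
    # a mixed response pattern (a finite MLE does not exist for all-0 or
    # all-1 patterns).
    responses = np.asarray(responses, dtype=float)
    b = np.asarray(b, dtype=float)
    theta, info = 0.0, 1.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        grad = np.sum(responses - p)      # d log-likelihood / d theta
        info = np.sum(p * (1.0 - p))      # Fisher (test) information
        theta += grad / info
    return theta, 1.0 / np.sqrt(info)     # estimate and its standard error

theta_hat, se = rasch_mle([1, 1, 0, 1, 0], b=[-1.0, -0.5, 0.0, 0.5, 1.0])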