01308nas a2200133 4500008003900000245003900039210003500078300001000113490000700123520095700130100001901087700002301106856004501129 2020 d00aA Blocked-CAT Procedure for CD-CAT0 aBlockedCAT Procedure for CDCAT a49-640 v443 aThis article introduces a blocked-design procedure for cognitive diagnosis computerized adaptive testing (CD-CAT), which allows examinees to review items and change their answers during test administration. Four blocking versions of the new procedure were proposed. In addition, the impact of several factors, namely, item quality, generating model, block size, and test length, on the classification rates was investigated. Three popular item selection indices in CD-CAT were used and their efficiency compared using the new procedure. An additional study was carried out to examine the potential benefit of item review. The results showed that the new procedure is promising in that allowing item review resulted only in a small loss in attribute classification accuracy under some conditions. Moreover, using a blocked-design CD-CAT is beneficial to the extent that it alleviates the negative impact of test anxiety on examinees’ true performance.1 aKaplan, Mehmet1 ade la Torre, Jimmy uhttps://doi.org/10.1177/014662161983550002100nas a2200181 4500008004100000245004600041210004600087260005500133520153800188653002501726653000801751100002801759700001901787700001301806700002001819700001301839856006601852 2017 eng d00aBayesian Perspectives on Adaptive Testing0 aBayesian Perspectives on Adaptive Testing aNiigata, JapanbNiigata Seiryo Universityc08/20173 a
Although adaptive testing is usually treated from the perspective of maximum-likelihood parameter estimation and maximum-information item selection, a Bayesian perspective is more natural, statistically efficient, and computationally tractable. This observation not only holds for the core process of ability estimation but includes such processes as item calibration and real-time monitoring of item security as well. Key elements of the approach are parametric modeling of each relevant process, updating of the parameter estimates after the arrival of each new response, and optimal design of the next step.
The purpose of the symposium is to illustrate the role of Bayesian statistics in this approach. The first presentation discusses a basic Bayesian algorithm for the sequential update of any parameter in adaptive testing and illustrates the idea of Bayesian optimal design for the two processes of ability estimation and online item calibration. The second presentation generalizes the ideas to the case of adaptive testing with polytomous items. The third presentation uses the fundamental Bayesian idea of sampling from updated posterior predictive distributions (“multiple imputations”) to deal with the problem of scoring incomplete adaptive tests.
10aBayesian Perspective10aCAT1 avan der Linden, Wim, J.1 aJiang, Bingnan1 aRen, Hao1 aChoi, Seung, W.1 aDiao, Qi uhttp://www.iacat.org/bayesian-perspectives-adaptive-testing-001177nas a2200121 4500008003900000245007200039210006900111300000900180490000700189520078000196100002800976856005101004 2016 d00aBayesian Networks in Educational Assessment: The State of the Field0 aBayesian Networks in Educational Assessment The State of the Fie a3-210 v403 aBayesian networks (BN) provide a convenient and intuitive framework for specifying complex joint probability distributions and are thus well suited for modeling content domains of educational assessments at a diagnostic level. BN have been used extensively in the artificial intelligence community as student models for intelligent tutoring systems (ITS) but have received less attention among psychometricians. This critical review outlines the existing research on BN in educational assessment, providing an introduction to the ITS literature for the psychometric community, and points out several promising research paths. The online appendix lists 40 assessment systems that serve as empirical examples of the use of BN for educational assessment in a variety of domains.1 aCulbertson, Michael, J. uhttp://apm.sagepub.com/content/40/1/3.abstract01735nas a2200133 4500008003900000245009100039210006900130300001200199490000700211520129100218100001801509700002101527856005301548 2015 d00aBest Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model0 aBest Design for Multidimensional Computerized Adaptive Testing W a954-9780 v753 aMost computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. 
This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection.1 aSeo, Dong, Gi1 aWeiss, David, J. uhttp://epm.sagepub.com/content/75/6/954.abstract01744nas a2200145 4500008003900000245009400039210006900133300001200202490000700214520125900221100001901480700002501499700002101524856005301545 2012 d00aBalancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing0 aBalancing Flexible Constraints and Measurement Precision in Comp a629-6480 v723 a
Managing test specifications—both multiple nonstatistical constraints and flexibly defined constraints—has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT, and the weighted penalty model in balancing multiple flexible constraints and maximizing measurement precision in a fixed-length CAT. The study also addressed the effect of two different test lengths—25 items and 50 items—and of including or excluding the randomesque item exposure control procedure with the three methods, all of which were found effective in selecting items that met flexible test constraints when used in the item selection process for longer tests. When the randomesque method was included to control for item exposure, the weighted penalty model and the flexible modified constrained CAT models performed better than did the constrained CAT procedure in maintaining measurement precision. When no item exposure control method was used in the item selection process, no practical difference was found in the measurement precision of each balancing method.
1 aMoyer, Eric, L1 aGalindo, Jennifer, L1 aDodd, Barbara, G uhttp://epm.sagepub.com/content/72/4/629.abstract02030nas a2200133 4500008003900000245007700039210006900116250001000185300000900195490001100204520156600215100001401781856010101795 2011 d00aBetter Data From Better Measurements Using Computerized Adaptive Testing0 aBetter Data From Better Measurements Using Computerized Adaptive aNo. 1 a1-270 vVol. 23 aThe process of constructing a fixed-length conventional test frequently focuses on maximizing internal consistency reliability by selecting test items that are of average difficulty and high discrimination (a “peaked” test). The effect of constructing such a test, when viewed from the perspective of item response theory, is test scores that are precise for examinees whose trait levels are near the point at which the test is peaked; as examinee trait levels deviate from the mean, the precision of their scores decreases substantially. Results of a small simulation study demonstrate that when peaked tests are “off target” for an examinee, their scores are biased and have spuriously high standard deviations, reflecting substantial amounts of error. These errors can reduce the correlations of these kinds of scores with other variables and adversely affect the results of standard statistical tests. By contrast, scores from adaptive tests are essentially unbiased and have standard deviations that are much closer to true values. Basic concepts of adaptive testing are introduced and fully adaptive computerized tests (CATs) based on IRT are described. Several examples of response records from CATs are discussed to illustrate how CATs function. Some operational issues, including item exposure, content balancing, and enemy items are also briefly discussed. 
It is concluded that because CAT constructs a unique test for each examinee, scores from CATs will be more precise and should provide better data for social science research and applications.1 aWeiss, DJ uhttp://www.iacat.org/content/better-data-better-measurements-using-computerized-adaptive-testing00494nas a2200121 4500008004100000245009500041210006900136653001800205653000800223653000900231100001900240856011300259 2011 eng d00aBuilding Affordable CD-CAT Systems for Schools To Address Today's Challenges In Assessment0 aBuilding Affordable CDCAT Systems for Schools To Address Todays 10aaffordability10aCAT10acost1 aChang, Hua-Hua uhttp://www.iacat.org/content/building-affordable-cd-cat-systems-schools-address-todays-challenges-assessment01384nas a2200133 4500008004100000245006000041210006000101300001200161490000700173520093200180653003401112100001801146856008601164 2010 eng d00aBayesian item selection in constrained adaptive testing0 aBayesian item selection in constrained adaptive testing a149-1690 v313 aApplication of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item selection process. The Shadow Test Approach is a general purpose algorithm for administering constrained CAT. In this paper it is shown how the approach can be slightly modified to handle Bayesian item selection criteria. No differences in performance were found between the shadow test approach and the modified approach. In a simulation study of the LSAT, the effects of Bayesian item selection criteria are illustrated. The results are compared to item selection based on Fisher Information. 
General recommendations about the use of Bayesian item selection criteria are provided.10acomputerized adaptive testing1 aVeldkamp, B P uhttp://www.iacat.org/content/bayesian-item-selection-constrained-adaptive-testing01940nas a2200121 4500008004100000245010300041210006900144260010000213520135600313100001601669700001401685856011901699 2009 eng d00aA burdened CAT: Incorporating response burden with maximum Fisher's information for item selection0 aburdened CAT Incorporating response burden with maximum Fishers aIn D. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing.3 aWidely used in various educational and vocational assessment applications, computerized adaptive testing (CAT) has recently begun to be used to measure patient-reported outcomes. Although successful in reducing respondent burden, most current CAT algorithms do not formally consider it as part of the item selection process. This study used a loss function approach motivated by decision theory to develop an item selection method that incorporates respondent burden into the item selection process based on maximum Fisher information item selection. Several different loss functions placing varying degrees of importance on respondent burden were compared, using an item bank of 62 polytomous items measuring depressive symptoms. One dataset consisted of the real responses from the 730 subjects who responded to all the items. A second dataset consisted of simulated responses to all the items based on a grid of latent trait scores with replicates at each grid point. The algorithm enables a CAT administrator to control the respondent burden more efficiently than when using MFI alone, without severely affecting measurement precision. In particular, the loss function incorporating respondent burden protected respondents from receiving longer tests when their estimated trait score fell in a region where there were few informative items. 
1 aSwartz, R J1 aChoi, S W uhttp://www.iacat.org/content/burdened-cat-incorporating-response-burden-maximum-fishers-information-item-selection02711nas a2200217 4500008004100000020004100041245010800082210006900190250001500259300001100274490000600285520189600291653003802187653002902225653005702254653001102311653001302322653002402335100001302359856012102372 2008 eng d a1529-7713 (Print)1529-7713 (Linking)00aBinary items and beyond: a simulation of computer adaptive testing using the Rasch partial credit model0 aBinary items and beyond a simulation of computer adaptive testin a2008/01/09 a81-1040 v93 aPast research on Computer Adaptive Testing (CAT) has focused almost exclusively on the use of binary items and minimizing the number of items to be administered. To address this situation, extensive computer simulations were performed using partial credit items with two, three, four, and five response categories. Other variables manipulated include the number of available items, the number of respondents used to calibrate the items, and various manipulations of respondents' true locations. Three item selection strategies were used, and the theoretically optimal Maximum Information method was compared to random item selection and Bayesian Maximum Falsification approaches. The Rasch partial credit model proved to be quite robust to various imperfections, and systematic distortions did occur mainly in the absence of sufficient numbers of items located near the trait or performance levels of interest. The findings further indicate that having small numbers of items is more problematic in practice than having small numbers of respondents to calibrate these items. Most importantly, increasing the number of response categories consistently improved CAT's efficiency as well as the general quality of the results. 
In fact, increasing the number of response categories proved to have a greater positive impact than did the choice of item selection method, as the Maximum Information approach performed only slightly better than the Maximum Falsification approach. Accordingly, issues related to the efficiency of item selection methods are far less important than is commonly suggested in the literature. However, being based on computer simulations only, the preceding presumes that actual respondents behave according to the Rasch model. CAT research could thus benefit from empirical studies aimed at determining whether, and if so, how, selection strategies impact performance.10a*Data Interpretation, Statistical10a*User-Computer Interface10aEducational Measurement/*statistics & numerical data10aHumans10aIllinois10aModels, Statistical1 aLange, R uhttp://www.iacat.org/content/binary-items-and-beyond-simulation-computer-adaptive-testing-using-rasch-partial-credit00522nas a2200109 4500008004100000245007700041210006900118260009700187100001500284700001400299856009900313 2007 eng d00aBundle models for computerized adaptive testing in e-learning assessment0 aBundle models for computerized adaptive testing in elearning ass aD. J. Weiss (Ed.). 
Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.1 aScalise, K1 aWilson, M uhttp://www.iacat.org/content/bundle-models-computerized-adaptive-testing-e-learning-assessment02242nas a2200205 4500008004100000020004600041245009500087210006900182260002700251300001200278490000700290520149400297653002701791653003001818653001701848653002501865100001901890700001001909856011701919 2005 eng d a1560-4292 (Print); 1560-4306 (Electronic)00aA Bayesian student model without hidden nodes and its comparison with item response theory0 aBayesian student model without hidden nodes and its comparison w bIOS Press: Netherlands a291-3230 v153 aThe Bayesian framework offers a number of techniques for inferring an individual's knowledge state from evidence of mastery of concepts or skills. A typical application where such a technique can be useful is Computer Adaptive Testing (CAT). A Bayesian modeling scheme, POKS, is proposed and compared to the traditional Item Response Theory (IRT), which has been the prevalent CAT approach for the last three decades. POKS is based on the theory of knowledge spaces and constructs item-to-item graph structures without hidden nodes. It aims to offer an effective knowledge assessment method with an efficient algorithm for learning the graph structure from data. We review the different Bayesian approaches to modeling student ability assessment and discuss how POKS relates to them. The performance of POKS is compared to the IRT two parameter logistic model. Experimental results over a 34 item Unix test and a 160 item French language test show that both approaches can classify examinees as master or non-master effectively and efficiently, with relatively comparable performance. However, more significant differences are found in favor of POKS for a second task that consists in predicting individual question item outcome. 
Implications of these results for adaptive testing and student modeling are discussed, as well as the limitations and advantages of POKS, namely the issue of integrating concepts into its structure. (PsycINFO Database Record (c) 2007 APA, all rights reserved)10aBayesian Student Model10acomputer adaptive testing10ahidden nodes10aItem Response Theory1 aDesmarais, M C1 aPu, X uhttp://www.iacat.org/content/bayesian-student-model-without-hidden-nodes-and-its-comparison-item-response-theory00567nas a2200097 4500008004100000245008000041210006900121260015300190100002300343856010300366 2003 eng d00aBayesian checks on outlying response times in computerized adaptive testing0 aBayesian checks on outlying response times in computerized adapt aH. Yanai, A. Okada, K. Shigemasu, Y. Kano, Y. and J. J. Meulman, (Eds.), New developments in psychometrics (pp. 215-222). New York: Springer-Verlag.1 avan der Linden, WJ uhttp://www.iacat.org/content/bayesian-checks-outlying-response-times-computerized-adaptive-testing01923nas a2200241 4500008004100000245009400041210006900135300001200204490000700216520110100223653002101324653001301345653003001358653005701388653000901445653003201454653002601486653002001512100001401532700001301546700001501559856010701574 2003 eng d00aA Bayesian method for the detection of item preknowledge in computerized adaptive testing0 aBayesian method for the detection of item preknowledge in comput a121-1370 v273 aWith the increased use of continuous testing in computerized adaptive testing, new concerns about test security have evolved, such as how to ensure that items in an item pool are safeguarded from theft. In this article, procedures to detect test takers using item preknowledge are explored. When test takers use item preknowledge, their item responses deviate from the underlying item response theory (IRT) model, and estimated abilities may be inflated. This deviation may be detected through the use of person-fit indices. 
A Bayesian posterior log odds ratio index is proposed for detecting the use of item preknowledge. In this approach to person fit, the estimated probability that each test taker has preknowledge of items is updated after each item response. These probabilities are based on the IRT parameters, a model specifying the probability that each item has been memorized, and the test taker's item responses. Simulations based on an operational computerized adaptive test (CAT) pool are used to demonstrate the use of the odds ratio index. (PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10aCheating10aComputer Assisted Testing10aIndividual Differences computerized adaptive testing10aItem10aItem Analysis (Statistical)10aMathematical Modeling10aResponse Theory1 aMcLeod, L1 aLewis, C1 aThissen, D uhttp://www.iacat.org/content/bayesian-method-detection-item-preknowledge-computerized-adaptive-testing00492nas a2200121 4500008004100000245009400041210006900135300001500204490000700219100002000226700001500246856010900261 2003 eng d00aA Bayesian method for the detection of item preknowledge in computerized adaptive testing0 aBayesian method for the detection of item preknowledge in comput a2, 121-1370 v271 aMcLeod L. 
D., C1 aThissen, D uhttp://www.iacat.org/content/bayesian-method-detection-item-preknowledge-computerized-adaptive-testing-000404nas a2200133 4500008004100000245005000041210004700091300001200138490000800150100001700158700001100175700001200186856007200198 1999 eng d00aA Bayesian random effects model for testlets 0 aBayesian random effects model for testlets a153-1680 v 641 aBradlow, E T1 aWainer1 aWang, X uhttp://www.iacat.org/content/bayesian-random-effects-model-testlets00427nas a2200109 4500008004100000245007800041210006900119300001000188490000700198100001600205856009600221 1999 eng d00aBenefits from computerized adaptive testing as seen in simulation studies0 aBenefits from computerized adaptive testing as seen in simulatio a91-980 v151 aHornke, L F uhttp://www.iacat.org/content/benefits-computerized-adaptive-testing-seen-simulation-studies00412nas a2200109 4500008004100000245006700041210006500108260001700173100001600190700001300206856008300219 1998 eng d00aA Bayesian approach to detection of item preknowledge in a CAT0 aBayesian approach to detection of item preknowledge in a CAT aSan Diego CA1 aMcLeod, L D1 aLewis, C uhttp://www.iacat.org/content/bayesian-approach-detection-item-preknowledge-cat01291nas a2200145 4500008004100000245007300041210006900114300001200183490000700195520080400202100001701006700001501023700001101038856009601049 1998 eng d00aBayesian identification of outliers in computerized adaptive testing0 aBayesian identification of outliers in computerized adaptive tes a910-9190 v933 aWe consider the problem of identifying examinees with aberrant response patterns in a computerized adaptive test (CAT). The vector of responses yi of person i from the CAT comprises a multivariate response vector. Multivariate observations may be outlying in many different directions and we characterize specific directions as corresponding to outliers with different interpretations. 
We develop a class of outlier statistics to identify different types of outliers based on a control chart type methodology. The outlier methodology is adaptable to general longitudinal discrete data structures. We consider several procedures to judge how extreme a particular outlier is. Data from the National Council Licensure Examination (NCLEX) motivates our development and is used to illustrate the results.1 aBradlow, E T1 aWeiss, R E1 aCho, M uhttp://www.iacat.org/content/bayesian-identification-outliers-computerized-adaptive-testing00394nas a2200109 4500008004100000245005800041210005800099300001200157490000700169100002300176856008500199 1998 eng d00aBayesian item selection criteria for adaptive testing0 aBayesian item selection criteria for adaptive testing a201-2160 v631 avan der Linden, WJ uhttp://www.iacat.org/content/bayesian-item-selection-criteria-adaptive-testing-000430nas a2200097 4500008004100000245008700041210006900128260001500197100001300212856010700225 1997 eng d00aA Bayesian enhancement of Mantel Haenszel DIF analysis for computer adaptive tests0 aBayesian enhancement of Mantel Haenszel DIF analysis for compute aChicago IL1 aZwick, R uhttp://www.iacat.org/content/bayesian-enhancement-mantel-haenszel-dif-analysis-computer-adaptive-tests00392nas a2200109 4500008004100000245005800041210005800099300001200157490000700169100002300176856008300199 1996 eng d00aBayesian item selection criteria for adaptive testing0 aBayesian item selection criteria for adaptive testing a201-2160 v631 avan der Linden, WJ uhttp://www.iacat.org/content/bayesian-item-selection-criteria-adaptive-testing00503nas a2200097 4500008004100000245008200041210006900123260008500192100002300277856010500300 1996 eng d00aBayesian item selection criteria for adaptive testing (Research Report 96-01)0 aBayesian item selection criteria for adaptive testing Research R aTwente, The Netherlands: Department of Educational Measurement and Data Analysis1 avan der Linden, WJ 
uhttp://www.iacat.org/content/bayesian-item-selection-criteria-adaptive-testing-research-report-96-0100445nas a2200109 4500008004100000245007200041210006900113260002700182100001900209700001200228856009500240 1996 eng d00aBuilding a statistical foundation for computerized adaptive testing0 aBuilding a statistical foundation for computerized adaptive test aBanff, Alberta, Canada1 aChang, Hua-Hua1 aYing, Z uhttp://www.iacat.org/content/building-statistical-foundation-computerized-adaptive-testing00421nas a2200109 4500008004100000245006700041210006500108260002100173100001500194700001300209856008900222 1995 eng d00aA Bayesian computerized mastery model with multiple cut scores0 aBayesian computerized mastery model with multiple cut scores aSan Francisco CA1 aSmith, R L1 aLewis, C uhttp://www.iacat.org/content/bayesian-computerized-mastery-model-multiple-cut-scores00351nas a2200097 4500008004100000245004800041210004800089260001900137100002300156856007400179 1995 eng d00aBayesian item selection in adaptive testing0 aBayesian item selection in adaptive testing aMinneapolis MN1 avan der Linden, WJ uhttp://www.iacat.org/content/bayesian-item-selection-adaptive-testing00515nas a2200145 4500008004100000245008200041210006900123300001200192490000600204100001100210700001300221700001400234700001600248856010500264 1991 eng d00aBuilding algebra testlets: A comparison of hierarchical and linear structures0 aBuilding algebra testlets A comparison of hierarchical and linea axxx-xxx0 v81 aWainer1 aLewis, C1 aKaplan, B1 aBraswell, J uhttp://www.iacat.org/content/building-algebra-testlets-comparison-hierarchical-and-linear-structures00466nas a2200109 4500008004100000245009100041210006900132300001100201490000900212100001500221856012000236 1989 eng d00aBayesian adaptation during computer-based tests and computer-guided practice exercises0 aBayesian adaptation during computerbased tests and computerguide a89-1140 v5(1)1 aFrick, T W 
uhttp://www.iacat.org/content/bayesian-adaptation-during-computer-based-tests-and-computer-guided-practice-exercises00406nas a2200121 4500008004500000245005400045210005400099300001200153490000600165100001400171700001700185856008200202 1984 English 00aBias and Information of Bayesian Adaptive Testing0 aBias and Information of Bayesian Adaptive Testing a273-2850 v81 aWeiss, DJ1 aMcBride, J R uhttp://www.iacat.org/content/bias-and-information-bayesian-adaptive-testing-000400nas a2200121 4500008004100000245005400041210005400095300001200149490000600161100001400167700001700181856008000198 1984 eng d00aBias and information of Bayesian adaptive testing0 aBias and information of Bayesian adaptive testing a273-2850 v81 aWeiss, DJ1 aMcBride, J R uhttp://www.iacat.org/content/bias-and-information-bayesian-adaptive-testing00538nas a2200109 4500008004100000245007700041210006900118260010900187100001400296700001700310856010100327 1983 eng d00aBias and information of Bayesian adaptive testing (Research Report 83-2)0 aBias and information of Bayesian adaptive testing Research Repor aMinneapolis: University of Minnesota, Department of Psychology, Computerized Adaptive Testing Laboratory1 aWeiss, DJ1 aMcBride, J R uhttp://www.iacat.org/content/bias-and-information-bayesian-adaptive-testing-research-report-83-200502nas a2200097 4500008004100000245011200041210006900153260004600222100001400268856012200282 1979 eng d00aBayesian sequential design and analysis of dichotomous experiments with special reference to mental testing0 aBayesian sequential design and analysis of dichotomous experimen aPrinceton NJ: Educational Testing Service1 aOwen, R J uhttp://www.iacat.org/content/bayesian-sequential-design-and-analysis-dichotomous-experiments-special-reference-mental00431nas a2200109 4500008004100000245007700041210006900118300001200187490000600199100001700205856009900222 1977 eng d00aBayesian tailored testing and the 
influence of item bank charact a111-1200 v11 aJensema, C J uhttp://www.iacat.org/content/bayesian-tailored-testing-and-influence-item-bank-characteristics00433nas a2200109 4500008004100000245007700041210006900118300001200187490000600199100001700205856010100222 1977 En d00aBayesian Tailored Testing and the Influence of Item Bank Characteristics0 aBayesian Tailored Testing and the Influence of Item Bank Charact a111-1200 v11 aJensema, C J uhttp://www.iacat.org/content/bayesian-tailored-testing-and-influence-item-bank-characteristics-100478nas a2200097 4500008004100000245004100041210003900082260017700121100001700298856006500315 1977 eng d00aA brief overview of adaptive testing0 abrief overview of adaptive testing aD. J. Weiss (Ed.), Applications of computerized testing (Research Report 77-1). Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aMcBride, J R uhttp://www.iacat.org/content/brief-overview-adaptive-testing00352nas a2200109 4500008004100000245005000041210004700091300001100138490000600149100001300155856007400168 1977 eng d00aA broad-range tailored test of verbal ability0 abroadrange tailored test of verbal ability a95-1000 v11 aLord, FM uhttp://www.iacat.org/content/broad-range-tailored-test-verbal-ability00356nas a2200109 4500008004100000245005100041210004700092300001100139490000600150100001400156856007600170 1977 En d00aA Broad-Range Tailored Test of Verbal Ability 0 aBroadRange Tailored Test of Verbal Ability a95-1000 v11 aLord, F M uhttp://www.iacat.org/content/broad-range-tailored-test-verbal-ability-100457nas a2200097 4500008004100000245004400041210004200085260014400127100001700271856007100288 1976 eng d00aBandwidth, fidelity, and adaptive tests0 aBandwidth fidelity and adaptive tests aT. J. McConnell, Jr. (Ed.), CAT/C 2 1975: The second conference on computer-assisted test construction. 
Atlanta GA: Atlanta Public Schools.1 aMcBride, J R uhttp://www.iacat.org/content/bandwidth-fidelity-and-adaptive-tests00556nas a2200097 4500008004100000245007700041210006900118260015300187100001700340856010100357 1976 eng d00aBayesian tailored testing and the influence of item bank characteristics0 aBayesian tailored testing and the influence of item bank charact aC. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 82-89). Washington DC: U.S. Government Printing Office.1 aJensema, C J uhttp://www.iacat.org/content/bayesian-tailored-testing-and-influence-item-bank-characteristics-000479nas a2200097 4500008004100000245005000041210004800091260015300139100001300292856007600305 1976 eng d00aA broad range tailored test of verbal ability0 abroad range tailored test of verbal ability aC. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 75-78). Washington DC: U.S. Government Printing Office.1 aLord, FM uhttp://www.iacat.org/content/broad-range-tailored-test-verbal-ability-000490nas a2200097 4500008004100000245008200041210006900123260008100192100001500273856010400288 1975 eng d00aA basic test theory generalizable to tailored testing (Technical Report No 1)0 abasic test theory generalizable to tailored testing Technical Re aLos Angeles CA: University of Southern California, Department of Psychology.1 aCliff, N A uhttp://www.iacat.org/content/basic-test-theory-generalizable-tailored-testing-technical-report-no-100464nas a2200109 4500008004100000245009900041210006900140300001200209490000700221100001400228856011200242 1975 eng d00aA Bayesian sequential procedure for quantal response in the context of adaptive mental testing0 aBayesian sequential procedure for quantal response in the contex a351-3560 v701 aOwen, R J uhttp://www.iacat.org/content/bayesian-sequential-procedure-quantal-response-context-adaptive-mental-testing00435nas a2200097 
4500008004100000245009000041210006900131260001400200100001600214856010700230 1975 eng d00aBehavior of the maximum likelihood estimate in a simulated tailored testing situation0 aBehavior of the maximum likelihood estimate in a simulated tailo aIowa City1 aSamejima, F uhttp://www.iacat.org/content/behavior-maximum-likelihood-estimate-simulated-tailored-testing-situation00515nas a2200109 4500008004100000245007500041210006900116260008500185100001600270700001700286856010200303 1975 eng d00aBest test design and self-tailored testing (Research Memorandum No 19)0 aBest test design and selftailored testing Research Memorandum No aChicago: University of Chicago, Department of Education, Statistical Laboratory.1 aWright, B D1 aDouglas, G A uhttp://www.iacat.org/content/best-test-design-and-self-tailored-testing-research-memorandum-no-1900367nas a2200097 4500008004100000245005100041210004500092260004600137100001300183856007300196 1975 eng d00aA broad range test of verbal ability (RB-75-5)0 abroad range test of verbal ability RB755 aPrinceton NJ: Educational Testing Service1 aLord, FM uhttp://www.iacat.org/content/broad-range-test-verbal-ability-rb-75-500360nas a2200109 4500008004100000245004600041210004400087260002400131100001100155700001400166856007000180 1974 eng d00aA Bayesian approach in sequential testing0 aBayesian approach in sequential testing aChicago ILc04/19741 aHsu, T1 aPingel, K uhttp://www.iacat.org/content/bayesian-approach-sequential-testing00420nas a2200097 4500008004100000245006800041210006300109260004600172100001400218856009000232 1969 eng d00aA Bayesian approach to tailored testing (Research Report 69-92)0 aBayesian approach to tailored testing Research Report 6992 aPrinceton NJ: Educational Testing Service1 aOwen, R J uhttp://www.iacat.org/content/bayesian-approach-tailored-testing-research-report-69-9200444nas a2200097 4500008004100000245007500041210006900116260004600185100001600231856009900247 1969 eng d00aBayesian methods in psychological testing 
(Research Bulletin RB-69-31)0 aBayesian methods in psychological testing Research Bulletin RB69 aPrinceton NJ: Educational Testing Service1 aNovick, M R uhttp://www.iacat.org/content/bayesian-methods-psychological-testing-research-bulletin-rb-69-31