Gierl, M. J., & Zhou, J. (2008). Computer adaptive-attribute testing: A new approach to cognitive diagnostic assessment. Vol. 216, pp. 29-39.
Abstract: The influence of interdisciplinary forces stemming from developments in cognitive science, mathematical statistics, educational psychology, and computing science is beginning to appear in educational and psychological assessment. Computer adaptive-attribute testing (CA-AT) is one example. The concepts and procedures in CA-AT lie at the intersection of computer adaptive testing and cognitive diagnostic assessment. CA-AT allows us to fuse the administrative benefits of computer adaptive testing with the psychological benefits of cognitive diagnostic assessment to produce an innovative, psychologically based adaptive testing approach. We describe the concepts behind CA-AT and illustrate how it can be used to promote formative, computer-based classroom assessment.
Keywords: cognition and assessment; cognitive diagnostic assessment; computer adaptive testing
URL: http://www.iacat.org/content/computer-adaptive-attribute-testing-new-approach-cognitive-diagnostic-assessment

Hol, A. M., Vorst, H. C. M., & Mellenbergh, G. J. (2007). Computerized adaptive testing for polytomous motivation items: Administration mode effects and a comparison with short forms. Vol. 31, pp. 412-429. ISSN 0146-6216.
Abstract: In a randomized experiment (n = 515), a conventional computerized test and a computerized adaptive test (CAT) were compared. The item pool consisted of 24 polytomous motivation items. Although the items were carefully selected, calibration data showed that Samejima's graded response model did not fit the data optimally, so a simulation study was done to assess the possible consequences of model misfit. CAT efficiency was studied by systematically comparing the CAT with two types of conventional fixed-length short forms, created to be good CAT competitors. Results showed no essential administration mode effects. Efficiency analyses showed that the CAT outperformed the short forms in almost all respects when results were aggregated along the latent trait scale. The real and simulated data results were very similar, indicating that the real data results were not affected by model misfit.
Keywords: 2220 Tests & Testing; Adaptive Testing; Attitude Measurement; computer adaptive testing; Computer Assisted Testing; items; Motivation; polytomous motivation; Statistical Validity; Test Administration; Test Forms; Test Items
URL: http://www.iacat.org/content/computerized-adaptive-testing-polytomous-motivation-items-administration-mode-effects-and

Thompson, N. A. (2007). A practitioner's guide to variable-length computerized classification testing. Vol. 12 (dated 7/1/2009).
Abstract: Variable-length computerized classification tests (CCTs; Lin & Spray, 2000; Thompson, 2006) are a powerful and efficient approach to testing for the purpose of classifying examinees into groups. CCTs are designed by specifying at least five technical components: psychometric model, calibrated item bank, starting point, item selection algorithm, and termination criterion. Several options exist for each of these components, creating a myriad of possible designs. Confusion among designs is exacerbated by the lack of a standardized nomenclature. This article outlines the components of a CCT, common options for each component, and the interaction of options for different components, so that practitioners may more efficiently design CCTs. It also offers a suggested nomenclature.
Keywords: CAT; classification; computer adaptive testing; computerized adaptive testing; Computerized classification testing
URL: http://www.iacat.org/content/practitioners-guide-variable-length-computerized-classification-testing

Penfield, R. D. (2006). Applying Bayesian item selection approaches to adaptive tests using polytomous items. Lawrence Erlbaum: US. Vol. 19, pp. 1-20. ISSN 0895-7347 (Print); 1532-4818 (Electronic).
Abstract: This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability estimation using the MEI and MPI approaches to the traditional maximal item information (MII) approach. The results of the simulation study indicated that the MEI and MPI approaches led to superior efficiency of ability estimation compared with the MII approach. The superiority of the MEI and MPI approaches over the MII approach was greatest when the bank contained items having a relatively peaked information function.
Keywords: adaptive tests; Bayesian item selection; computer adaptive testing; maximum expected information; polytomous items; posterior weighted information
URL: http://www.iacat.org/content/applying-bayesian-item-selection-approaches-adaptive-tests-using-polytomous-items

Raîche, G., & Blais, J.-G. (2006). SIMCAT 1.0: A SAS computer program for simulating computer adaptive testing. Sage Publications: US. Vol. 30, pp. 60-61. ISSN 0146-6216 (Print).
Abstract: Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, they are not currently supported by the available software programs, and when such programs are available, their flexibility is limited. SIMCAT 1.0 is aimed at the simulation of adaptive testing sessions under different adaptive expected a posteriori (EAP) proficiency-level estimation methods (Blais & Raîche, 2005; Raîche & Blais, 2005) based on the one-parameter Rasch logistic model. These methods are all adaptive in the a priori proficiency-level estimation, the proficiency-level estimation bias correction, the integration interval, or a combination of these factors. Using these adaptive EAP estimation methods considerably diminishes the shrinking, and therefore biasing, effect of the estimated a priori proficiency level encountered when this a priori is fixed at a constant value independently of the previously computed proficiency level. SIMCAT 1.0 also computes empirical and estimated skewness and kurtosis coefficients, as well as the standard error, of the estimated proficiency-level sampling distribution. In this way, the program allows one to compare empirical and estimated properties of the estimated proficiency-level sampling distribution under different variations of the EAP estimation method: standard error and bias, as well as the skewness and kurtosis coefficients.
Keywords: computer adaptive testing; computer program; estimated proficiency level; Monte Carlo methodologies; Rasch logistic model
URL: http://www.iacat.org/content/simcat-10-sas-computer-program-simulating-computer-adaptive-testing

Desmarais, M. C., & Pu, X. (2005). A Bayesian student model without hidden nodes and its comparison with item response theory. IOS Press: Netherlands. Vol. 15, pp. 291-323. ISSN 1560-4292 (Print); 1560-4306 (Electronic).
Abstract: The Bayesian framework offers a number of techniques for inferring an individual's knowledge state from evidence of mastery of concepts or skills. A typical application where such a technique can be useful is computer adaptive testing (CAT). A Bayesian modeling scheme, POKS, is proposed and compared to traditional item response theory (IRT), which has been the prevalent CAT approach for the last three decades. POKS is based on the theory of knowledge spaces and constructs item-to-item graph structures without hidden nodes. It aims to offer an effective knowledge assessment method with an efficient algorithm for learning the graph structure from data. We review the different Bayesian approaches to modeling student ability assessment and discuss how POKS relates to them. The performance of POKS is compared to the IRT two-parameter logistic model. Experimental results over a 34-item Unix test and a 160-item French language test show that both approaches can classify examinees as master or non-master effectively and efficiently, with relatively comparable performance. However, more significant differences are found in favor of POKS for a second task that consists of predicting individual question item outcomes. Implications of these results for adaptive testing and student modeling are discussed, as well as the limitations and advantages of POKS, namely the issue of integrating concepts into its structure.
Keywords: Bayesian Student Model; computer adaptive testing; hidden nodes; Item Response Theory
URL: http://www.iacat.org/content/bayesian-student-model-without-hidden-nodes-and-its-comparison-item-response-theory

Cook, K. F., O'Malley, K. J., & Roddey, T. S. (2005). Dynamic assessment of health outcomes: Time to let the CAT out of the bag? Blackwell Publishing: United Kingdom. Vol. 40, pp. 1694-1711. ISSN 0017-9124 (Print); 1475-6773 (Electronic).
Abstract: Background: The use of item response theory (IRT) to measure self-reported outcomes has burgeoned in recent years. Perhaps the most important application of IRT is computer-adaptive testing (CAT), a measurement approach in which the selection of items is tailored for each respondent. Objective: To provide an introduction to the use of CAT in the measurement of health outcomes, describe several IRT models that can be used as the basis of CAT, and discuss practical issues associated with the use of adaptive scaling in research settings. Principal Points: The development of a CAT requires several steps that are not required in the development of a traditional measure, including identification of "starting" and "stopping" rules. CAT's most attractive advantage is its efficiency: greater measurement precision can be achieved with fewer items. Disadvantages of CAT include the high cost and level of technical expertise required to develop one. Conclusions: Researchers, clinicians, and patients benefit from the availability of psychometrically rigorous measures that are not burdensome. CAT outcome measures hold substantial promise in this regard, but their development is not without challenges.
Keywords: computer adaptive testing; Item Response Theory; self reported health outcomes
URL: http://www.iacat.org/content/dynamic-assessment-health-outcomes-time-let-cat-out-bag

Paek, P. (2005). Recent trends in comparability studies (05-05). Pearson, August 2005.
Keywords: computer adaptive testing; Computerized assessment; differential item functioning; Mode effects
URL: http://www.iacat.org/content/recent-trends-comparability-studies