01075nas a2200133 4500008003900000245006600039210006600105490000700171520062900178100001400807700001900821700001700840856008400857 2015 d00aEvaluating Content Alignment in Computerized Adaptive Testing0 aEvaluating Content Alignment in Computerized Adaptive Testing0 v343 aThe alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do not use fixed test forms. This article describes the decisions made in the development of CATs that influence and might threaten content alignment. It outlines a process for evaluating alignment that is sensitive to these threats and gives an empirical example of the process.1 aWise, S L1 aKingsbury, G G1 aWebb, N. L. uhttp://www.iacat.org/evaluating-content-alignment-computerized-adaptive-testing01600nas a2200121 4500008004500000245011000045210006900155490000700224520114200231100001901373700001401392856007201406 2011 eng d00aCreating a K-12 Adaptive Test: Examining the Stability of Item Parameter Estimates and Measurement Scales0 aCreating a K12 Adaptive Test Examining the Stability of Item Par0 v123 a
Development of adaptive tests used in K-12 settings requires the creation of stable measurement scales to measure the growth of individual students from one grade to the next, and to measure change in groups from one year to the next. Accountability systems
like No Child Left Behind require stable measurement scales so that accountability has meaning across time. This study examined the stability of the measurement scales used with the Measures of Academic Progress. Difficulty estimates for test questions from the reading and mathematics scales were examined over a period ranging from 7 to 22 years. Results showed high correlations between item difficulty estimates from the time at which they were originally calibrated and the current calibration. The average drift in item difficulty estimates was less than .01 standard deviations. The average impact of change in item difficulty estimates was less than the smallest reported difference on the score scale for two actual tests. The findings of the study indicate that an IRT scale can be stable enough to allow consistent measurement of student achievement.
Traditional adaptive tests provide an efficient method for estimating student achievement levels by adjusting the characteristics of the test questions to match the performance of each student. These traditional adaptive tests are not designed to identify idiosyncratic knowledge patterns. As students move through their education, they learn content in any number of different ways related to their learning style and cognitive development. This may result in a student having different achievement levels from one content area to another within a domain of content. This study investigates whether such idiosyncratic knowledge patterns exist. It discusses the differences between idiosyncratic knowledge patterns and multidimensionality. Finally, it proposes an adaptive testing procedure that can be used to identify a student’s areas of strength and weakness more efficiently than current adaptive testing approaches. The findings of the study indicate that a fairly large number of students may have test results that are influenced by their idiosyncratic knowledge patterns. The findings suggest that these patterns persist across time for a large number of students, and that the differences in student performance between content areas within a subject domain are large enough to be useful in instruction. Given the existence of idiosyncratic patterns of knowledge, the proposed testing procedure may enable us to provide more useful information to teachers. It should also allow us to differentiate between idiosyncratic patterns of knowledge and important multidimensionality in the testing data.
10acomputerized adaptive testing1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/icat-adaptive-testing-procedure-identification-idiosyncratic-knowledge-patterns00504nas a2200121 4500008004100000245009900041210006900140300001200209490001100221100001900232700001600251856011500267 2008 eng d00aICAT: An adaptive testing procedure for the identification of idiosyncratic knowledge patterns0 aICAT An adaptive testing procedure for the identification of idi a40–480 v216(1)1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/icat-adaptive-testing-procedure-identification-idiosyncratic-knowledge-patterns-000575nas a2200109 4500008004100000245010400041210006900145260009700214100001900311700001600330856011900346 2007 eng d00aICAT: An adaptive testing procedure to allow the identification of idiosyncratic knowledge patterns0 aICAT An adaptive testing procedure to allow the identification o aD. J. Weiss (Ed.). Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/icat-adaptive-testing-procedure-allow-identification-idiosyncratic-knowledge-patterns00415nas a2200109 4500008004100000245006300041210006300104260001700167100001900184700001400203856008800217 2004 eng d00aComputer adaptive testing and the No Child Left Behind Act0 aComputer adaptive testing and the No Child Left Behind Act aSan Diego CA1 aKingsbury, G G1 aHauser, C uhttp://www.iacat.org/content/computer-adaptive-testing-and-no-child-left-behind-act00528nas a2200109 4500008004100000245010600041210006900147260002500216653003400241100001900275856012400294 2002 eng d00aAn empirical comparison of achievement level estimates from adaptive tests and paper-and-pencil tests0 aempirical comparison of achievement level estimates from adaptiv aNew Orleans, LA. USA10acomputerized adaptive testing1 aKingsbury, G G uhttp://www.iacat.org/content/empirical-comparison-achievement-level-estimates-adaptive-tests-and-paper-and-pencil-tests00478nas a2200097 4500008004100000245010600041210006900147260001900216100001900235856012600254 2002 eng d00aAn empirical comparison of achievement level estimates from adaptive tests and paper-and-pencil tests0 aempirical comparison of achievement level estimates from adaptiv aNew Orleans LA1 aKingsbury, G G uhttp://www.iacat.org/content/empirical-comparison-achievement-level-estimates-adaptive-tests-and-paper-and-pencil-tests-000488nas a2200121 4500008003900000245009100039210006900130300001200199490000700211100001400218700001900232856011500251 2000 d00aPractical issues in developing and maintaining a computerized adaptive testing program0 aPractical issues in developing and maintaining a computerized ad a135-1550 v211 aWise, S L1 aKingsbury, G G uhttp://www.iacat.org/content/practical-issues-developing-and-maintaining-computerized-adaptive-testing-program00490nas a2200109 4500008004100000245009900041210006900140260002100209100001900230700001200249856011900261 1999 eng d00aA comparison of conventional and adaptive testing procedures for making single-point decisions0 acomparison of conventional and adaptive testing procedures for m aMontreal, Canada1 aKingsbury, G G1 aZara, A uhttp://www.iacat.org/content/comparison-conventional-and-adaptive-testing-procedures-making-single-point-decisions00521nas a2200109 4500008004100000245006300041210006300104260012100167100001900288700001600307856008800323 1999 eng d00aDeveloping computerized adaptive tests for school children0 aDeveloping computerized adaptive tests for school children aF. 
Drasgow and J. B. Olson-Buchanan (Eds.), Innovations in computerized assessment (pp. 93-115). Mahwah NJ: Erlbaum.1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/developing-computerized-adaptive-tests-school-children00504nas a2200109 4500008004100000245010600041210006900147260002100216100001900237700001200256856012600268 1999 eng d00aA procedure to compare conventional and adaptive testing procedures for making single-point decisions0 aprocedure to compare conventional and adaptive testing procedure aMontreal, Canada1 aKingsbury, G G1 aZara, A uhttp://www.iacat.org/content/procedure-compare-conventional-and-adaptive-testing-procedures-making-single-point-decisions00403nas a2200097 4500008004100000245006700041210006700108260002100175100001900196856009000215 1999 eng d00aStandard errors of proficiency estimates in stratum scored CAT0 aStandard errors of proficiency estimates in stratum scored CAT aMontreal, Canada1 aKingsbury, G G uhttp://www.iacat.org/content/standard-errors-proficiency-estimates-stratum-scored-cat00328nas a2200097 4500008004100000245004200041210004200083260001500125100001900140856007100159 1997 eng d00aItem pool development and maintenance0 aItem pool development and maintenance aChicago IL1 aKingsbury, G G uhttp://www.iacat.org/content/item-pool-development-and-maintenance00464nas a2200097 4500008004100000245010700041210006900148260001500217100001900232856011500251 1997 eng d00aSome questions that must be addressed to develop and maintain an item pool for use in an adaptive test0 aSome questions that must be addressed to develop and maintain an aChicago IL1 aKingsbury, G G uhttp://www.iacat.org/content/some-questions-must-be-addressed-develop-and-maintain-item-pool-use-adaptive-test00311nas a2200097 4500008004100000245003700041210003700078260001300115100001900128856006600147 1996 eng d00aItem review and adaptive testing0 aItem review and adaptive testing aNew York1 aKingsbury, G G uhttp://www.iacat.org/content/item-review-and-adaptive-testing00513nas a2200133 4500008004100000245008100041210006900122300001000191490000700201653003400208100001900242700001600261856010200277 1993 eng d00aAssessing the utility of item response models: computerized adaptive testing0 aAssessing the utility of item response models computerized adapt a21-270 v1210acomputerized adaptive testing1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/assessing-utility-item-response-models-computerized-adaptive-testing00414nas a2200121 4500008004100000245005600041210005200097260001500149100001400164700001900178700001600197856007900213 1993 eng d00aAn investigation of restricted self-adapted testing0 ainvestigation of restricted selfadapted testing aAtlanta GA1 aWise, S L1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/investigation-restricted-self-adapted-testing00455nas a2200097 4500008004100000245009900041210006900140100001900209700001600228856011300244 1993 eng d00aA practical examination of the use of free-response questions in computerized adaptive testing0 apractical examination of the use of freeresponse questions in co1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/practical-examination-use-free-response-questions-computerized-adaptive-testing00367nas a2200085 4500008004100000245006800041210006500109100001900174856008800193 1991 eng d00aA comparison of procedures for content-sensitive item selection0 acomparison of procedures for contentsensitive item selection1 aKingsbury, G G 
uhttp://www.iacat.org/content/comparison-procedures-content-sensitive-item-selection00496nas a2200121 4500008004100000245009900041210006900140300001200209490000600221100001900227700001200246856011600258 1991 eng d00aA comparison of procedures for content-sensitive item selection in computerized adaptive tests0 acomparison of procedures for contentsensitive item selection in a241-2610 v41 aKingsbury, G G1 aZara, A uhttp://www.iacat.org/content/comparison-procedures-content-sensitive-item-selection-computerized-adaptive-tests00461nas a2200109 4500008004100000245009200041210006900133300000800202490001100210100001900221856011100240 1990 eng d00aAdapting adaptive testing: Using the MicroCAT Testing System in a local school district0 aAdapting adaptive testing Using the MicroCAT Testing System in a a3-60 v29 (2)1 aKingsbury, G G uhttp://www.iacat.org/content/adapting-adaptive-testing-using-microcat-testing-system-local-school-district00454nas a2200109 4500008004100000245008100041210006900122260001400191100001900205700001600224856010400240 1990 eng d00aAssessing the utility of item response models: Computerized adaptive testing0 aAssessing the utility of item response models Computerized adapt aBoston MA1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/assessing-utility-item-response-models-computerized-adaptive-testing-000520nas a2200109 4500008004100000245013200041210006900173260001800242100001900260700001600279856011500295 1989 eng d00aAssessing the impact of using item parameter estimates obtained from paper-and-pencil testing for computerized adaptive testing0 aAssessing the impact of using item parameter estimates obtained aSan Francisco1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/assessing-impact-using-item-parameter-estimates-obtained-paper-and-pencil-testing00434nas a2200121 4500008004100000245006700041210006700108300001200175490000600187100001900193700001200212856008800224 1989 eng d00aProcedures for selecting items for computerized adaptive tests0 aProcedures for selecting items for computerized adaptive tests a359-3750 v21 aKingsbury, G G1 aZara, A uhttp://www.iacat.org/content/procedures-selecting-items-computerized-adaptive-tests00509nas a2200109 4500008004100000245011200041210006900153260001900222100001900241700001600260856012300276 1988 eng d00aA comparison of achievement level estimates from computerized adaptive testing and paper-and-pencil testing0 acomparison of achievement level estimates from computerized adap aNew Orleans LA1 aKingsbury, G G1 aHouser, R L uhttp://www.iacat.org/content/comparison-achievement-level-estimates-computerized-adaptive-testing-and-paper-and-pencil00479nas a2200121 4500008004100000245008700041210006900128300001000197490001100207100001900218700001200237856010800249 1988 eng d00aComputerized adaptive testing: A four-year-old pilot study shows that CAT can work0 aComputerized adaptive testing A fouryearold pilot study shows th a73-760 v16 (4)1 aKingsbury, G G1 aet al. uhttp://www.iacat.org/content/computerized-adaptive-testing-four-year-old-pilot-study-shows-cat-can-work00518nas a2200097 4500008004100000245005100041210005000092260018200142100001900324856007700343 1986 eng d00aComputerized adaptive testing: A pilot project0 aComputerized adaptive testing A pilot project aW. C. Ryan (ed.), Proceedings: NECC 86, National Educational Computing Conference (pp. 172-176). 
Eugene OR: University of Oregon, International Council on Computers in Education.1 aKingsbury, G G uhttp://www.iacat.org/content/computerized-adaptive-testing-pilot-project00649nas a2200121 4500008004100000245022600041210006900267300000900336490000700345653003400352100001900386856012200405 1985 eng d00aAdaptive self-referenced testing as a procedure for the measurement of individual change due to instruction: A comparison of the reliabilities of change estimates obtained from conventional and adaptive testing procedures0 aAdaptive selfreferenced testing as a procedure for the measureme a30570 v4510acomputerized adaptive testing1 aKingsbury, G G uhttp://www.iacat.org/content/adaptive-self-referenced-testing-procedure-measurement-individual-change-due-instruction00642nam a2200097 4500008004100000245022200041210006900263260007500332100001900407856011800426 1984 eng d00aAdaptive self-referenced testing as a procedure for the measurement of individual change in instruction: A comparison of the reliabilities of change estimates obtained from conventional and adaptive testing procedures0 aAdaptive selfreferenced testing as a procedure for the measureme aUnpublished doctoral dissertation, University of Minnesota, Minneapolis1 aKingsbury, G G uhttp://www.iacat.org/content/adaptive-self-referenced-testing-procedure-measurement-individual-change-instruction00621nas a2200109 4500008004100000245009800041210006900139260014800208100001900356700001400375856012200389 1983 eng d00aA comparison of IRT-based adaptive mastery testing and a sequential mastery testing procedure0 acomparison of IRTbased adaptive mastery testing and a sequential aD. J. Weiss (Ed.), New horizons in testing: Latent trait test theory and computerized adaptive testing (pp. 257-283). New York: Academic Press.1 aKingsbury, G G1 aWeiss, DJ uhttp://www.iacat.org/content/comparison-irt-based-adaptive-mastery-testing-and-sequential-mastery-testing-procedure-000535nas a2200121 4500008004100000245009900041210006900140260003900209300001200248100001900260700001400279856012000293 1983 eng d00aA comparison of IRT-based adaptive mastery testing and a sequential mastery testing procedure.0 acomparison of IRTbased adaptive mastery testing and a sequential aNew York, NY. USAbAcademic Press. 
a258-2831 aKingsbury, G G1 aWeiss, DJ uhttp://www.iacat.org/content/comparison-irt-based-adaptive-mastery-testing-and-sequential-mastery-testing-procedure00601nas a2200109 4500008004100000245010900041210006900150260011400219100001900333700001400352856012500366 1981 eng d00aA validity comparison of adaptive and conventional strategies for mastery testing (Research Report 81-3)0 avalidity comparison of adaptive and conventional strategies for aMinneapolis, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory1 aKingsbury, G G1 aWeiss, DJ uhttp://www.iacat.org/content/validity-comparison-adaptive-and-conventional-strategies-mastery-testing-research-report-8100630nas a2200109 4500008004100000245014500041210006900186260011400255100001900369700001400388856011800402 1980 eng d00aAn alternate-forms reliability and concurrent validity comparison of Bayesian adaptive and conventional ability tests (Research Report 80-5)0 aalternateforms reliability and concurrent validity comparison of aMinneapolis, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory1 aKingsbury, G G1 aWeiss, DJ uhttp://www.iacat.org/content/alternate-forms-reliability-and-concurrent-validity-comparison-bayesian-adaptive-and00608nas a2200109 4500008004100000245012300041210006900164260011400233100001900347700001400366856011800380 1980 eng d00aA comparison of adaptive, sequential, and conventional testing strategies for mastery decisions (Research Report 80-4)0 acomparison of adaptive sequential and conventional testing strat aMinneapolis, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory1 aKingsbury, G G1 aWeiss, DJ uhttp://www.iacat.org/content/comparison-adaptive-sequential-and-conventional-testing-strategies-mastery-decisions00706nas a2200109 4500008004100000245009600041210006900137260024100206100001900447700001400466856011600480 1980 eng d00aA comparison of ICC-based adaptive mastery testing and the Waldian probability ratio method0 acomparison of ICCbased adaptive mastery testing and the Waldian aD. J. Weiss (Ed.). Proceedings of the 1979 Computerized Adaptive Testing Conference (pp. 120-139). Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory1 aKingsbury, G G1 aWeiss, DJ uhttp://www.iacat.org/content/comparison-icc-based-adaptive-mastery-testing-and-waldian-probability-ratio-method00526nas a2200109 4500008004100000245007800041210006900119260009700188100001900285700001400304856009800318 1979 eng d00aAn adaptive testing strategy for mastery decisions (Research Report 79-5)0 aadaptive testing strategy for mastery decisions Research Report aMinneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aKingsbury, G G1 aWeiss, DJ uhttp://www.iacat.org/content/adaptive-testing-strategy-mastery-decisions-research-report-79-500560nas a2200121 4500008004100000245009900041210006900140260007200209100001500281700001400296700001900310856010900329 1977 eng d00aCalibration of an item pool for the adaptive measurement of achievement (Research Report 77-5)0 aCalibration of an item pool for the adaptive measurement of achi aMinneapolis: Department of Psychology, Psychometric Methods Program1 aBejar, I I1 aWeiss, DJ1 aKingsbury, G G uhttp://www.iacat.org/content/calibration-item-pool-adaptive-measurement-achievement-research-report-77-5