01532nas a2200169 4500008003900000245007100039210006900110300001200179490000700191520103000198100001601228700001801244700001801262700001801280700001901298856004501317 2018 d00aLatent Class Analysis of Recurrent Events in Problem-Solving Items0 aLatent Class Analysis of Recurrent Events in ProblemSolving Item a478-4980 v423 aComputer-based assessment of complex problem-solving abilities is becoming increasingly popular. In such an assessment, the entire problem-solving process of an examinee is recorded, providing detailed information about the individual, such as behavioral patterns, speed, and learning trajectory. The problem-solving processes are recorded in a computer log file, which is a time-stamped documentation of events related to task completion. As opposed to cross-sectional response data from traditional tests, process data in log files are massive and irregularly structured, calling for effective exploratory data analysis methods. Motivated by a specific complex problem-solving item “Climate Control” in the 2012 Programme for International Student Assessment, the authors propose a latent class analysis approach to analyzing the events that occurred in the problem-solving processes. The exploratory latent class analysis yields meaningful latent classes. Simulation studies are conducted to evaluate the proposed approach.1 aXu, Haochen1 aFang, Guanhua1 aChen, Yunxiao1 aLiu, Jingchen1 aYing, Zhiliang uhttps://doi.org/10.1177/014662161774832503826nas a2200157 4500008004100000245008500041210006900126260005500195520325800250653000803508653002203516653002303538100001603561700002003577856007103597 2017 eng d00aA Large-Scale Progress Monitoring Application with Computerized Adaptive Testing0 aLargeScale Progress Monitoring Application with Computerized Ada aNiigata, JapanbNiigata Seiryo Universityc08/20173 a
Many conventional assessment tools are available to teachers in schools for monitoring student progress in a formative manner. The outcomes of these assessment tools are essential to teachers’ instructional modifications and schools’ data-driven educational strategies, such as using remedial activities and planning instructional interventions for students with learning difficulties. When measuring student progress toward instructional goals or outcomes, assessments should be not only highly precise but also sensitive to individual change in learning. Unlike conventional paper-pencil assessments, which are usually not appropriate for every student, computerized adaptive tests (CATs) can estimate growth consistently with minimal and stable error. Therefore, CATs can be used as a progress monitoring tool in measuring student growth.
This study focuses on an operational CAT assessment that has been used for measuring student growth in reading during the academic school year. The sample of this study consists of nearly 7 million students from the 1st grade to the 12th grade in the US. The students received a CAT-based reading assessment periodically during the school year. The purpose of these periodic assessments is to measure the growth in students’ reading achievement and identify the students who may need additional instructional support (e.g., academic interventions). Using real data, this study aims to address the following research questions: (1) How many CAT administrations are necessary to make psychometrically sound decisions about the need for instructional changes in the classroom or when to provide academic interventions? (2) What is the ideal amount of time between CAT administrations to capture student growth for the purpose of producing meaningful decisions from assessment results?
To address these research questions, we first used the Theil-Sen estimator for robustly fitting a regression line to each student’s test scores obtained from a series of CAT administrations. Next, we used the conditional standard error of measurement (cSEM) from the CAT administrations to create an error band around the Theil-Sen slope (i.e., student growth rate). This process resulted in the normative slope values across all the grade levels. The optimal number of CAT administrations was established from grade-level regression results. The amount of time needed for progress monitoring was determined by calculating the amount of time required for a student to show growth beyond the median cSEM value for each grade level. The results showed that the normative slope values were the highest for lower grades and declined steadily as grade level increased. The results also suggested that the CAT-based reading assessment is most useful for grades 1 through 4, since most struggling readers requiring an intervention appear to be within this grade range. Because CAT yielded very similar cSEM values across administrations, the amount of error in the progress monitoring decisions did not seem to depend on the number of CAT administrations.
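The slope-plus-error-band procedure this abstract describes can be sketched in a few lines: fit a robust Theil-Sen slope to a student's scores across CAT administrations, then count growth as reliable only when the modeled gain exceeds the median cSEM. This is an illustrative reconstruction, not the authors' code; the week numbers, scale scores, and cSEM values below are invented for the example.

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(times, scores):
    """Median of all pairwise slopes; robust to a single aberrant test score."""
    slopes = [(scores[j] - scores[i]) / (times[j] - times[i])
              for i, j in combinations(range(len(times)), 2)]
    return median(slopes)

def shows_reliable_growth(times, scores, csems):
    """Flag growth only if the modeled total gain over the monitoring window
    exceeds the error band built from the median cSEM."""
    slope = theil_sen_slope(times, scores)
    gain = slope * (times[-1] - times[0])
    return gain > median(csems)

# Illustrative data: scale scores from 4 CAT administrations over 30 weeks.
weeks = [0, 10, 20, 30]
scores = [200.0, 206.0, 211.0, 218.0]
csems = [5.1, 5.0, 5.2, 5.0]
print(theil_sen_slope(weeks, scores))            # growth rate, points per week
print(shows_reliable_growth(weeks, scores, csems))
```

The same machinery answers both research questions in miniature: the number of administrations is the smallest series for which the slope's band clears the cSEM, and the spacing is the time needed for `slope * elapsed_time` to exceed the median cSEM.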
10aCAT10aLarge-Scale tests10aProcess monitoring1 aBulut, Okan1 aCormier, Damien uhttps://drive.google.com/open?id=1uGbCKenRLnqTxImX1fZicR2c7GRV6Udc00666nas a2200193 4500008003900000022001400039245008000053210006900133300001000202490000600212653004000218653002600258653003300284653002600317653002100343100002200364700002600386856006000412 2017 d a2165-659200aLatent-Class-Based Item Selection for Computerized Adaptive Progress Tests0 aLatentClassBased Item Selection for Computerized Adaptive Progre a22-430 v510acomputerized adaptive progress test10aitem selection method10aKullback-Leibler information10aLatent class analysis10alog-odds scoring1 avan Buuren, Nikky1 aEggen, Theo, J. H. M. uhttp://iacat.org/jcat/index.php/jcat/article/view/62/2901220nas a2200133 4500008003900000022001400039245003600053210003600089300001400125490000700139520088200146100001701028856004101045 2013 d a1745-398400aLongitudinal Multistage Testing0 aLongitudinal Multistage Testing a447–4680 v503 aThis article introduces longitudinal multistage testing (lMST), a special form of multistage testing (MST), as a method for adaptive testing in longitudinal large-scale studies. In lMST designs, test forms of different difficulty levels are used, whereas the values on a pretest determine the routing to these test forms. Since lMST allows for testing in paper and pencil mode, lMST may represent an alternative to conventional testing (CT) in assessments for which other adaptive testing designs are not applicable. In this article the performance of lMST is compared to CT in terms of test targeting as well as bias and efficiency of ability and change estimates. Using a simulation study, the effect of the stability of ability across waves, the difficulty level of the different test forms, and the number of link items between the test forms were investigated.
1 aPohl, Steffi uhttp://dx.doi.org/10.1111/jedm.1202800578nas a2200133 4500008004100000245007700041210006900118260011100187100001000298700001400308700001400322700001100336856009700347 2009 eng d00aLimiting item exposure for target difficulty ranges in a high-stakes CAT0 aLimiting item exposure for target difficulty ranges in a highsta aD. J. Weiss (Ed.), Proceedings of the 2009 GMAC Conference on Computerized Adaptive Testing. {PDF File, 1.1 aLi, X1 aBecker, K1 aGorham, J1 aWoo, A uhttp://www.iacat.org/content/limiting-item-exposure-target-difficulty-ranges-high-stakes-cat01919nas a2200169 4500008004100000020002200041245011800063210006900181250001500250260000800265300001100273490000700284520130600291100001201597700001401609856012601623 2009 eng d a0962-9343 (Print)00aLogistics of collecting patient-reported outcomes (PROs) in clinical practice: an overview and practical examples0 aLogistics of collecting patientreported outcomes PROs in clinica a2009/01/20 cFeb a125-360 v183 aPURPOSE: Interest in collecting patient-reported outcomes (PROs), such as health-related quality of life (HRQOL), health status reports, and patient satisfaction is on the rise and practical aspects of collecting PROs in clinical practice are becoming more important. The purpose of this paper is to draw the attention to a number of issues relevant for a successful integration of PRO measures into the daily work flow of busy clinical settings. METHODS: The paper summarizes the results from a breakout session held at an ISOQOL special topic conference for PRO measures in clinical practice in 2007. RESULTS: Different methodologies of collecting PROs are discussed, and the support needed for each methodology is highlighted. The discussion is illustrated by practical real-life examples from early adaptors who administered paper-pencil, or electronic PRO assessments (ePRO) for more than a decade. 
The paper also reports about new experiences with more recent technological developments, such as SmartPens and Computer Adaptive Tests (CATs) in daily practice. CONCLUSIONS: Methodological and logistical issues determine the resources needed for a successful integration of PRO measures into daily work flow procedures and influence significantly the usefulness of PRO data for clinical practice.1 aRose, M1 aBezjak, A uhttp://www.iacat.org/content/logistics-collecting-patient-reported-outcomes-pros-clinical-practice-overview-and-practical03232nas a2200397 4500008004100000020002700041245014200068210006900210250001500279260001100294300001200305490000700317520193600324653002702260653003002287653001002317653000902327653002202336653003602358653001602394653002402410653004402434653001102478653001602489653002602505653003002531653003002561653003102591100001302622700001402635700001502649700001402664700001702678700001502695856012402710 2008 eng d a1528-1159 (Electronic)00aLetting the CAT out of the bag: Comparing computer adaptive tests and an 11-item short form of the Roland-Morris Disability Questionnaire0 aLetting the CAT out of the bag Comparing computer adaptive tests a2008/05/23 cMay 20 a1378-830 v333 aSTUDY DESIGN: A post hoc simulation of a computer adaptive administration of the items of a modified version of the Roland-Morris Disability Questionnaire. OBJECTIVE: To evaluate the effectiveness of adaptive administration of back pain-related disability items compared with a fixed 11-item short form. SUMMARY OF BACKGROUND DATA: Short form versions of the Roland-Morris Disability Questionnaire have been developed. An alternative to paper-and-pencil short forms is to administer items adaptively so that items are presented based on a person's responses to previous items. Theoretically, this allows precise estimation of back pain disability with administration of only a few items. 
MATERIALS AND METHODS: Data were gathered from 2 previously conducted studies of persons with back pain. An item response theory model was used to calibrate scores based on all items, items of a paper-and-pencil short form, and several computer adaptive tests (CATs). RESULTS: Correlations between each CAT condition and scores based on a 23-item version of the Roland-Morris Disability Questionnaire ranged from 0.93 to 0.98. Compared with an 11-item short form, an 11-item CAT produced scores that were significantly more highly correlated with scores based on the 23-item scale. CATs with even fewer items also produced scores that were highly correlated with scores based on all items. For example, scores from a 5-item CAT had a correlation of 0.93 with full scale scores. Seven- and 9-item CATs correlated at 0.95 and 0.97, respectively. A CAT with a standard-error-based stopping rule produced scores that correlated at 0.95 with full scale scores. CONCLUSION: A CAT-based back pain-related disability measure may be a valuable tool for use in clinical and research contexts. 
Use of CAT for other common measures in back pain research, such as other functional scales or measures of psychological distress, may offer similar advantages.10a*Disability Evaluation10a*Health Status Indicators10aAdult10aAged10aAged, 80 and over10aBack Pain/*diagnosis/psychology10aCalibration10aComputer Simulation10aDiagnosis, Computer-Assisted/*standards10aHumans10aMiddle Aged10aModels, Psychological10aPredictive Value of Tests10aQuestionnaires/*standards10aReproducibility of Results1 aCook, KF1 aChoi, S W1 aCrane, P K1 aDeyo, R A1 aJohnson, K L1 aAmtmann, D uhttp://www.iacat.org/content/letting-cat-out-bag-comparing-computer-adaptive-tests-and-11-item-short-form-roland-morris01442nas a2200145 4500008003900000022001400039245007100053210006900124300001400193490000700207520098500214100002001199700002201219856005501241 2008 d a1745-398400aLocal Dependence in an Operational CAT: Diagnosis and Implications0 aLocal Dependence in an Operational CAT Diagnosis and Implication a201–2230 v453 aThe accuracy of CAT scores can be negatively affected by local dependence if the CAT utilizes parameters that are misspecified due to the presence of local dependence and/or fails to control for local dependence in responses during the administration stage. This article evaluates the existence and effect of local dependence in a test of Mathematics Knowledge. Diagnostic tools were first used to evaluate the existence of local dependence in items that were calibrated under a 3PL model. A simulation study was then used to evaluate the effect of local dependence on the precision of examinee CAT scores when the 3PL model was used for selection and scoring. The diagnostic evaluation showed strong evidence for local dependence. The simulation suggested that local dependence in parameters had a minimal effect on CAT score precision, while local dependence in responses had a substantial effect on score precision, depending on the degree of local dependence present.
1 aPommerich, Mary1 aSegall, Daniel, O uhttp://dx.doi.org/10.1111/j.1745-3984.2008.00061.x02300nas a2200205 4500008004100000245009000041210007100131300000900202490000700211520161900218653002001837653001601857653001801873653002201891653001601913653001501929653001801944100001401962856011801976 2005 eng d00aLa Validez desde una óptica psicométrica [Validity from a psychometric perspective]0 aLa Validez desde una óptica psicométrica Validity from a psychom a9-200 v133 aEl estudio de la validez constituye el eje central de los análisis psicométricos de los instrumentos de medida. En esta comunicación se traza una breve nota histórica de los distintos modos de concebir la validez a lo largo de los tiempos, se comentan las líneas actuales, y se tratan de vislumbrar posibles vías futuras, teniendo en cuenta el impacto que las nuevas tecnologías informáticas están ejerciendo sobre los propios instrumentos de medida en Psicología y Educación. Cuestiones como los nuevos formatos multimedia de los ítems, la evaluación a distancia, el uso intercultural de las pruebas, las consecuencias de su uso, o los tests adaptativos informatizados, reclaman nuevas formas de evaluar y conceptualizar la validez. También se analizan críticamente algunos planteamientos recientes sobre el concepto de validez. The study of validity constitutes a central axis of psychometric analyses of measurement instruments. This paper presents a historical sketch of different modes of conceiving validity, with commentary on current views, and it attempts to predict future lines of research by considering the impact of new computerized technologies on measurement instruments in psychology and education. Factors such as the new multimedia format of items, distance assessment, the intercultural use of tests, the consequences of the latter, or the development of computerized adaptive tests demand new ways of conceiving and evaluating validity. 
Some recent thoughts about the concept of validity are also critically analyzed. (PsycINFO Database Record (c) 2005 APA ) (journal abstract)10aFactor Analysis10aMeasurement10aPsychometrics10aScaling (Testing)10aStatistical10aTechnology10aTest Validity1 aMuñiz, J uhttp://www.iacat.org/content/la-validez-desde-una-%C3%B3ptica-psicom%C3%A9trica-validity-psychometric-perspective00717nas a2200205 4500008004100000020002200041245010800063210006900171260003300240300000900273490000900282100002400291700002300315700002700338700002900365700002100394700002300415700002300438856005000461 2004 eng d a978-3-540-22948-300aA Learning Environment for English for Academic Purposes Based on Adaptive Tests and Task-Based Systems0 aLearning Environment for English for Academic Purposes Based on bSpringer Berlin / Heidelberg a1-110 v32201 aGonçalves, Jean, P1 aAluisio, Sandra, M1 aOliveira, Leandro, H M1 aOliveira Jr., Osvaldo, N1 aLester, James, C1 aVicari, Rosa Maria1 aParaguaçu, Fábio uhttp://dx.doi.org/10.1007/978-3-540-30139-4_100585nas a2200133 4500008004100000245010800041210006900149260003200218100002400250700001700274700001800291700001600309856012600325 2004 eng d00aA learning environment for english for academic purposes based on adaptive tests and task-based systems0 alearning environment for english for academic purposes based on b Springer Berlin Heidelberg1 aPITON-GONÇALVES, J1 aALUISIO, S M1 aMENDONCA, L H1 aNOVAES, O O uhttp://www.iacat.org/content/learning-environment-english-academic-purposes-based-adaptive-tests-and-task-based-systems-000466nas a2200085 4500008004400000245011400044210007100158100001500229856013600244 2002 Frendh 00aLa simulation d’un test adaptatif basé sur le modèle de Rasch [Simulation of a Rasch-based adaptive test]0 aLa simulation d un test adaptatif basé sur le modèle de Rasch Si1 aRaîche, G uhttp://www.iacat.org/content/la-simulation-d%E2%80%99un-test-adaptatif-bas%C3%A9-sur-le-mod%C3%A8le-de-rasch-simulation-rasch-based00474nas a2200097 
4500008004100000245004400041210004200085260016300127100001500290856007100305 2002 eng d00aLe testing adaptatif [Adaptive testing]0 aLe testing adaptatif Adaptive testing aD. R. Bertrand and J.G. Blais (Eds) : Les théories modernes de la mesure [Modern theories of measurement]. Sainte-Foy: Presses de l’Université du Québec.1 aRaîche, G uhttp://www.iacat.org/content/le-testing-adaptatif-adaptive-testing00659nam a2200097 4500008004100000245026000041210006900301260005400370100001300424856012400437 2000 eng d00aLa distribution d’échantillonnage en testing adaptatif en fonction de deux règles d’arrêt : selon l’erreur type et selon le nombre d’items administrés [Sampling distribution of the proficiency estimate in computerized adaptive testing according to two stopping...0 aLa distribution déchantillonnage en testing adaptatif en fonction aDoctoral thesis, Montreal: University of Montreal1 aRaîche, G uhttp://www.iacat.org/content/la-distribution-dchantillonnage-en-testing-adaptatif-en-fonction-de-deux-rgles-darrt-selon
This dissertation focuses on the development of mathematical models to represent these test assembly problems as constrained curve-fitting problems with binary variables and solution techniques for the test development. Mathematical programming techniques are used to generate parallel test forms with item characteristics based on item response theory. A binary variable is used to represent whether or not an item is present on a form. The problem of creating a test form is modeled as a network flow problem with additional constraints. In order to meet the target information and the test characteristic curves, a Lagrangian relaxation heuristic is applied to the problem. The Lagrangian approach works by multiplying the constraint by a "Lagrange multiplier" and adding it to the objective. By systematically varying the multiplier, the test form curves approach the targets. This dissertation explores modifications to Lagrangian Relaxation as it is applied to the classical paper-and-pencil exams. For the P&P exams, LR techniques are also utilized to include additional practical constraints to the network problem, which limit the item selection. An MFS is a type of a computerized adaptive test. It is a hybrid of a standard CAT and a P&P exam. The concept of an MFS will be introduced in this dissertation, as well as, the application of LR as it is applied to constructing parallel MFSs. The approach is applied to the Law School Admission Test for the assembly of the conventional P&P test as well as an experimental computerized test using MFSs. 
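The Lagrangian relaxation loop the abstract describes (multiply a constraint by a multiplier, add it to the objective, and systematically vary the multiplier) can be shown with a deliberately simplified sketch. Here only a single test-length constraint is relaxed, so the relaxed problem separates by item; the dissertation's actual models relax target-information and test-characteristic-curve constraints over a network flow formulation, and the item information values below are invented for illustration.

```python
def solve_relaxed(info, lam):
    """With the length constraint dualized, the relaxed problem separates:
    include item i (x_i = 1) exactly when its payoff info[i] exceeds the
    current price lam."""
    return [1 if v > lam else 0 for v in info]

def lagrangian_select(info, length, steps=50, step_size=0.1):
    """Subgradient search on the multiplier for the constraint sum(x) <= length."""
    lam = 0.0
    for _ in range(steps):
        x = solve_relaxed(info, lam)
        # Subgradient of the dualized constraint is (sum(x) - length):
        # raise the price when too many items are selected, lower it otherwise.
        lam = max(lam + step_size * (sum(x) - length), 0.0)
    return solve_relaxed(info, lam), lam

# Hypothetical item information values at the target ability level.
info = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
x, lam = lagrangian_select(info, length=3)
print(x, sum(x))   # binary selection vector and resulting test length
```

Varying `lam` plays the role of "systematically varying the multiplier" in the abstract: the selection curve moves toward the target as the price settles.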
(PsycINFO Database Record (c) 2005 APA )10aAnalysis10aEducational Measurement10aMathematical Modeling10aStatistical1 aKoppel, N B uhttp://www.iacat.org/content/lagrangian-relaxation-constrained-curve-fitting-binary-variables-applications-educational00421nam a2200097 4500008004100000245007600041210006900117260002000186100001500206856010200221 2000 eng d00aLearning Potential Computerised Adaptive Test (LPCAT): Technical Manual0 aLearning Potential Computerised Adaptive Test LPCAT Technical Ma aPretoria: UNISA1 aDe Beer, M uhttp://www.iacat.org/content/learning-potential-computerised-adaptive-test-lpcat-technical-manual00414nam a2200097 4500008004100000245007300041210006900114260002000183100001500203856009800218 2000 eng d00aLearning Potential Computerised Adaptive Test (LPCAT): User's Manual0 aLearning Potential Computerised Adaptive Test LPCAT Users Manual aPretoria: UNISA1 aDe Beer, M uhttp://www.iacat.org/content/learning-potential-computerised-adaptive-test-lpcat-users-manual00555nas a2200133 4500008004100000245011800041210006900159300001000228490000700238100001700245700002100262700001500283856012300298 2000 eng d00aLimiting answer review and change on computerized adaptive vocabulary tests: Psychometric and attitudinal results0 aLimiting answer review and change on computerized adaptive vocab a21-380 v371 aVispoel, W P1 aHendrickson, A B1 aBleiler, T uhttp://www.iacat.org/content/limiting-answer-review-and-change-computerized-adaptive-vocabulary-tests-psychometric-and00692nas a2200169 4500008004100000020001400041245015800055210006900213300001200282490000600294653003400300100001700334700001500351700001200366700001400378856013000392 2000 eng d a1575-910500aLos tests adaptativos informatizados en la frontera del siglo XXI: Una revisión [Computerized adaptive tests at the turn of the 21st century: A review]0 aLos tests adaptativos informatizados en la frontera del siglo XX a183-2160 v210acomputerized adaptive testing1 aHontangas, P1 aPonsoda, V1 aOlea, J1 aAbad, F J 
uhttp://www.iacat.org/content/los-tests-adaptativos-informatizados-en-la-frontera-del-siglo-xxi-una-revisi%C3%B3n-computerized00627nas a2200157 4500008004100000245011800041210006900159260002100228100001700249700001900266700001500285700001600300700001500316700001300331856012500344 1999 eng d00aLimiting answer review and change on computerized adaptive vocabulary tests: Psychometric and attitudinal results0 aLimiting answer review and change on computerized adaptive vocab aMontreal, Canada1 aVispoel, W P1 aHendrickson, A1 aBleiler, T1 aWidiatmo, H1 aShrairi, S1 aIhrig, D uhttp://www.iacat.org/content/limiting-answer-review-and-change-computerized-adaptive-vocabulary-tests-psychometric-and-000542nas a2200109 4500008004100000245011600041210006900157260004600226100001600272700001800288856012600306 1997 eng d00aLinking scores for computer-adaptive and paper-and-pencil administrations of the SAT (Research Report No 97-12)0 aLinking scores for computeradaptive and paperandpencil administr aPrinceton NJ: Educational Testing Service1 aLawrence, I1 aFeigenbaum, M uhttp://www.iacat.org/content/linking-scores-computer-adaptive-and-paper-and-pencil-administrations-sat-research-report-no00749nas a2200097 4500008004100000245025100041210007100292260013800363100001500501856013500516 1994 eng d00aLa simulation de modèle sur ordinateur en tant que méthode de recherche : le cas concret de l’étude de la distribution d’échantillonnage de l’estimateur du niveau d’habileté en testing adaptatif en fonction de deux règles d’arrêt0 aLa simulation de modèle sur ordinateur en tant que méthode de re aActes du 6e colloque de l‘Association pour la recherche au collégial. 
Montréal : Association pour la recherche au collégial, ARC1 aRaîche, G uhttp://www.iacat.org/content/la-simulation-de-mod%C3%A8le-sur-ordinateur-en-tant-que-m%C3%A9thode-de-recherche-le-cas-concret-de-l00500nas a2200097 4500008004100000245012400041210007200165100001500237700001500252856013500267 1994 eng d00aL'évaluation nationale individualisée et assistée par ordinateur [Large scale assessment: Tailored and computerized]0 aLévaluation nationale individualisée et assistée par ordinateur 1 aRaîche, G1 aBéland, A uhttp://www.iacat.org/content/l%C3%A9valuation-nationale-individualis%C3%A9e-et-assist%C3%A9e-par-ordinateur-large-scale-assessment00485nas a2200097 4500008004100000245004300041210004300084260017300127100001500300856007200315 1993 eng d00aLes tests adaptatifs en langue seconde0 aLes tests adaptatifs en langue seconde aCommunication lors de la 16e session d’étude de l’ADMÉÉ à Laval. Montréal: Association pour le développement de la mesure et de l’évaluation en éducation.1 aLaurier, M uhttp://www.iacat.org/content/les-tests-adaptatifs-en-langue-seconde00549nas a2200121 4500008004100000245014600041210006900187300001200256490000700268100001400275700001500289856012300304 1993 eng d00aLinking the standard and advanced forms of the Ravens Progressive Matrices in both the paper-and-pencil and computer-adaptive-testing formats0 aLinking the standard and advanced forms of the Ravens Progressiv a905-9250 v531 aStyles, I1 aAndrich, D uhttp://www.iacat.org/content/linking-standard-and-advanced-forms-ravens-progressive-matrices-both-paper-and-pencil-and00510nas a2200109 4500008004100000245008000041210006900121260007600190100001800266700001200284856010400296 1992 eng d00aThe Language Training Division's computer adaptive reading proficiency test0 aLanguage Training Divisions computer adaptive reading proficienc aProvo, UT: Language Training Division, Office of Training and Education1 aJanczewski, D1 aLowe, P 
uhttp://www.iacat.org/content/language-training-divisions-computer-adaptive-reading-proficiency-test00577nas a2200109 4500008004400000245016300044210007200207490001400279100001300293700001600306856014500322 1992 Frendh 00aLe testing adaptatif avec interprétation critérielle, une expérience de praticabilité du TAM pour l’évaluation sommative des apprentissages au Québec.0 aLe testing adaptatif avec interprétation critérielle une expérie0 v15-1 et 21 aAuger, R1 aSeguin, S P uhttp://www.iacat.org/content/le-testing-adaptatif-avec-interpr%C3%A9tation-crit%C3%A9rielle-une-exp%C3%A9rience-de-praticabilit%C3%A9-du-tam00405nas a2200121 4500008003900000245005800039210005800097300001000155490000700165100001500172700001300187856008300200 1985 d00aLatent structure and item sampling models for testing0 aLatent structure and item sampling models for testing a19-480 v361 aTraub, R E1 aLam, Y R uhttp://www.iacat.org/content/latent-structure-and-item-sampling-models-testing00405nas a2200097 4500008004100000245007100041210006900112100001600181700001300197856009700210 1982 eng d00aLegal and political considerations in large-scale adaptive testing0 aLegal and political considerations in largescale adaptive testin1 aWaters, B K1 aLee, G C uhttp://www.iacat.org/content/legal-and-political-considerations-large-scale-adaptive-testing00554nas a2200109 4500008004100000245011600041210006900157260006600226100001400292700001700306856012100323 1978 eng d00aA live tailored testing comparison study of the one- and three-parameter logistic models (Research Report 78-1)0 alive tailored testing comparison study of the one and threeparam aColumbia MO: University of Missouri, Department of Psychology1 aKoch, W J1 aReckase, M D uhttp://www.iacat.org/content/live-tailored-testing-comparison-study-one-and-three-parameter-logistic-models-research00580nas a2200109 4500008003900000245006500039210006200104260018500166100001500351700001600366856008800382 1977 d00aA Low-Cost Terminal Usable for Computerized 
Adaptive Testing0 aLowCost Terminal Usable for Computerized Adaptive Testing aD. J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aLamos, J P1 aWaters, B K uhttp://www.iacat.org/content/low-cost-terminal-usable-computerized-adaptive-testing00396nas a2200121 4500008004400000245005300044210005300097300000900150490000700159100001300166700001300179856008200192 1908 Frendh 00aLe développement de l’intelligence chez les enfants0 aLe développement de lintelligence chez les enfants a1-940 v141 aBinet, A1 aSimon, T uhttp://www.iacat.org/content/le-development-de-lintelligence-chez-les-enfants