TY - JOUR
T1 - The maximum priority index method for severely constrained item selection in computerized adaptive testing
JF - British Journal of Mathematical and Statistical Psychology
Y1 - 2009
A1 - Cheng, Y.
A1 - Chang, Hua-Hua
KW - Aptitude Tests/*statistics & numerical data
KW - Diagnosis, Computer-Assisted/*statistics & numerical data
KW - Educational Measurement/*statistics & numerical data
KW - Humans
KW - Mathematical Computing
KW - Models, Statistical
KW - Personality Tests/*statistics & numerical data
KW - Psychometrics/*statistics & numerical data
KW - Reproducibility of Results
KW - Software
AB - This paper introduces a new heuristic approach, the maximum priority index (MPI) method, for severely constrained item selection in computerized adaptive testing. Our simulation study shows that it is able to accommodate various non-statistical constraints simultaneously, such as content balancing, exposure control, answer key balancing, and so on. Compared with the weighted deviation modelling method, it leads to fewer constraint violations and better exposure control while maintaining the same level of measurement precision.
VL - 62
SN - 0007-1102 (Print); 0007-1102 (Linking)
N1 - Cheng, Ying; Chang, Hua-Hua. Research Support, Non-U.S. Gov't. England. The British journal of mathematical and statistical psychology. Br J Math Stat Psychol. 2009 May;62(Pt 2):369-83. Epub 2008 Jun 2.
ER -
TY - JOUR
T1 - Are we ready for computerized adaptive testing?
JF - Psychiatric Services
Y1 - 2008
A1 - Unick, G. J.
A1 - Shumway, M.
A1 - Hargreaves, W.
KW - *Attitude of Health Personnel
KW - *Diagnosis, Computer-Assisted/instrumentation
KW - Humans
KW - Mental Disorders/*diagnosis
KW - Software
VL - 59
SN - 1075-2730 (Print); 1075-2730 (Linking)
N1 - Unick, George J; Shumway, Martha; Hargreaves, William. Comment. United States. Psychiatric services (Washington, D.C.). Psychiatr Serv. 2008 Apr;59(4):369.
ER -
TY - JOUR
T1 - The NAPLEX: evolution, purpose, scope, and educational implications
JF - American Journal of Pharmaceutical Education
Y1 - 2008
A1 - Newton, D. W.
A1 - Boyle, M.
A1 - Catizone, C. A.
KW - *Educational Measurement
KW - Education, Pharmacy/*standards
KW - History, 20th Century
KW - History, 21st Century
KW - Humans
KW - Licensure, Pharmacy/history/*legislation & jurisprudence
KW - North America
KW - Pharmacists/*legislation & jurisprudence
KW - Software
AB - Since 2004, passing the North American Pharmacist Licensure Examination (NAPLEX) has been a requirement for earning initial pharmacy licensure in all 50 United States. The creation and evolution from 1952-2005 of the particular pharmacy competency testing areas and quantities of questions are described for the former paper-and-pencil National Association of Boards of Pharmacy Licensure Examination (NABPLEX) and the current candidate-specific computer adaptive NAPLEX pharmacy licensure examinations. A 40% increase in the weighting of NAPLEX Blueprint Area 2 in May 2005, compared to that in the preceding 1997-2005 Blueprint, has implications for candidates' NAPLEX performance and associated curricular content and instruction. New pharmacy graduates' scores on the NAPLEX are neither intended nor validated to serve as a criterion for assessing or judging the quality or effectiveness of pharmacy curricula and instruction. The newest cycle of NAPLEX Blueprint revision, a continual process to ensure representation of nationwide contemporary practice, began in early 2008.
It may take up to 2 years, including surveying several thousand national pharmacists, to complete.
VL - 72
SN - 1553-6467 (Electronic); 0002-9459 (Linking)
N1 - Newton, David W; Boyle, Maria; Catizone, Carmen A. Historical Article. United States. American journal of pharmaceutical education. Am J Pharm Educ. 2008 Apr 15;72(2):33.
U2 - 2384208
ER -
TY - JOUR
T1 - Predicting item exposure parameters in computerized adaptive testing
JF - British Journal of Mathematical and Statistical Psychology
Y1 - 2008
A1 - Chen, S-Y.
A1 - Doong, S. H.
KW - *Algorithms
KW - *Artificial Intelligence
KW - Aptitude Tests/*statistics & numerical data
KW - Diagnosis, Computer-Assisted/*statistics & numerical data
KW - Humans
KW - Models, Statistical
KW - Psychometrics/statistics & numerical data
KW - Reproducibility of Results
KW - Software
AB - The purpose of this study is to find a formula that describes the relationship between item exposure parameters and item parameters in computerized adaptive tests by using genetic programming (GP) - a biologically inspired artificial intelligence technique. Based on the formula, item exposure parameters for new parallel item pools can be predicted without conducting additional iterative simulations. Results show that an interesting formula between item exposure parameters and item parameters in a pool can be found by using GP. The item exposure parameters predicted based on the found formula were close to those observed from the Sympson and Hetter (1985) procedure and performed well in controlling item exposure rates. Similar results were observed for the Stocking and Lewis (1998) multinomial model for item selection and the Sympson and Hetter procedure with content balancing. The proposed GP approach has provided a knowledge-based solution for finding item exposure parameters.
VL - 61
SN - 0007-1102 (Print); 0007-1102 (Linking)
N1 - Chen, Shu-Ying; Doong, Shing-Hwang. Research Support, Non-U.S. Gov't. England. The British journal of mathematical and statistical psychology. Br J Math Stat Psychol. 2008 May;61(Pt 1):75-91.
ER -
TY - JOUR
T1 - Improving patient reported outcomes using item response theory and computerized adaptive testing
JF - Journal of Rheumatology
Y1 - 2007
A1 - Chakravarty, E. F.
A1 - Bjorner, J. B.
A1 - Fries, J. F.
KW - *Rheumatic Diseases/physiopathology/psychology
KW - Clinical Trials
KW - Data Interpretation, Statistical
KW - Disability Evaluation
KW - Health Surveys
KW - Humans
KW - International Cooperation
KW - Outcome Assessment (Health Care)/*methods
KW - Patient Participation/*methods
KW - Research Design/*trends
KW - Software
AB - OBJECTIVE: Patient reported outcomes (PRO) are considered central outcome measures for both clinical trials and observational studies in rheumatology. More sophisticated statistical models, including item response theory (IRT) and computerized adaptive testing (CAT), will enable critical evaluation and reconstruction of currently utilized PRO instruments to improve measurement precision while reducing item burden on the individual patient. METHODS: We developed a domain hierarchy encompassing the latent trait of physical function/disability from the more general to most specific. Items collected from 165 English-language instruments were evaluated by a structured process including trained raters, modified Delphi expert consensus, and then patient evaluation. Each item in the refined data bank will undergo extensive analysis using IRT to evaluate response functions and measurement precision.
CAT will allow for real-time questionnaires of potentially smaller numbers of questions tailored directly to each individual's level of physical function. RESULTS: Physical function/disability domain comprises 4 subdomains: upper extremity, trunk, lower extremity, and complex activities. Expert and patient review led to consensus favoring use of present-tense "capability" questions using a 4- or 5-item Likert response construct over past-tense "performance" items. Floor and ceiling effects, attribution of disability, and standardization of response categories were also addressed. CONCLUSION: By applying statistical techniques of IRT through use of CAT, existing PRO instruments may be improved to reduce questionnaire burden on the individual patients while increasing measurement precision that may ultimately lead to reduced sample size requirements for costly clinical trials.
VL - 34
SN - 0315-162X (Print)
N1 - Chakravarty, Eliza F; Bjorner, Jakob B; Fries, James F. Ar052158/ar/niams. Consensus Development Conference. Research Support, N.I.H., Extramural. Canada. The Journal of rheumatology. J Rheumatol. 2007 Jun;34(6):1426-31.
ER -
TY - JOUR
T1 - Computer adaptive testing improved accuracy and precision of scores over random item selection in a physical functioning item bank
JF - Journal of Clinical Epidemiology
Y1 - 2006
A1 - Haley, S. M.
A1 - Ni, P.
A1 - Hambleton, R. K.
A1 - Slavin, M. D.
A1 - Jette, A. M.
KW - *Recovery of Function
KW - Activities of Daily Living
KW - Adolescent
KW - Adult
KW - Aged
KW - Aged, 80 and over
KW - Confidence Intervals
KW - Factor Analysis, Statistical
KW - Female
KW - Humans
KW - Male
KW - Middle Aged
KW - Outcome Assessment (Health Care)/*methods
KW - Rehabilitation/*standards
KW - Reproducibility of Results
KW - Software
AB - BACKGROUND AND OBJECTIVE: Measuring physical functioning (PF) within and across postacute settings is critical for monitoring outcomes of rehabilitation; however, most current instruments lack sufficient breadth and feasibility for widespread use. Computer adaptive testing (CAT), in which item selection is tailored to the individual patient, holds promise for reducing response burden, yet maintaining measurement precision. We calibrated a PF item bank via item response theory (IRT), administered items with a post hoc CAT design, and determined whether CAT would improve accuracy and precision of score estimates over random item selection. METHODS: 1,041 adults were interviewed during postacute care rehabilitation episodes in either hospital or community settings. Responses for 124 PF items were calibrated using IRT methods to create a PF item bank. We examined the accuracy and precision of CAT-based scores compared to a random selection of items. RESULTS: CAT-based scores had higher correlations with the IRT-criterion scores, especially with short tests, and resulted in narrower confidence intervals than scores based on a random selection of items; gains, as expected, were especially large for low and high performing adults. CONCLUSION: The CAT design may have important precision and efficiency advantages for point-of-care functional assessment in rehabilitation practice settings.
VL - 59
SN - 0895-4356 (Print)
N1 - Haley, Stephen M; Ni, Pengsheng; Hambleton, Ronald K; Slavin, Mary D; Jette, Alan M. K02 hd45354-01/hd/nichd; R01 hd043568/hd/nichd. Comparative Study. Research Support, N.I.H., Extramural. Research Support, U.S. Gov't, Non-P.H.S. England. Journal of clinical epidemiology. J Clin Epidemiol. 2006 Nov;59(11):1174-82. Epub 2006 Jul 11.
ER -
TY - JOUR
T1 - Computerized adaptive measurement of depression: A simulation study
JF - BMC Psychiatry
Y1 - 2004
A1 - Gardner, W.
A1 - Shear, K.
A1 - Kelleher, K. J.
A1 - Pajer, K. A.
A1 - Mammen, O.
A1 - Buysse, D.
A1 - Frank, E.
KW - *Computer Simulation
KW - Adult
KW - Algorithms
KW - Area Under Curve
KW - Comparative Study
KW - Depressive Disorder/*diagnosis/epidemiology/psychology
KW - Diagnosis, Computer-Assisted/*methods/statistics & numerical data
KW - Factor Analysis, Statistical
KW - Female
KW - Humans
KW - Internet
KW - Male
KW - Mass Screening/methods
KW - Patient Selection
KW - Personality Inventory/*statistics & numerical data
KW - Pilot Projects
KW - Prevalence
KW - Psychiatric Status Rating Scales/*statistics & numerical data
KW - Psychometrics
KW - Research Support, Non-U.S. Gov't
KW - Research Support, U.S. Gov't, P.H.S.
KW - Severity of Illness Index
KW - Software
AB - Background: Efficient, accurate instruments for measuring depression are increasingly important in clinical practice. We developed a computerized adaptive version of the Beck Depression Inventory (BDI). We examined its efficiency and its usefulness in identifying Major Depressive Episodes (MDE) and in measuring depression severity. Methods: Subjects were 744 participants in research studies in which each subject completed both the BDI and the SCID. In addition, 285 patients completed the Hamilton Depression Rating Scale. Results: The adaptive BDI had an AUC as an indicator of a SCID diagnosis of MDE of 88%, equivalent to the full BDI. The adaptive BDI asked fewer questions than the full BDI (5.6 versus 21 items). The adaptive latent depression score correlated r = .92 with the BDI total score, and the latent depression score correlated more highly with the Hamilton (r = .74) than the BDI total score did (r = .70). Conclusions: Adaptive testing for depression may provide greatly increased efficiency without loss of accuracy in identifying MDE or in measuring depression severity.
VL - 4
ER -
TY - JOUR
T1 - Calibration of an item pool for assessing the burden of headaches: an application of item response theory to the Headache Impact Test (HIT)
JF - Quality of Life Research
Y1 - 2003
A1 - Bjorner, J. B.
A1 - Kosinski, M.
A1 - Ware, J. E., Jr.
KW - *Cost of Illness
KW - *Decision Support Techniques
KW - *Sickness Impact Profile
KW - Adolescent
KW - Adult
KW - Aged
KW - Comparative Study
KW - Disability Evaluation
KW - Factor Analysis, Statistical
KW - Headache/*psychology
KW - Health Surveys
KW - Human
KW - Longitudinal Studies
KW - Middle Aged
KW - Migraine/psychology
KW - Models, Psychological
KW - Psychometrics/*methods
KW - Quality of Life/*psychology
KW - Software
KW - Support, Non-U.S. Gov't
AB - BACKGROUND: Measurement of headache impact is important in clinical trials, case detection, and the clinical monitoring of patients. Computerized adaptive testing (CAT) of headache impact has potential advantages over traditional fixed-length tests in terms of precision, relevance, real-time quality control and flexibility. OBJECTIVE: To develop an item pool that can be used for a computerized adaptive test of headache impact. METHODS: We analyzed responses to four well-known tests of headache impact from a population-based sample of recent headache sufferers (n = 1016). We used confirmatory factor analysis for categorical data and analyses based on item response theory (IRT).
RESULTS: In factor analyses, we found very high correlations between the factors hypothesized by the original test constructors, both within and between the original questionnaires. These results suggest that a single score of headache impact is sufficient. We established a pool of 47 items which fitted the generalized partial credit IRT model. By simulating a computerized adaptive health test we showed that an adaptive test of only five items had a very high concordance with the score based on all items and that different worst-case item selection scenarios did not lead to bias. CONCLUSION: We have established a headache impact item pool that can be used in CAT of headache impact.
VL - 12
N1 - 0962-9343; Journal Article
ER -
TY - JOUR
T1 - NCLEX-RN performance: predicting success on the computerized examination
JF - Journal of Professional Nursing
Y1 - 2001
A1 - Beeman, P. B.
A1 - Waterhouse, J. K.
KW - *Education, Nursing, Baccalaureate
KW - *Educational Measurement
KW - *Licensure
KW - Adult
KW - Female
KW - Humans
KW - Male
KW - Predictive Value of Tests
KW - Software
AB - Since the adoption of the Computerized Adaptive Testing (CAT) format of the National Council Licensure Examination for Registered Nurses (NCLEX-RN), no studies have been reported in the literature on predictors of successful performance by baccalaureate nursing graduates on the licensure examination. In this study, a discriminant analysis was used to identify which of 21 variables were significant predictors of success on the CAT NCLEX-RN. The convenience sample consisted of 289 individuals who graduated from a baccalaureate nursing program between 1995 and 1998. Seven significant predictor variables were identified. The total number of C+ or lower grades earned in nursing theory courses was the best predictor, followed by grades in several individual nursing courses. More than 93 per cent of graduates were correctly classified. Ninety-four per cent of NCLEX "passes" were correctly classified, as were 92 per cent of NCLEX failures. This degree of accuracy in classifying CAT NCLEX-RN failures represents a marked improvement over results reported in previous studies of licensure examinations, and suggests the discriminant function will be helpful in identifying future students in danger of failure. J Prof Nurs 17:158-165, 2001.
VL - 17
N1 - 8755-7223; Journal Article
ER -