WORKSHOP 1: Introduction to developing a CAT
Prerequisite(s): A basic understanding of item response theory.
Abstract: New to CAT? This workshop is intended for researchers who are familiar with classical psychometrics and educational/psychological assessment and are interested in leveraging the benefits of item response theory (IRT) and adaptive testing. The goal is to provide the theoretical and practical background practitioners need to understand the advantages of CAT, begin developing a CAT program, and find deeper resources.
The first portion of the workshop will focus on item response theory, which serves as the backbone of the vast majority of CATs. We will discuss the drawbacks of classical test theory, the development of IRT, and comparisons among IRT models. We will also present information on how to link multiple test forms, evaluate model fit, and use IRT calibration software. The second portion will present the components and algorithms of a CAT: development of an item bank, pilot testing and calibration, the starting rule, the item selection rule, the scoring method, and the termination criterion (see the sketch below). Advanced, optional algorithms such as item exposure constraints and content balancing will also be discussed. A follow-up discussion will provide a five-step process for evaluating the feasibility of CAT and developing a real CAT assessment, with a focus on validity documentation, as well as a conversation on practical issues such as item bank management and CAT maintenance.
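To make those components concrete, here is a minimal illustrative sketch (not workshop material) of the core CAT loop in Python, assuming a hypothetical 2PL item bank: a fixed starting rule at the prior mean, maximum-information item selection, EAP scoring, and a standard-error termination criterion. All parameter values and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2PL item bank: discriminations (a) and difficulties (b).
a = rng.uniform(0.8, 2.0, size=200)
b = rng.normal(0.0, 1.0, size=200)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def eap(responses, administered, grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate and posterior SD under a N(0, 1) prior."""
    prior = np.exp(-0.5 * grid**2)       # unnormalized; normalized below
    like = np.ones_like(grid)
    for u, j in zip(responses, administered):
        p = p_correct(grid, a[j], b[j])
        like *= p if u == 1 else (1.0 - p)
    post = prior * like
    post /= post.sum()
    theta_hat = np.sum(grid * post)
    se = np.sqrt(np.sum((grid - theta_hat) ** 2 * post))
    return theta_hat, se

def run_cat(true_theta, se_target=0.30, max_items=30):
    administered, responses = [], []
    theta_hat, se = 0.0, 1.0             # starting rule: begin at the prior mean
    while se > se_target and len(administered) < max_items:
        info = item_information(theta_hat, a, b)
        info[administered] = -np.inf     # never re-administer an item
        j = int(np.argmax(info))         # maximum-information selection
        u = int(rng.random() < p_correct(true_theta, a[j], b[j]))
        administered.append(j)
        responses.append(u)
        theta_hat, se = eap(responses, administered)
    return theta_hat, se, len(administered)

print(run_cat(true_theta=1.2))
```

A production CAT would layer the exposure controls and content balancing mentioned above onto this loop; the sketch shows only the core cycle of select, administer, and re-score.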
Bio: John Barnard is the founder and Executive Director of EPEC Pty Ltd, a private company in Melbourne, Australia, that specialises in psychometrics and online assessment. He has extensive experience in assessment, having pioneered the implementation of IRT in South Africa and published CATs for student selection in the 1980s before migrating to Australia in 1996, where he has been active in numerous national and international projects.
John holds three doctorates and a dual appointment as professor. He is a full member of a number of professional organisations, most recently as a founding member of IACAT, where he was elected Vice President in 2014 and became President in 2015. John is also a member of the International Assessments Joint National Advisory Committee (IAJNAC), a consulting editor of JCAT, and a member of the International Editorial Board of the SA Journal of Science and Technology. His most recent research, in online diagnostic testing, is based on a new measurement paradigm, Option Probability Theory (OPT), which he has been developing over the past decade.
Bio: Nathan Thompson is the Vice President of Client Services and Psychometrics at Assessment Systems Corporation (ASC), a leading provider of technology and psychometric solutions to the testing and assessment industry. He has extensive experience in psychometrics and test development, having worked in such roles at a certifying agency and two testing service providers, as well as in management of the business of testing. He oversees consulting and business development activities at ASC but is primarily interested in building software tools that make test development more efficient and defensible, having spearheaded the development of software such as Iteman, Xcalibre, and FastTest.
He is a founding member of and Membership Director for the International Association for Computerized Adaptive Testing (IACAT) and serves on its annual conference committee. Dr. Thompson received a Ph.D. in Psychometric Methods from the University of Minnesota, with a minor in Industrial/Organizational Psychology. He also holds a B.A. from Luther College in Decorah, IA, with a triple major in Latin, Mathematics, and Psychology.
WORKSHOP 2: CAT Simulations: How and Why?
WORKSHOP 3: The Shadow-Test Approach to Adaptive Testing
Bio: Michelle Barrett is the Director of Assessment Technology at Pacific Metrics Corporation. In this role, she leads teams of psychometricians and software engineers who design and develop leading assessment technology solutions, including test delivery, computerized adaptive testing, and optimal test assembly. Previously, she worked in a similar role at CTB/McGraw-Hill, where her team's software performed psychometric analyses for multiple large-scale assessments and served adaptive tests to students through summative and formative assessment products. She has also worked as a Senior Consultant in the assessment division at the Colorado Department of Education. Dr. Barrett's research interests include adaptive testing and response-model parameter linking. She is also interested in exploring new and modified software development practices that speed the deployment of psychometric innovations into scalable, production-level software. She holds a Bachelor's degree from Stanford University, a Master's degree from the Harvard Graduate School of Education, and a Graduate Certificate in large-scale assessment from the University of Maryland, and she received her PhD in research methods, data analysis, and measurement from the University of Twente, the Netherlands.
Bio: Wim van der Linden is Distinguished Scientist and Director of Research and Innovation at Pacific Metrics Corporation. He is also Professor Emeritus of Measurement and Data Analysis at the University of Twente, the Netherlands. He has published widely on topics such as item response theory, adaptive testing, optimal test assembly, parameter linking, observed-score equating, and response-time modeling and its applications. He authored Linear Models for Optimal Test Design (2005) and edited (with C. A. W. Glas) Elements of Adaptive Testing (2010) and the three-volume Handbook of Item Response Theory (2016). Dr. van der Linden is a past president of the Psychometric Society and NCME; a recipient of the ATP, NCME, and Psychometric Society career achievement awards and of the AERA E. F. Lindquist Award; and the holder of an honorary doctorate from the University of Umeå, Sweden.
WORKSHOP 4: Multivariate CAT
Abstract: Many constructs measured by psychological tests can be conceptualized in terms of multi-unidimensional structures (such as personality traits or vocational interests) or hierarchical structures (such as cognitive ability). Advanced measurement theory, such as multidimensional item response theory (MIRT), is evolving rapidly to meet the needs of such complex testing. Multidimensional CAT (MCAT), by coupling the strengths of MIRT and adaptive testing, provides a promising way to measure psychological constructs with greater precision and reduced test length. The purpose of this workshop is, first, to introduce the key building blocks of an MCAT, including item selection methods, intermediate scoring methods, stopping rules, and online calibration methods (one common selection rule is sketched below). Second, we aim to exemplify applications of MCATs in the education and health domains and to discuss the practical challenges.
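As a concrete taste of MCAT item selection, below is a hypothetical Python sketch of one widely used rule, D-optimality: choose the item that maximizes the determinant of the accumulated Fisher information matrix at the current ability estimate, here under a multidimensional 2PL (M2PL) model. All parameter values and function names are illustrative, not part of the workshop.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2                                      # two latent dimensions
A = rng.uniform(0.5, 1.5, size=(100, D))   # item discrimination vectors
d = rng.normal(0.0, 1.0, size=100)         # item intercepts (M2PL)

def prob(theta, a_vec, d_j):
    """M2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(a_vec @ theta + d_j)))

def item_info_matrix(theta, a_vec, d_j):
    """D x D Fisher information matrix contributed by one M2PL item."""
    p = prob(theta, a_vec, d_j)
    return p * (1.0 - p) * np.outer(a_vec, a_vec)

def select_d_optimal(theta_hat, accumulated_info, available):
    """Pick the available item maximizing det(I_accumulated + I_item)."""
    best_j, best_det = None, -np.inf
    for j in available:
        cand = accumulated_info + item_info_matrix(theta_hat, A[j], d[j])
        det = np.linalg.det(cand)
        if det > best_det:
            best_j, best_det = j, det
    return best_j

# Example: pick the next item at an interim estimate theta_hat = (0.3, -0.5).
theta_hat = np.array([0.3, -0.5])
acc = np.eye(D) * 0.1     # small ridge standing in for prior information
print(select_d_optimal(theta_hat, acc, list(range(100))))
```

Maximizing the determinant shrinks the volume of the confidence ellipsoid around the multidimensional ability estimate, which is why D-optimality is a natural multidimensional analogue of maximum-information selection in unidimensional CAT.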
Bio: Chun Wang completed her Master of Science and PhD at the University of Illinois and is currently an Assistant Professor at the University of Minnesota. To date, she has received 18 first-class awards from the Psychometric Society, IACAT, NCME, universities, and other associations; published 28 articles in journals such as Psychometrika, Applied Psychological Measurement, the Journal of Educational Measurement, and the International Journal of Testing; and contributed book chapters and a long list of invited presentations at prominent conferences. She has also secured grants, reviews for journals, serves on an editorial board, and is affiliated with a number of professional organizations.