Workshop 1: Computerized Multistage Adaptive Testing

Presenters: Duanli Yan, ETS; Alina von Davier, ETS; Chris Han, GMAC

Level: Intermediate

Pre-requisite: Basic understanding of item response theory and CAT.

Abstract
This workshop provides a general overview of the multistage test (MST) design and its key concepts and processes. It describes the MST design, why it is needed, and how it differs from other test designs, such as linear tests and computerized adaptive tests (CATs).

The focus of the workshop is MST theory and applications, including alternative scoring and estimation methods, classification tests, routing and scoring, linking, and test security, as well as a live demonstration of the MST software MSTGen (Han, 2013). The workshop is based on the edited volume by Yan, von Davier, and Lewis (2014). The volume is structured to take the reader through all the operational aspects of the test, from the design to the post-administration analyses. In particular, the chapters by Yan, Lewis, and von Davier; Lewis and Smith; Lee, Lewis, and von Davier; Haberman and von Davier; and Han and Kosinski form the basis for this workshop.
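To make the routing step concrete, the toy R sketch below routes an examinee from a stage-1 module to one of three stage-2 modules based on the number-correct score. The cut scores, module labels, and function name are purely illustrative and are not taken from the workshop materials.

    # Toy two-stage MST routing rule (illustrative cut scores, not from the
    # workshop): route on the stage-1 number-correct score.
    route_stage2 <- function(stage1_responses, cuts = c(3, 6)) {
      nc <- sum(stage1_responses)        # number-correct score on stage 1
      if (nc < cuts[1]) "easy"           # low scores -> easy second-stage module
      else if (nc < cuts[2]) "medium"    # middle scores -> medium module
      else "hard"                        # high scores -> hard module
    }

    route_stage2(c(1, 0, 1, 1, 0, 1, 1, 1))   # 6 correct out of 8 -> "hard"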

MSTGen (Han, 2013), a computer software tool for MST simulation studies, will be introduced by Han. MSTGen supports both the conventional MST-by-routing mode and the newer MST-by-shaping mode, and examples of both modes will be covered. The software is offered at no cost, and participants are encouraged to bring their own computers for a brief hands-on training session.

Bios: Duanli Yan is a Manager of Data Analysis and Computational Research for the Automated Scoring group in the Research & Development division at ETS. She is also an Adjunct Professor at Rutgers University. She holds a Ph.D. in Psychometrics from Fordham University. Dr. Yan has been the statistical coordinator for the EXADEP™ test and the TOEIC® Institutional programs, a Development Scientist for innovative research applications, and a Psychometrician for several operational programs. Dr. Yan was the 2011 recipient of the ETS Presidential Award. She is a co-editor of the volume Computerized Multistage Testing: Theory and Applications and a co-author of the book Bayesian Networks in Educational Assessment. Dr. Yan has been an invited symposium organizer and presenter for many conferences, including those of the National Council on Measurement in Education (NCME), the International Association for Computerized Adaptive Testing (IACAT), and the Psychometric Society (IMPS).

Alina A. von Davier is a Research Director and leader of the Center for Advanced Psychometrics at ETS. She is also an Adjunct Professor at Fordham University. She earned her Ph.D. in mathematics at the Otto von Guericke University of Magdeburg, Germany, and her M.S. in mathematics at the University of Bucharest, Romania. At ETS, von Davier is responsible for developing a team of experts and a psychometric research agenda in support of the next generation of assessments. She is also responsible for fostering research relationships between ETS and the psychometric field, nationally and internationally. She edited a volume on test equating, Statistical Models for Test Equating, Scaling, and Linking, which was selected as the 2013 winner of the AERA Division D Significant Contribution to Educational Measurement and Research Methodology Award. She is a co-author of a book on the kernel method of test equating and was a guest co-editor for a special issue on the population invariance of linking functions for Applied Psychological Measurement. She authored a book on testing causal hypotheses and numerous papers published in psychometric journals. Most recently, von Davier co-edited a volume on multistage testing. Prior to joining ETS, she worked in Germany at the Universities of Trier, Magdeburg, Kiel, and Jena, and at ZUMA in Mannheim, and in Romania at the Institute of Psychology of the Romanian Academy.

Kyung (Chris) T. Han is a Senior Psychometrician and Director at the Graduate Management Admission Council (GMAC). Han received his doctorate in Research and Evaluation Methods from the University of Massachusetts Amherst. He received the Alicia Cascallar Award for an Outstanding Paper by an Early Career Scholar in 2012 and the Jason Millman Promising Measurement Scholar Award in 2013 from the National Council on Measurement in Education (NCME). He has presented and published numerous papers and book chapters on a variety of topics, from item response theory, test validity, and test equating to adaptive testing. He also has developed several psychometric software programs, including WinGen, IRTEQ, MSTGen, and SimulCAT, which are widely used in the measurement field.


Workshop 2: CAT Simulations: How and Why to Perform Them?

Presenters: Angela Verschoor, Cito; Theo Eggen, Cito

Level: Intermediate

Pre-requisite: Basic understanding of item response theory and CAT.

Abstract

In this workshop, the goals and usefulness of simulations for constructing CATs will be discussed. The measurement characteristics of a CAT can be studied, and tuned, before the test is published: simulation studies based on the available IRT-calibrated item bank and the proposed target population show how well proposed selection algorithms and constraints perform. Customized software will be demonstrated and distributed, and participants will practice using it on several examples. Participants are invited to bring their own laptops (Windows®) for the hands-on practice.
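As a flavor of what such a simulation study involves, the self-contained R sketch below simulates a fixed-length CAT under a 2PL model with a hypothetical item bank and summarizes how well true abilities are recovered. All item parameters, the test length, and the EAP scoring choice are illustrative assumptions; they are not taken from the workshop software.

    # Illustrative CAT simulation study (hypothetical 2PL bank; not the
    # presenters' software). Recovery of theta is summarized by bias and RMSE.
    set.seed(1)
    bank <- data.frame(a = rlnorm(200, 0, 0.3),   # discrimination parameters
                       b = rnorm(200))            # difficulty parameters
    p2pl <- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))

    simulate_cat <- function(theta_true, bank, test_len = 20) {
      grid <- seq(-4, 4, length.out = 121)        # quadrature grid for EAP
      prior <- dnorm(grid)
      used <- integer(0); resp <- integer(0); theta_hat <- 0
      for (step in seq_len(test_len)) {
        p_hat <- p2pl(theta_hat, bank$a, bank$b)
        info <- bank$a^2 * p_hat * (1 - p_hat)    # Fisher information at theta_hat
        info[used] <- -Inf                        # do not reuse items
        j <- which.max(info)                      # maximum-information selection
        used <- c(used, j)
        resp <- c(resp, rbinom(1, 1, p2pl(theta_true, bank$a[j], bank$b[j])))
        lik <- sapply(grid, function(t) {         # EAP update of theta
          p <- p2pl(t, bank$a[used], bank$b[used])
          prod(p^resp * (1 - p)^(1 - resp))
        })
        theta_hat <- sum(grid * lik * prior) / sum(lik * prior)
      }
      theta_hat
    }

    # run simulees drawn from the target population and summarize recovery
    theta_true <- rnorm(500)
    theta_hat  <- sapply(theta_true, simulate_cat, bank = bank)
    c(bias = mean(theta_hat - theta_true),
      rmse = sqrt(mean((theta_hat - theta_true)^2)))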

Bios: Theo J.H.M. Eggen is a Senior Research Scientist at the Psychometric Research Center of Cito and a full professor of psychometrics at the University of Twente in the Netherlands. Consultancy and research on educational and psychometric issues in test development are his main activities. His specializations are item response theory, the quality of tests, (inter)national assessment, and computerized adaptive testing. He has extensive experience as a consultant in educational measurement at Cito, at the university, and internationally. He is the author of research articles and textbook chapters, and he is the scientific director of the Research Center for Examinations and Certification (RCEC).

Angela Verschoor is a Senior Researcher at Cito, the Netherlands. With a background in discrete optimization, her interests are the development and application of automated test assembly (ATA), optimal design, and computerized adaptive testing (CAT). She has been responsible for the design of pretests for large-scale projects such as the Final Primary Education Test in the Netherlands. Other recent projects include the introduction of ATA and CAT in, among others, the Netherlands, Georgia, Russia, Kazakhstan, the Philippines, and Switzerland.

Workshop 3: Introduction to Computerized Adaptive Testing

Presenter: Nathan Thompson, ASC

Level: Beginner

Pre-requisite: Basic understanding of item response theory.

Abstract:

This workshop provides an overview of the primary components and algorithms involved in CAT: developing an item bank, calibrating it with item response theory, and choosing the starting rule, item selection rule, scoring method, and termination criterion. It also presents a five-step process for evaluating the feasibility of CAT and developing a real CAT assessment, with a focus on validity documentation. The workshop is intended for researchers who are familiar with classical psychometrics and educational/psychological assessment but are new to CAT.
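To show how those components fit together, the R sketch below walks one hypothetical examinee through a variable-length CAT, with comments marking the starting rule, the item selection rule, the scoring method, and the termination criterion. The 2PL bank, the SE threshold of 0.30, and the 40-item cap are illustrative assumptions, not recommendations from the workshop.

    # Anatomy of a variable-length CAT for one examinee (hypothetical 2PL bank).
    set.seed(3)
    bank <- data.frame(a = rlnorm(300, 0, 0.3), b = rnorm(300))
    p2pl <- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))
    grid <- seq(-4, 4, length.out = 121)
    prior <- dnorm(grid)
    theta_true <- 0.5                 # simulated examinee's true ability

    theta_hat <- 0                    # starting rule: begin at the prior mean
    se <- Inf
    used <- integer(0); resp <- integer(0)
    while (se > 0.30 && length(used) < 40) {  # termination criterion: stop at
                                              # SE <= 0.30 or 40 items, whichever first
      p_hat <- p2pl(theta_hat, bank$a, bank$b)
      info <- bank$a^2 * p_hat * (1 - p_hat)
      info[used] <- -Inf
      j <- which.max(info)            # item selection rule: maximum information
      used <- c(used, j)
      resp <- c(resp, rbinom(1, 1, p2pl(theta_true, bank$a[j], bank$b[j])))
      post <- prior * sapply(grid, function(t) {  # scoring method: EAP
        p <- p2pl(t, bank$a[used], bank$b[used])
        prod(p^resp * (1 - p)^(1 - resp))
      })
      theta_hat <- sum(grid * post) / sum(post)
      se <- sqrt(sum((grid - theta_hat)^2 * post) / sum(post))  # posterior SD
    }
    c(theta_hat = theta_hat, se = se, items_used = length(used))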

Bios: Nathan Thompson is the Vice President of Assessment Systems Corporation (ASC), a leading provider of technology and psychometric solutions to the testing and assessment industry. He oversees consulting and business development activities at ASC, but is primarily interested in the development of software tools that make test development more efficient and defensible. He has led the development of ASC’s CAT platform, FastTest, and developed a number of CAT assessments on that system. Dr. Thompson received a Ph.D. in Psychometric Methods from the University of Minnesota, with a minor in Industrial/Organizational Psychology.

Workshop 4: Building and Delivering Online CAT Using Open-Source Concerto Platform

Presenter: Michal Kosinski, Stanford University

Level: Beginner

Pre-requisite: Basic understanding of item response theory is assumed. If you are new to R, we strongly recommend reading and trying the examples in the first 10 chapters of the official introduction to R (http://cran.r-project.org/doc/manuals/R-intro.html). This is not a prerequisite, but it takes only about two hours, and you will gain an extremely useful skill that extends well beyond developing online CATs.

Bring your laptops:

Laptops will not be supplied, so please bring your own and make sure it is properly configured for the conference’s Internet connection. BEFORE the workshop, please download and install R (http://cran.rstudio.com/) and RStudio (http://www.rstudio.com/). We will be available 15 minutes before the workshop to help you with the installation if necessary.

Abstract:

During this hands-on workshop, participants will learn how to build and deliver an online computerized adaptive test using Concerto v4, an open-source R-based adaptive testing platform. We will start with an introduction to Concerto, build HTML-based item templates, import item content and parameters, and combine it all into a fully functional online test.

Bio: Michal Kosinski, one of Concerto’s authors, is the Deputy Director of The Psychometrics Centre at the University of Cambridge and the Leader of its e-Psychometrics Unit. He is also a Research Consultant at Microsoft® Research. He combines a solid psychological background with extensive skills in machine learning, data mining, and programming. His current research focuses on the digital environment, encompassing the relationship between digital footprints and psychological traits, crowdsourcing platforms, auctioning platforms, and online psychometrics.