Various studies have shown that gratitude is essential to increasing every individual's happiness and quality of life. Unfortunately, research on gratitude has received little attention, and there is no standardized measure for it. Existing gratitude scales were developed overseas and have not been adapted to the Indonesian cultural context. Moreover, scale development is generally performed with a classical test theory approach, which has some drawbacks. This research develops a gratitude scale using a polytomous Item Response Theory (IRT) model, the Partial Credit Model (PCM).

The pilot study results showed that the gratitude scale (44 items) is reliable (α = 0.944) and valid (meeting both convergent and discriminant validity requirements). The pilot study results also showed that the gratitude scale satisfies the unidimensionality assumption.

Testing with the PCM showed that the gratitude scale fit the model. Of the 44 items, one did not fit and was eliminated. A second test of the remaining 43 items showed that they fit the model and that all items were suitable for measuring gratitude. Differential Item Functioning (DIF) analysis showed that four items exhibited response bias by gender. Thus, 39 items remain in the scale.
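The abstract does not include the authors' estimation code; as a minimal illustrative sketch of the model it names, the Partial Credit Model defines the probability of each ordered response category from the respondent's latent trait θ and item step difficulties δ_j. The function and the example step values below are hypothetical, not taken from the gratitude scale itself.

```python
import math

def pcm_probabilities(theta, deltas):
    """Category response probabilities under the Partial Credit Model (PCM).

    theta  : latent trait level of the respondent
    deltas : step difficulties [d1, ..., dm] for an item with
             m + 1 ordered response categories (0..m)
    """
    # Cumulative sums of (theta - delta_j); the empty sum for category 0 is 0.
    cumsums = [0.0]
    for d in deltas:
        cumsums.append(cumsums[-1] + (theta - d))
    exps = [math.exp(c) for c in cumsums]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 4-category item with step difficulties -1.0, 0.0, 1.0
probs = pcm_probabilities(theta=0.5, deltas=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # the four probabilities sum to 1
```

For a respondent slightly above average (θ = 0.5), the middle-to-high categories receive the most probability mass, which is the ordered-category behavior PCM-based scales rely on.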

%B IACAT 2017 Conference %I Niigata Seiryo University %C Niigata, Japan %8 08/2017 %G eng %U https://drive.google.com/open?id=1pHhO4cq2-wh24ht3nBAoXNHv7234_mjH

%0 Conference Paper %B Annual Conference of the International Association for Computerized Adaptive Testing %D 2011 %T Practitioner’s Approach to Identify Item Drift in CAT %A Huijuan Meng %A Susan Steinkamp %A Paul Jones %A Joy Matthews-Lopez %K CUSUM method %K G2 statistic %K IPA %K item drift %K item parameter drift %K Lord's chi-square statistic %K Raju's NCDIF %B Annual Conference of the International Association for Computerized Adaptive Testing %8 10/2011 %G eng

%0 Journal Article %J Acta Psychologica Sinica %D 2006 %T The comparison among item selection strategies of CAT with multiple-choice items %A Hai-qi, D. %A De-zhi, C. %A Shuliang, D. %A Taiping, D. %K CAT %K computerized adaptive testing %K graded response model %K item selection strategies %K multiple choice items %X The initial purpose of comparing item selection strategies for CAT was to increase the efficiency of tests. As studies continued, however, it was found that increasing the efficiency of item bank usage was also an important goal of comparing item selection strategies. These two goals often conflicted. The key solution was to find a strategy with which both goals could be accomplished. The item selection strategies for the graded response model in this study included: matching the average of the difficulty orders to the ability; matching the median of the difficulty orders to the ability; maximum information; a-stratified (average); and a-stratified (median). The evaluation indices used for comparison included: the bias of ability estimates relative to the true values; the standard error of ability estimates; the average number of items administered to examinees; the standard deviation of the frequency with which items were selected; and the weighted sum of these indices. Using Monte Carlo simulation, data were generated and iterated 20 times under conditions in which the item difficulty parameters followed either a normal or a uniform distribution. The results indicated that, regardless of the difficulty parameter distribution, every item selection strategy examined in this research had its strong and weak points. In the overall evaluation, under the condition that items were stratified appropriately, a-stratified (median) (ASM) had the best effect. (PsycINFO Database Record (c) 2007 APA, all rights reserved) %B Acta Psychologica Sinica %I Science Press: China %V 38 %P 778-783 %@ 0439-755X (Print) %G eng %M 2006-20552-017

%0 Journal Article %J Medical Care %D 2006 %T Overview of quantitative measurement methods. Equivalence, invariance, and differential item functioning in health applications %A Teresi, J. A. %K *Cross-Cultural Comparison %K Data Interpretation, Statistical %K Factor Analysis, Statistical %K Guidelines as Topic %K Humans %K Models, Statistical %K Psychometrics/*methods %K Statistics as Topic/*methods %K Statistics, Nonparametric %X BACKGROUND: Reviewed in this article are issues relating to the study of invariance and differential item functioning (DIF). The aim of factor analyses and DIF, in the context of invariance testing, is the examination of group differences in item response conditional on an estimate of disability. Discussed are parameters and statistics that are not invariant and cannot be compared validly in cross-cultural studies with varying distributions of disability, in contrast to those that can be compared (if the model assumptions are met) because they are produced by models such as linear and nonlinear regression. OBJECTIVES: The purpose of this overview is to provide an integrated approach to the quantitative methods used in this special issue to examine measurement equivalence.
The methods include classical test theory (CTT), factor analytic, and parametric and nonparametric approaches to DIF detection. Also included in the quantitative section is a discussion of item banking and computerized adaptive testing (CAT). METHODS: Factorial invariance and the articles discussing this topic are introduced. A brief overview of the DIF methods presented in the quantitative section of the special issue is provided, together with a discussion of ways in which DIF analyses and examination of invariance using factor models may be complementary. CONCLUSIONS: Although factor analytic and DIF detection methods share features, they provide unique information and can be viewed as complementary in informing about measurement equivalence. %B Medical Care %7 2006/10/25 %V 44 %P S39-49 %8 Nov %@ 0025-7079 (Print); 0025-7079 (Linking) %G eng %M 17060834

%0 Journal Article %J Anales de Psicología %D 2006 %T Técnicas para detectar patrones de respuesta atípicos [Aberrant patterns detection methods] %A Núñez, R. M. N. %A Pina, J. A. L. %K aberrant patterns detection %K Classical Test Theory %K generalizability theory %K Item Response Theory %K Mathematics %K methods %K person-fit %K Psychometrics %K psychometry %K Test Validity %K test validity analysis %X Aberrant pattern detection is highly useful for constructing tests and item banks with sound psychometric properties and for analyzing their validity. This review collects the most relevant and novel person-fit methods developed within each of the main areas of psychometrics: Guttman's scalogram, Classical Test Theory (CTT), Generalizability Theory (GT), Item Response Theory (IRT), Nonparametric Item Response Models (NIRM), Order-Restricted Latent Class Models (OR-LCM), and Covariance Structure Analysis (CSA). %B Anales de Psicología %V 22 %P 143-154 %@ 0212-9728 %G Spanish %M 2006-07751-018

%0 Journal Article %J Developmental Medicine and Child Neurology %D 2005 %T A computer adaptive testing approach for assessing physical functioning in children and adolescents %A Haley, S. M. %A Ni, P. %A Fragala-Pinkham, M. A. %A Skrinar, A. M. %A Corzo, D.
%K *Computer Systems %K Activities of Daily Living %K Adolescent %K Age Factors %K Child %K Child Development/*physiology %K Child, Preschool %K Computer Simulation %K Confidence Intervals %K Demography %K Female %K Glycogen Storage Disease Type II/physiopathology %K Health Status Indicators %K Humans %K Infant %K Infant, Newborn %K Male %K Motor Activity/*physiology %K Outcome Assessment (Health Care)/*methods %K Reproducibility of Results %K Self Care %K Sensitivity and Specificity %X The purpose of this article is to demonstrate: (1) the accuracy and (2) the reduction in amount of time and effort in assessing physical functioning (self-care and mobility domains) of children and adolescents using computer-adaptive testing (CAT). A CAT algorithm selects questions directly tailored to the child's ability level, based on previous responses. Using a CAT algorithm, a simulation study was used to determine the number of items necessary to approximate the score of a full-length assessment. We built simulated CATs (5-, 10-, 15-, and 20-item versions) for the self-care and mobility domains and tested their accuracy in a normative sample (n=373; 190 males, 183 females; mean age 6y 11mo [SD 4y 2mo], range 4mo to 14y 11mo) and a sample of children and adolescents with Pompe disease (n=26; 21 males, 5 females; mean age 6y 1mo [SD 3y 10mo], range 5mo to 14y 10mo). Results indicated that score estimates comparable to the full-length tests (based on computer simulations) can be achieved with a 20-item CAT version for all age ranges and for both normative and clinical samples. No more than 13 to 16% of the items in the full-length tests were needed for any one administration. These results support further consideration of CAT programs for accurate and efficient clinical assessment of physical functioning.
%B Developmental Medicine and Child Neurology %7 2005/02/15 %V 47 %P 113-120 %8 Feb %@ 0012-1622 (Print) %G eng %M 15707234

%0 Journal Article %J Acta Psychologica Sinica %D 2005 %T [Item characteristic curve equating under graded response models in IRT] %A Jun, Z. %A Dongming, O. %A Shuyuan, X. %A Haiqi, D. %A Shuqing, Q. %K graded response models %K item characteristic curve %K Item Response Theory %X In one of the largest qualification tests, the economist test, item characteristic curve equating and an anchor-test equating design under graded response models in IRT were used to guarantee comparability across years, construct an item bank, and prepare for computerized adaptive testing. These methods achieved item and ability parameter equating of five years of test data and succeeded in establishing an item bank. On this basis, cut scores from different years were compared through equating, providing an empirical basis for setting the eligibility standard of the economist test. %B Acta Psychologica Sinica %I Science Press: China %V 37 %P 832-838 %@ 0439-755X (Print) %G eng %M 2005-16031-017

%0 Journal Article %J Applied Psychological Measurement %D 2004 %T Strategies for controlling item exposure in computerized adaptive testing with the generalized partial credit model %A Davis, L. L. %K computerized adaptive testing %K generalized partial credit model %K item exposure %X Choosing a strategy for controlling item exposure has become an integral part of test development for computerized adaptive testing (CAT). This study investigated the performance of six procedures for controlling item exposure in a series of simulated CATs under the generalized partial credit model. In addition to a no-exposure-control baseline condition, the randomesque, modified-within-.10-logits, Sympson-Hetter, conditional Sympson-Hetter, a-stratified with multiple stratification, and enhanced a-stratified with multiple stratification procedures were implemented to control exposure rates.
Two variations of the randomesque and modified-within-.10-logits procedures were examined, which varied the size of the item group from which the next item to be administered was randomly selected. The results indicate that although the conditional Sympson-Hetter provides somewhat lower maximum exposure rates, the randomesque and modified-within-.10-logits procedures with the six-item group variation have great utility for controlling overlap rates and increasing pool utilization and should be given further consideration. (PsycINFO Database Record (c) 2007 APA, all rights reserved) %B Applied Psychological Measurement %I Sage Publications: US %V 28 %P 165-185 %@ 0146-6216 (Print) %G eng %M 2004-13800-002

%0 Journal Article %J Annals of Internal Medicine %D 2003 %T Ten recommendations for advancing patient-centered outcomes measurement for older persons %A McHorney, C. A. %K *Health Status Indicators %K Aged %K Geriatric Assessment/*methods %K Humans %K Patient-Centered Care/*methods %K Research Support, U.S. Gov't, Non-P.H.S. %X The past 50 years have seen great progress in the measurement of patient-based outcomes for older populations. Most of the measures now used were created under the umbrella of a set of assumptions and procedures known as classical test theory. A recent alternative for health status assessment is item response theory. Item response theory is superior to classical test theory because it can eliminate test dependency and achieve more precise measurement through computerized adaptive testing. Computerized adaptive testing reduces test administration times and allows varied and precise estimates of ability. Several key challenges must be met before computerized adaptive testing becomes a productive reality. I discuss these challenges for the health assessment of older persons in the form of 10 "Ds": things we need to deliberate, debate, decide, and do.
%B Annals of Internal Medicine %V 139 %P 403-409 %8 Sep 2 %G eng %M 12965966

%0 Journal Article %J Journal of Educational Measurement %D 2002 %T Outlier detection in high-stakes certification testing %A Meijer, R. R. %K Adaptive Testing %K computerized adaptive testing %K Educational Measurement %K Goodness of Fit %K Item Analysis (Statistical) %K Item Response Theory %K person fit %K Statistical Estimation %K Statistical Power %K Test Scores %X Discusses recent developments of person-fit analysis in computerized adaptive testing (CAT). Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in CAT. Most person-fit research in CAT is restricted to simulated data; in this study, empirical data from a certification test were used. Alternatives are discussed for generating norms so that bounds can be determined to classify an item score pattern as fitting or misfitting. Using bounds determined from a sample of a high-stakes certification test, the empirical analysis showed that different types of misfit can be distinguished. Further applications using statistical process control methods to detect misfitting item score patterns are discussed. (PsycINFO Database Record (c) 2005 APA) %B Journal of Educational Measurement %V 39 %P 219-233 %G eng
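Several of the abstracts above compare maximum-information item selection against other CAT strategies. None of them publish code, so the following is a minimal sketch under simplifying assumptions: a dichotomous 2PL model (rather than the graded or partial credit models the studies use), with hypothetical discrimination/difficulty parameters and function names of my own choosing.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))  # probability of a correct response
    return a * a * p * (1.0 - p)                  # I(theta) = a^2 * p * (1 - p)

def select_next_item(theta_hat, pool, administered):
    """Return the index of the unadministered item with maximum
    information at the current ability estimate theta_hat."""
    best = None
    for i, (a, b) in enumerate(pool):
        if i in administered:
            continue  # exposure bookkeeping: never re-administer an item
        info = item_information(theta_hat, a, b)
        if best is None or info > best[1]:
            best = (i, info)
    return best[0]

# Hypothetical item pool of (discrimination a, difficulty b) pairs
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.4), (1.0, 2.0)]
next_item = select_next_item(theta_hat=0.5, pool=pool, administered={2})
print(next_item)
```

Greedy maximum information is exactly what the exposure-control studies above try to temper: because high-discrimination items carry the most information, this rule overuses them, which is why procedures such as randomesque selection or a-stratification restrict or randomize the choice.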