EBookClubs

Read Books & Download eBooks Full Online

Book Variable-Length Computerized Adaptive Testing: Adaptation of the A-Stratified Strategy in Item Selection with Content Balancing

Download or read book Variable-Length Computerized Adaptive Testing: Adaptation of the A-Stratified Strategy in Item Selection with Content Balancing written by Yan Huo. This book was released in 2010. Available in PDF, EPUB and Kindle. Book excerpt: Variable-length computerized adaptive testing (CAT) can provide examinees with tailored test lengths. With the fixed standard error of measurement (SEM) termination rule, variable-length CAT can achieve a predetermined level of measurement precision with relatively shorter tests than fixed-length CAT. To explore the application of variable-length CAT, this dissertation proposes four variable-length item selection methods adapted from the a-stratified strategy (Chang & Ying, 1999). These methods are 1) the circularly increasing a-stratified method (STR-Ca), 2) the circularly decreasing a-stratified method (STR-Cd), 3) the random a-stratified method (STR-R), and 4) the two-stage a-stratified variable-length method (STR+R). The general strategy of these four methods allows test items to be selected in a mixed-strata ordering from all strata partitioned by different levels of the discrimination parameter. This flexibility can overcome the potential problem of unbalanced item usage across strata caused by previous attempts to apply the original a-stratified method to variable-length CAT. Study 1 examines the STR-Ca, STR-Cd, and STR-R methods in fixed-length CAT, and the results show that their performance is comparable to that of the original a-stratified method on various criterion measures such as bias, MSE, efficiency, and item exposure rates. Study 2 explores the four item selection methods in variable-length simulations, and the results indicate that they achieve good ability estimation while maintaining balanced item usage. To extend these four variable-length item selection methods to a more realistic testing situation with content balancing constraints, Study 3 proposes two two-phase content balancing control methods, the variable-length modified multinomial model (MMM) method and the content-weighted item selection index method; both can be naturally incorporated into the four adapted a-stratified methods to realize variable-length CAT with content control. Lastly, Study 4 explores decision-making tools for choosing among several variable-length CAT designs. Two quantitative indices, the cost-effective ratio and the variable-fixed-fitness index, are developed, and their applications are demonstrated with hypothetical examples. Together, these findings will advance the research and understanding of variable-length CAT and facilitate its application and adoption in real-world testing.
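The abstract combines two mechanics that are easy to miss in prose: items are drawn stratum by stratum from an a-stratified pool, and the test stops once the ability estimate reaches a fixed SEM target. The sketch below illustrates that combination under a 2PL model; the item bank, the circular low-to-high stratum ordering, the b-matching rule within a stratum, and all thresholds are illustrative assumptions rather than the dissertation's exact STR-Ca/STR-Cd/STR-R/STR+R specifications.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, N_STRATA, SEM_TARGET, MAX_LEN = 300, 4, 0.30, 40

# Illustrative 2PL item bank: a = discrimination, b = difficulty.
a = rng.lognormal(mean=0.0, sigma=0.3, size=N_ITEMS)
b = rng.normal(0.0, 1.0, size=N_ITEMS)
# Partition the pool into strata from low to high discrimination.
strata = np.array_split(np.argsort(a), N_STRATA)

def p2pl(theta, a_i, b_i):
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def eap(items, responses, grid=np.linspace(-4, 4, 81)):
    """EAP estimate of theta and its posterior SD given the responses so far."""
    log_post = -0.5 * grid**2                              # standard normal prior
    for j, u in zip(items, responses):
        p = p2pl(grid, a[j], b[j])
        log_post += u * np.log(p) + (1 - u) * np.log(1 - p)
    w = np.exp(log_post - log_post.max()); w /= w.sum()
    mean = float(w @ grid)
    return mean, float(np.sqrt(w @ (grid - mean) ** 2))

def simulate_cat(true_theta):
    items, responses = [], []
    theta_hat, sem = 0.0, np.inf
    while len(items) < MAX_LEN and sem > SEM_TARGET:       # fixed-SEM termination rule
        # Cycle through strata from low to high a (one guess at a "circularly
        # increasing" mixed-strata ordering; not the dissertation's exact rule).
        stratum = strata[len(items) % N_STRATA]
        cand = [j for j in stratum if j not in items] or \
               [j for j in range(N_ITEMS) if j not in items]
        j = min(cand, key=lambda k: abs(b[k] - theta_hat))   # b-matching within the stratum
        u = int(rng.random() < p2pl(true_theta, a[j], b[j])) # simulated response
        items.append(j); responses.append(u)
        theta_hat, sem = eap(items, responses)
    return theta_hat, sem, len(items)

print(simulate_cat(true_theta=0.5))   # (theta estimate, SEM <= 0.30, realized test length)
```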

Book Computerized Adaptive Testing

Download or read book Computerized Adaptive Testing written by David J. Weiss and published by Guilford Publications. This book was released on 2024-04-29 with a total of 388 pages. Available in PDF, EPUB and Kindle. Book excerpt: Used worldwide in assessment and professional certification contexts, computerized adaptive testing (CAT) offers a powerful means to measure individual differences or make classifications. This authoritative work from CAT pioneer David J. Weiss and Alper Şahin provides a complete how-to guide for planning and implementing an effective CAT to create a test unique to each person in real time. The book reviews the history of CAT and the basics of item response theory used in CAT. It walks the reader through developing an item bank, pretesting and linking items, selecting required CAT options, and using simulations to design a CAT. Available software for CAT delivery is described, including links to free and commercial options. Engaging multidisciplinary examples illustrate applications of CAT for measuring ability, achievement, proficiency, personality, attitudes, perceptions, patients’ reports of their symptoms, and academic or clinical progress.

Book Extension of the Item Pocket Method Allowing for Response Review and Revision to a Computerized Adaptive Test Using the Generalized Partial Credit Model

Download or read book Extension of the Item Pocket Method Allowing for Response Review and Revision to a Computerized Adaptive Test Using the Generalized Partial Credit Model written by Mishan G. B. Jensen. This book was released in 2017 with a total of 346 pages. Available in PDF, EPUB and Kindle. Book excerpt: The use of computerized adaptive testing (CAT) has increased in the last few decades, due in part to the increased availability of personal computers and in part to the benefits CATs offer. CATs provide increased measurement precision of ability estimates while decreasing the demand on examinees through shorter tests. This is accomplished by tailoring the test to each examinee and selecting items that are neither too difficult nor too easy based on the examinee’s interim ability estimate and responses to previous items. These benefits come at the cost of the flexibility to move through the test as an examinee would with a paper-and-pencil (P&P) test. The algorithms used in CATs for item selection and ability estimation require restrictions on response review and revision; however, a large portion of examinees desire options for reviewing and revising responses (Vispoel, Clough, Bleiler, Hendrickson, and Ihrig, 2002). Previous research on response review and revision in CATs examined only limited options, typically available after all items had been administered. The Item Pocket (IP) method (Han, 2013) allows response review and revision during the test, relaxing these restrictions while maintaining an acceptable level of measurement precision. This is achieved by creating an item pocket: items placed in the pocket are excluded from the interim ability estimation and item selection procedures. The initial simulation study was conducted by Han (2013), who investigated the IP method with a dichotomously scored fixed-length test; the findings indicated that the IP method does not substantially decrease measurement precision and that bias in the ability estimates was within acceptable ranges for operational tests. The present simulation study extended the IP method to a CAT with polytomously scored items under the Generalized Partial Credit Model, with exposure control and content balancing. The IP method was implemented in tests with three IP sizes (2, 3, and 4), two termination criteria (fixed and variable), two test lengths (15 and 20), and two item completion conditions (forced to answer and ignored) for items remaining in the IP at the end of the test. Additionally, four traditional CAT conditions, without the IP method, were included in the design. Results showed that the longer, 20-item IP conditions using the forced-answer method had higher measurement precision, with higher mean correlations between known and estimated theta and lower mean bias and RMSE, and that measurement precision increased as IP size increased. The two item completion conditions (forced to answer and ignored) resulted in similar measurement precision, and the variable-length IP conditions produced measurement precision comparable to that of the corresponding fixed-length IP conditions. The implications of the findings and the limitations, with suggestions for future research, are also discussed.
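To make the pocket mechanic concrete, here is a minimal sketch of the idea: deferred items sit in a pocket, contribute nothing to the interim ability estimate, and are either answered at the end ("forced to answer") or left unscored ("ignored"). It uses a dichotomous 2PL model rather than the Generalized Partial Credit Model for brevity, and the pool, pocket size, and the probability that a simulee pockets an item are illustrative assumptions, not Han's (2013) or Jensen's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID = np.linspace(-4, 4, 81)
a = rng.lognormal(0.0, 0.3, 200); b = rng.normal(0.0, 1.0, 200)   # illustrative 2PL pool
POCKET_SIZE, TEST_LEN, P_POCKET = 3, 20, 0.15

def prob(theta, j):
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

def eap(answered):
    """Interim EAP estimate from answered items only; pocketed items are excluded."""
    log_post = -0.5 * GRID**2
    for j, u in answered:
        p = prob(GRID, j)
        log_post += u * np.log(p) + (1 - u) * np.log(1 - p)
    w = np.exp(log_post - log_post.max()); w /= w.sum()
    return float(w @ GRID)

def run(true_theta, force_answer=True):
    answered, pocket, seen = [], [], set()
    theta_hat = 0.0
    while len(answered) + len(pocket) < TEST_LEN:
        # Standard maximum-information selection among items not yet seen or pocketed.
        cand = [j for j in range(len(a)) if j not in seen]
        p = prob(theta_hat, np.array(cand))
        j = cand[int(np.argmax(a[cand] ** 2 * p * (1 - p)))]
        seen.add(j)
        if len(pocket) < POCKET_SIZE and rng.random() < P_POCKET:
            pocket.append(j)           # deferred: no response, kept out of estimation and selection
        else:
            answered.append((j, int(rng.random() < prob(true_theta, j))))
            theta_hat = eap(answered)
    if force_answer:                    # "forced to answer": pocketed items answered at the end
        answered += [(j, int(rng.random() < prob(true_theta, j))) for j in pocket]
    return eap(answered), len(answered)  # "ignored": pocketed items simply stay unscored

print(run(0.3, force_answer=True), run(0.3, force_answer=False))
```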

Book Computerized Adaptive Testing

Download or read book Computerized Adaptive Testing written by Howard Wainer and published by Routledge. This book was released on 2000-04-01 with a total of 360 pages. Available in PDF, EPUB and Kindle. Book excerpt: This celebrated primer presents an introduction to all of the key ingredients in understanding computerized adaptive testing: technology, test development, statistics, and mental test theory. Based on years of research, this accessible book educates the novice and serves as a compendium of state-of-the-art information for professionals interested in computerized testing in the areas of education, psychology, and other related social sciences. A hypothetical test taken as a prelude to employment is used as a common example throughout to highlight this book's most important features and problems. Changes in the new edition include: a completely rewritten chapter 2 on the system considerations needed for modern computerized adaptive testing; a revised chapter 4 that incorporates the latest methodology for online calibration and the modeling of testlets; and a new chapter 10 with helpful information on how test items are really selected, usage patterns, how usage patterns influence the number of new items required, and tools for managing item pools.

Book Computerized Adaptive Testing: Theory and Practice

Download or read book Computerized Adaptive Testing: Theory and Practice written by Wim J. van der Linden and published by Springer Science & Business Media. This book was released on 2000-07-31 with a total of 327 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book offers a comprehensive introduction to the latest developments in the theory and practice of CAT. It can be used both as a basic reference and as a valuable resource on test theory. It covers such topics as item selection and ability estimation, item pool development and maintenance, item calibration and model fit, and testlet-based adaptive testing, as well as the operational aspects of existing large-scale CAT programs.

Book An Investigation of Stratified and Maximum Information Item Selection Procedures in Computerized Adaptive Testing

Download or read book An Investigation of Stratified and Maximum Information Item Selection Procedures in Computerized Adaptive Testing written by Hui Deng. This book was released in 2002 with a total of 310 pages. Available in PDF, EPUB and Kindle.

Book Adaptive Inventories

Download or read book Adaptive Inventories written by Jacob M. Montgomery and published by Cambridge University Press. This book was released on 2022-07-28 with a total of 151 pages. Available in PDF, EPUB and Kindle. Book excerpt: The goal of this Element is to provide a detailed introduction to adaptive inventories, an approach to making surveys adjust to respondents' answers dynamically. This method can help survey researchers measure important latent traits or attitudes accurately while minimizing the number of questions respondents must answer. The Element provides both a theoretical overview of the method and a suite of tools and tricks for integrating it into the normal survey process. It also provides practical advice and direction on how to calibrate, evaluate, and field adaptive batteries, using example batteries that measure a variety of latent traits of interest to survey researchers across the social sciences.

Book Using Response-Time Constraints in Item Selection to Control for Differential Speededness in Computerized Adaptive Testing

Download or read book Using Response-Time Constraints in Item Selection to Control for Differential Speededness in Computerized Adaptive Testing written by Wim J. van der Linden. This book was released in 2003 with a total of 20 pages. Available in PDF, EPUB and Kindle.

Book New Horizons in Testing

Download or read book New Horizons in Testing written by David J. Weiss and published by Academic Press. This book was released on 1983-10-28 with a total of 374 pages. Available in PDF, EPUB and Kindle. Book excerpt: New Horizons in Testing: Latent Trait Test Theory and Computerized Adaptive Testing provides an in-depth analysis of psychological measurement as advanced by latent trait test theory (item response theory) and computerized adaptive testing. The book is organized into five parts. The first part addresses basic problems in estimating the parameters of the item response theory models that constitute a class of latent trait test theory models. The second part discusses the implications of item response theory for measuring individuals with more than simply a trait level (e.g., ability) score. Part III describes the application of item response theory models to specific applied problems, including the problem of equating tests or linking items into a pool, a latent trait model for timed tests, and the problem of measuring growth using scores derived from item response theory models. Part IV is concerned with the application of item response theory to computerized adaptive testing. Finally, Part V discusses two special models beyond the standard models used in the rest of the book: the constant information model, a simplification of the general latent trait models, and an extension of latent trait models to the problem of measuring change. Psychometricians, psychologists, and psychiatrists will find the book useful.

Book Statistical Aspects of Computerized Adaptive Testing

Download or read book Statistical Aspects of Computerized Adaptive Testing written by Haskell Sie. This book was released in 2014. Available in PDF, EPUB and Kindle. Book excerpt: In the past several decades, computerized adaptive testing (CAT) has received much attention in educational and psychological research due to its efficiency in achieving the goal of assessment, whether that goal is to estimate the latent trait of test takers with high precision or to accurately classify them into one of several latent classes. In the latter case, the adaptive nature of CAT is used in educational testing to make inferences about the location of examinees' latent ability relative to one or more pre-specified cut-off points along the ability continuum. When there is only one cut-off point and two proficiency groups, this type of CAT is commonly referred to as Adaptive Mastery Testing (AMT). A well-known approach in AMT is to combine the Sequential Probability Ratio Test (SPRT) stopping rule with item selection that maximizes Fisher information at the mastery threshold. In the first part of this dissertation, a new approach is proposed in which a time limit is defined for the test and examinees' response times are considered in both item selection and test termination. Item selection is performed by maximizing Fisher information per time unit rather than Fisher information itself. The test is terminated once the SPRT makes a classification decision, the time limit is exceeded, or no remaining item has a high enough probability of being answered before the time limit. In a simulation study, the new procedure showed a substantial reduction in average testing time while slightly improving classification accuracy compared to the original method; it also reduced the percentage of examinees who exceeded the time limit. Another well-known stopping rule in AMT is to terminate the assessment once the examinee's two-sided ability confidence interval lies entirely above or below the cut score. The second part of this dissertation proposes new procedures that seek to improve this variable-length stopping rule by coupling it with curtailment and stochastic curtailment. Under the new procedures, test termination can occur earlier if the probability is high enough that the current classification decision would remain the same should the test continue. Computation of this probability relies on the normality of an asymptotically equivalent version of the maximum likelihood estimate (MLE) of ability. In two simulation studies, the new procedures showed a substantial reduction in average test length (ATL) while maintaining classification accuracy similar to that of the original stopping rule based on the ability confidence interval. In the last part of this dissertation, generalization to multidimensional CAT (MCAT) is examined. Research has shown that MCAT improves the precision of both subscores and overall scores compared to its unidimensional counterpart. Several studies have investigated how well MCAT recovers examinees' multiple abilities depending on the item selection method; none of these studies, however, considered an item pool containing a mixture of multiple-choice (MC) and constructed-response (CR) items. With many assessments now containing such a mixture of item types measuring more than one trait, there is an obvious need to understand how different item selection methods choose different types of items depending on their dimensional loadings (simple-structured versus complex-structured) and the location of maximum information. In a simulation study, the performance of five MCAT item selection methods was compared using an item pool consisting of a mixture of MC and CR items, as in mixed-format assessments. Ability recovery as well as the item preferences of each method (simple- versus complex-structured items and location of maximum information) were examined.
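A worked sketch of the first study's idea may help: SPRT classification at a single cut score, with each item chosen to maximize Fisher information per expected second rather than information alone, and termination when the SPRT decides, the time limit would be exceeded, or no remaining item is expected to fit in the time left. The 2PL pool, the lognormal response-time model with known time intensities and examinee speed, and all bounds are illustrative assumptions, not the dissertation's operational settings.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 300
a = rng.lognormal(0.0, 0.3, N); b = rng.normal(0.0, 1.0, N)       # illustrative 2PL pool
beta = rng.normal(np.log(45.0), 0.3, N)      # item time intensities (log-seconds), assumed known
CUT, DELTA, ALPHA, BETA_ERR = 0.0, 0.3, 0.05, 0.05
TIME_LIMIT, MAX_ITEMS = 1800.0, 60           # 30-minute limit, illustrative
A_BOUND = np.log((1 - BETA_ERR) / ALPHA)     # standard Wald SPRT decision bounds
B_BOUND = np.log(BETA_ERR / (1 - ALPHA))

def p2pl(theta, j):
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

def sprt_log_lr(items, responses):
    """Log likelihood ratio of theta = CUT + DELTA versus theta = CUT - DELTA."""
    llr = 0.0
    for j, u in zip(items, responses):
        p1, p0 = p2pl(CUT + DELTA, j), p2pl(CUT - DELTA, j)
        llr += u * np.log(p1 / p0) + (1 - u) * np.log((1 - p1) / (1 - p0))
    return llr

def run(true_theta, speed=0.0):              # examinee speed assumed known for simplicity
    items, responses, elapsed = [], [], 0.0
    while len(items) < MAX_ITEMS:
        remaining = [j for j in range(N) if j not in items]
        exp_time = np.exp(beta[remaining] - speed)       # approximate expected response time
        feasible = [j for j, t in zip(remaining, exp_time) if elapsed + t <= TIME_LIMIT]
        if not feasible:                      # no remaining item is expected to fit in time
            break
        p = p2pl(CUT, np.array(feasible))
        per_second = a[feasible] ** 2 * p * (1 - p) / np.exp(beta[feasible] - speed)
        j = feasible[int(np.argmax(per_second))]         # max Fisher information per time unit
        items.append(j)
        responses.append(int(rng.random() < p2pl(true_theta, j)))
        elapsed += float(rng.lognormal(beta[j] - speed, 0.25))   # simulated response time
        llr = sprt_log_lr(items, responses)
        if llr >= A_BOUND:
            return "master", len(items), round(elapsed, 1)
        if llr <= B_BOUND:
            return "non-master", len(items), round(elapsed, 1)
    return "undecided", len(items), round(elapsed, 1)

print(run(true_theta=0.4))   # (classification, items used, seconds elapsed)
```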

Book A Comparison of Item-Selection Methods for Adaptive Tests with Content Constraints

Download or read book A Comparison of Item-Selection Methods for Adaptive Tests with Content Constraints written by Wim J. van der Linden. This book was released in 2005 with a total of 32 pages. Available in PDF, EPUB and Kindle. Book excerpt: "In adaptive testing, items are selected for an individual test taker with the goal of administering a test that is, as closely as possible, tailored to the ability level of that test taker. The selection is sequential in that one item is selected at a time. At the same time, adaptive tests typically have to meet a large number of content constraints, and this requirement is solved more naturally by simultaneous item selection. In this project, the three main item-selection methods in adaptive testing for solving this dilemma were investigated: (1) the spiraling method (SM), which moves across content categories of items in the item pool in a manner that is proportional to the numbers of items needed from them during item selection, (2) the weighted-deviations method (WDM), which selects the items using a projection of a weighted sum of the attributes of the entire test, and (3) the shadow test approach (STA), which selects the items based on a projection of the actual items in the entire test. An empirical comparison among the methods was conducted for an adaptive version of the Law School Admission Test (LSAT)."--Publisher website.
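As a concrete illustration of the sequential-versus-simultaneous tension described above, the sketch below implements only the simplest of the three ideas: a spiraling-style rule that first picks the content category whose administered share lags its target the most and then takes the most informative item inside it. The three-category targets and the 2PL pool are illustrative assumptions; the weighted-deviations and shadow-test methods from the report are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400
a = rng.lognormal(0.0, 0.3, N); b = rng.normal(0.0, 1.0, N)   # illustrative 2PL pool
content = rng.integers(0, 3, N)                               # three content categories
TARGET = np.array([0.5, 0.3, 0.2])                            # target proportions per category

def info(theta, idx):
    """2PL Fisher information at theta for the items in idx."""
    p = 1.0 / (1.0 + np.exp(-a[idx] * (theta - b[idx])))
    return a[idx] ** 2 * p * (1 - p)

def next_item(theta_hat, administered):
    counts = np.array([sum(content[j] == c for j in administered) for c in range(3)])
    n = max(len(administered), 1)
    deficit = TARGET - counts / n              # positive = category under-represented so far
    for c in np.argsort(-deficit):             # try the most-lagging category first
        cand = [j for j in range(N) if content[j] == c and j not in administered]
        if cand:
            return cand[int(np.argmax(info(theta_hat, np.array(cand))))]
    raise RuntimeError("item pool exhausted")

# Usage: a 20-item walk with a frozen interim estimate, just to show the balancing behaviour.
chosen = []
for _ in range(20):
    chosen.append(next_item(0.0, chosen))
print([sum(content[j] == c for j in chosen) for c in range(3)])   # roughly 10 / 6 / 4 items
```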

Book Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing

Download or read book Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing written by Wim J. van der Linden. This book was released in 2000 with a total of 40 pages. Available in PDF, EPUB and Kindle.

Book Annual Meeting Program

Download or read book Annual Meeting Program written by the American Educational Research Association. This book was released in 2000 with a total of 324 pages. Available in PDF, EPUB and Kindle.

Book Comparing Item Selection Methods in Computerized Adaptive Testing Using the Rating Scale Model

Download or read book Comparing Item Selection Methods in Computerized Adaptive Testing Using the Rating Scale Model written by Meredith Sibley Butterfield. This book was released in 2016 with a total of 346 pages. Available in PDF, EPUB and Kindle. Book excerpt: Computerized adaptive testing (CAT), a form of computer-based testing that selects and administers items matched to the examinee’s trait level, can be shorter than traditional fixed-length paper-and-pencil testing while maintaining comparable or greater measurement precision. Administration of computer-based patient-reported outcome (PRO) measures has increased recently in the medical field. Because PRO measures often have small item pools, administer small numbers of items, and serve populations in poor health, the benefits of CAT are especially valuable in this setting. In CAT, Maximum Fisher Information (MFI) is the most commonly used item selection procedure because it is easy to use and computationally simple. Its main drawback, however, is the attenuation paradox: if the examinee’s estimated trait level is not the true trait level, the items selected will not maximize information at the true trait level, and measurement is less precise. To address this issue, alternative item selection methods have been proposed, but in prior studies these alternatives have not performed better than MFI. Recently, the Gradual Maximum Information Ratio (GMIR) item selection method was proposed, and previous findings suggest GMIR could be beneficial for a short CAT. This simulation study compared the GMIR and MFI item selection methods under conditions specific to the constraints of PRO measures. GMIR and MFI were compared under Andrich's Rating Scale Model (ARSM) across two polytomous item pool sizes (41 and 82), two population latent trait distributions (normal and negatively skewed), and three combined maximum-number-of-items and minimum-standard-error stopping rules (5/0.54, 7/0.46, 9/0.40). The conditions were fully crossed. Performance was evaluated in terms of descriptive statistics of the final trait estimates, measurement precision, conditional measurement precision, and administration efficiency. Results showed that GMIR had better measurement precision when the test length was 5 items, with higher mean correlations between known and estimated trait levels, smaller mean bias, and smaller mean RMSE. No effect of item pool size or population latent trait distribution was found. Across item selection methods, measurement precision increased as the test length increased, but with diminishing returns from 7 to 9 items.
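Because the rating scale model is a Rasch-family polytomous model, item information at a given theta is simply the variance of the item score at that theta, which makes the MFI baseline from the study short to write down. Below is a minimal sketch of MFI selection under an Andrich-style rating scale model with the combined maximum-length / minimum-SE stopping rule described above; the 41-item pool, the shared thresholds, and the prior are illustrative assumptions, and the GMIR criterion evaluated in the study is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
N_ITEMS, MAX_LEN, SE_STOP = 41, 9, 0.40
delta = rng.normal(0.0, 1.0, N_ITEMS)           # item locations (illustrative)
tau = np.array([-1.0, 0.0, 1.0])                # shared category thresholds (4 categories)
GRID = np.linspace(-4, 4, 81)

def rsm_probs(theta, j):
    """Category probabilities P(X = 0..m | theta) for item j under the rating scale model."""
    theta = np.atleast_1d(theta).astype(float)
    scores = np.arange(len(tau) + 1)
    cum_tau = np.concatenate(([0.0], np.cumsum(tau)))
    num = np.exp(np.outer(theta - delta[j], scores) - cum_tau)
    return num / num.sum(axis=1, keepdims=True)

def rsm_info(theta, j):
    """Item information = score variance at theta (Rasch-family model, discrimination 1)."""
    p = rsm_probs(theta, j)
    s = np.arange(p.shape[1])
    return p @ (s ** 2) - (p @ s) ** 2

def eap(items, resp):
    log_post = -0.5 * GRID ** 2                  # standard normal prior
    for j, x in zip(items, resp):
        log_post += np.log(rsm_probs(GRID, j)[:, x])
    w = np.exp(log_post - log_post.max()); w /= w.sum()
    mean = float(w @ GRID)
    return mean, float(np.sqrt(w @ (GRID - mean) ** 2))

def run(true_theta):
    items, resp, theta_hat, se = [], [], 0.0, np.inf
    while len(items) < MAX_LEN and se > SE_STOP:  # combined max-length / min-SE stopping rule
        cand = [j for j in range(N_ITEMS) if j not in items]
        j = cand[int(np.argmax([rsm_info(theta_hat, k)[0] for k in cand]))]  # MFI at interim theta
        x = int(rng.choice(len(tau) + 1, p=rsm_probs(true_theta, j)[0]))     # simulated response
        items.append(j); resp.append(x)
        theta_hat, se = eap(items, resp)
    return theta_hat, se, len(items)

print(run(true_theta=-0.5))   # (trait estimate, posterior SD, items administered)
```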