American Journal of Educational Research
ISSN (Print): 2327-6126 | ISSN (Online): 2327-6150 | Website: http://www.sciepub.com/journal/education | Editor-in-chief: Ratko Pavlović
American Journal of Educational Research. 2019, 7(11), 865-871
DOI: 10.12691/education-7-11-17
Open Access Article

A Comparison between BMIRT and IRTPRO: A Simulation Study of a Multidimensional Item Response Model

Tingxuan Li1

1Graduate School of Education, Shanghai Jiao Tong University, Shanghai, China

Pub. Date: November 25, 2019

Cite this paper:
Tingxuan Li. A Comparison between BMIRT and IRTPRO: A Simulation Study of a Multidimensional Item Response Model. American Journal of Educational Research. 2019; 7(11):865-871. doi: 10.12691/education-7-11-17

Abstract

The objective of this study is to provide comparative information on two software programs: IRTPRO version 2.1 for Windows and BMIRT. In educational measurement, software programs are being developed and updated rapidly. Using a small-scale simulation study of a two-parameter logistic model in multidimensional item response theory, this study examines the bias and root mean square error (RMSE) values produced by both programs. Beyond item parameter recovery, the two programs were also compared on run time and user interface. The results showed that BMIRT recovered the item slope parameters better, but it ran much more slowly than IRTPRO. In addition, IRTPRO's interface is considerably more user friendly than BMIRT's. Screenshots of the item calibrations in both programs are provided in Appendix A.
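The exact simulation design is not reported on this page, so the following is only a minimal Python sketch of the kind of study the abstract describes: responses are generated under an assumed compensatory two-parameter logistic MIRT model, and bias and RMSE are computed for item parameter recovery. The sample size, test length, dimensionality, parameter ranges, and the placeholder estimates a_hat (which in the actual study would come from IRTPRO or BMIRT output) are all illustrative assumptions, not the author's settings.

import numpy as np

# Minimal sketch (not the author's code): simulate responses under an assumed
# compensatory two-parameter logistic MIRT model and compute the recovery
# criteria named in the abstract (bias and root mean square error).
rng = np.random.default_rng(2019)
n_persons, n_items, n_dims = 1000, 20, 2  # illustrative sizes only

# Generating ("true") item parameters: slopes a (n_items x n_dims) and intercepts d.
a_true = rng.uniform(0.8, 2.0, size=(n_items, n_dims))
d_true = rng.normal(0.0, 1.0, size=n_items)
theta = rng.multivariate_normal(np.zeros(n_dims), np.eye(n_dims), size=n_persons)

# Compensatory M2PL: P(X = 1 | theta) = 1 / (1 + exp(-(a' theta + d))).
prob = 1.0 / (1.0 + np.exp(-(theta @ a_true.T + d_true)))
responses = (rng.uniform(size=prob.shape) < prob).astype(int)  # data fed to each program

def bias_and_rmse(estimates, truth):
    # Mean signed error and root mean square error across items (or replications).
    err = np.asarray(estimates) - np.asarray(truth)
    return err.mean(), np.sqrt((err ** 2).mean())

# Example: once slope estimates are read back from a program's output file,
# compare them with the generating values (a_hat below is only a placeholder).
a_hat = a_true + rng.normal(0.0, 0.1, size=a_true.shape)
print(bias_and_rmse(a_hat, a_true))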

Keywords:
multidimensional item response theory; educational measurement; simulation

Creative Commons: This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
