Using natural language processing to predict item response times and improve test construction
Issue Date
2020-02-24
Metadata
Abstract
In this article, we show how item text can be represented by (a) 113 features quantifying the text's linguistic characteristics, (b) 16 measures of the extent to which an information-retrieval-based automatic question-answering system finds an item challenging, and (c) dense word representations (word embeddings). Using a random forests algorithm, these data are then used to train a prediction model for item response times, and the predicted response times are then used to assemble test forms. Using empirical data from the United States Medical Licensing Examination, we show that timing demands are more consistent across these specially assembled forms than across forms comprising randomly selected items. Because an exam's timing conditions affect examinee performance, this result has implications for exam fairness whenever examinees are compared with each other or against a common standard.
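The pipeline the abstract describes — predict each item's response time from text-derived features, then assemble forms so that total predicted timing demand is balanced — can be sketched as follows. This is an illustrative sketch only: the features, data, and the round-robin assembly heuristic are stand-ins and are not taken from the article.

```python
# Hypothetical sketch of the two-stage pipeline: (1) a random forest
# predicts item response times from text-derived features; (2) items are
# dealt round-robin by predicted time so forms have balanced timing demand.
# All data here are synthetic; feature counts are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_items, n_features = 500, 20                # stand-in for the 113+ real features
X = rng.normal(size=(n_items, n_features))   # linguistic / QA / embedding features
y = 60 + 10 * X[:, 0] + rng.normal(scale=5, size=n_items)  # response times (s)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
pred = model.predict(X)

# Simple assembly heuristic: sort items by predicted time and deal them
# round-robin into forms, balancing total predicted time per form.
n_forms = 5
order = np.argsort(pred)
forms = [order[i::n_forms] for i in range(n_forms)]
totals = [pred[f].sum() for f in forms]
print(max(totals) - min(totals))  # spread in predicted total time across forms
```

In practice, the spread in predicted total time across these forms is far smaller than for randomly partitioned forms, which is the property the article evaluates against empirical USMLE data.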
Baldwin, P., Yaneva, V., Mee, J., Clauser, B. E., and Ha, L. A. (2020) Using natural language processing to predict item response times and improve test construction, Journal of Educational Measurement, https://doi.org/10.1111/jedm.12264
Publisher
Wiley
Journal
Journal of Educational Measurement
Additional Links
https://onlinelibrary.wiley.com/doi/full/10.1111/jedm.12264
Type
Journal article
Language
en
ISSN
0022-0655
EISSN
1745-3984
DOI
10.1111/jedm.12264
Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc-nd/4.0/