Can the quality of published academic journal articles be assessed with machine learning?
Abstract
Formal assessments of the quality of the research produced by departments and universities are now conducted by many countries to monitor achievements and allocate performance-related funding. These evaluations are hugely time-consuming if conducted by post-publication peer review and simplistic if based on citations or journal impact factors. This article investigates whether machine learning could help reduce the burden of peer review by using citations and metadata to learn how to score articles from a sample assessed by peer review. An experiment is used to underpin the discussion, attempting to predict journal citation thirds, as a proxy for article quality scores, for all Scopus narrow fields from 2014 to 2020. The results show that these proxy quality thirds can be predicted with above-baseline accuracy in all 326 narrow fields, with Gradient Boosting Classifier, Random Forest Classifier, or Multinomial Naïve Bayes being the most accurate in nearly all cases. Nevertheless, the results partly leverage journal writing styles and topics, which are unwanted for some practical applications and cause substantial shifts in average scores between countries and between institutions within a country. There may be scope for predicting article scores when the predictions have the highest probability.
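The abstract's experimental setup, comparing the three named classifiers against a majority-class baseline on a three-class (citation thirds) prediction task, can be sketched as follows. This is an illustrative reconstruction using scikit-learn and synthetic features, not the paper's actual data pipeline or feature set.

```python
# Hedged sketch of the comparison described in the abstract: predict a
# three-class "citation third" label and compare each classifier's
# accuracy to the majority-class baseline. Features here are synthetic
# stand-ins for the citation/metadata inputs used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic 3-class dataset standing in for one Scopus narrow field.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X = np.abs(X)  # Multinomial Naive Bayes requires non-negative features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: always predict the most common training class.
baseline = np.bincount(y_tr).max() / len(y_tr)

results = {}
for clf in (GradientBoostingClassifier(random_state=0),
            RandomForestClassifier(random_state=0),
            MultinomialNB()):
    acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    results[clf.__class__.__name__] = acc
    print(f"{clf.__class__.__name__}: {acc:.2f} (baseline {baseline:.2f})")
```

The abstract's closing suggestion (scoring only articles with the most confident predictions) would correspond to filtering on `predict_proba` and accepting a prediction only when its top class probability exceeds a chosen threshold.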
Citation
Thelwall, M. (2022) Can the quality of published academic journal articles be assessed with machine learning? Quantitative Science Studies, 3 (1), pp. 208–226.
Journal
Quantitative Science Studies
Description
© 2022 The Author. Published by MIT Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: https://doi.org/10.1162/qss_a_00185
Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by/4.0/