
Journal quality factors from ChatGPT: more meaningful than impact factors?

Issue Date
2025-01-18
Abstract
Purpose: Journal Impact Factors and other citation-based indicators are widely used and abused to help select journals to publish in or to estimate the value of a published article. Nevertheless, citation rates primarily reflect scholarly impact rather than other quality dimensions, including societal impact, originality, and rigour. In response to this deficit, Journal Quality Factors (JQFs) are defined and evaluated. These are average quality score estimates given to a journal's articles by ChatGPT.
Design/methodology/approach: JQFs were compared with Polish, Norwegian and Finnish journal ranks and with journal citation rates for 1,300 journals with 130,000 articles from 2021 in large monodisciplinary journals in the 25 out of 27 Scopus broad fields of research for which it was possible. Outliers were also examined.
Findings: JQFs correlated positively and mostly strongly (median correlation: 0.641) with journal ranks in 24 out of the 25 broad fields examined, indicating a nearly science-wide ability for ChatGPT to estimate journal quality. Journal citation rates had similarly high correlations with national journal ranks, however, so JQFs are not a universally better indicator. An examination of journals with JQFs not matching their journal ranks suggested that abstract styles may affect the result, such as whether the societal contexts of research are mentioned.
Research limitations: Different journal rankings may have given different findings because there is no agreed meaning for journal quality.
Practical implications: The results suggest that JQFs are plausible as journal quality indicators in all fields and may be useful for the (few) research and evaluation contexts where journal quality is an acceptable proxy for article quality, and especially for fields like mathematics for which citations are not strong indicators of quality.
Originality/value: This is the first attempt to estimate academic journal value with a Large Language Model.
Citation
Thelwall, M. and Kousha, K. (2025) Journal quality factors from ChatGPT: more meaningful than impact factors? Journal of Data and Information Science, 10 (2), pp. 106-123. https://doi.org/10.2478/jdis-2025-0016
Type
Journal article
Language
en
Description
© 2025 The Authors. Published by Sciendo. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: https://doi.org/10.2478/jdis-2025-0016
ISSN
2096-157X
EISSN
2543-683X
Sponsors
This study was funded by the UKRI Economic and Social Research Council.