Does the perceived quality of interdisciplinary research vary between fields?
Authors
Thelwall, Mike
Kousha, Kayvan
Stuart, Emma
Makita, Meiko
Abdoli, Mahshid
Wilson, Paul
Levitt, Jonathan
Issue Date
2023-04-27
Abstract
Purpose: To assess whether interdisciplinary research evaluation scores vary between fields.

Design/methodology/approach: We investigate whether published refereed journal articles were scored differently by expert assessors (two per output, agreeing a score, with norm referencing) from multiple subject-based Units of Assessment (UoAs) in the REF2021 UK national research assessment exercise. The primary raw data comprised 8,015 journal articles published 2014-2020 and evaluated by multiple UoAs, and the agreement rates were compared to the estimated agreement rates for articles multiply evaluated within a single UoA.

Findings: We estimated a 53% agreement rate on a four-point quality scale between UoAs for the same article and a within-UoA agreement rate of 70%. This suggests that quality scores vary more between fields than within fields for interdisciplinary research. There were also some hierarchies between fields, in the sense that some UoAs tended to give higher scores than others for the same article.

Research limitations/implications: The results apply to one country and one type of research evaluation. Both agreement rate estimates rest on untested assumptions about the extent of cross-checking of scores for the same articles in the REF, so inferences about the agreement rates are tenuous.

Practical implications: The results underline the importance of choosing relevant fields for any type of research evaluation.

Originality: This is the first evaluation of the extent to which a careful peer review exercise generates different scores for the same articles between disciplines.
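As an editorial illustration (not part of the article or its data), the short sketch below shows one way a between-group agreement rate on a four-point quality scale could be computed from paired scores; the function name and the example scores are hypothetical.

    # Illustration only: agreement rate between two sets of quality scores
    # (four-point scale) given to the same articles by two assessing groups.
    # The scores below are invented for demonstration; they are not REF data.

    def agreement_rate(scores_a, scores_b):
        """Proportion of articles awarded the same score by both groups."""
        assert len(scores_a) == len(scores_b)
        matches = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
        return matches / len(scores_a)

    # Hypothetical scores for five articles evaluated by two different UoAs.
    uoa_x = [4, 3, 3, 2, 4]
    uoa_y = [4, 2, 3, 3, 4]

    print(f"Agreement rate: {agreement_rate(uoa_x, uoa_y):.0%}")  # prints 60%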
Citation
Thelwall, M., Kousha, K., Stuart, E., Makita, M., Abdoli, M., Wilson, P. and Levitt, J. (2023) Does the perceived quality of interdisciplinary research vary between fields? Journal of Documentation, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JD-01-2023-0012
Publisher
Emerald
Journal
Journal of Documentation
Additional Links
https://www.emerald.com/insight/publication/issn/0022-0418
Type
Journal article
Language
en
Description
This is an accepted manuscript of an article published in Journal of Documentation by Emerald on 27/04/2023, available online: https://doi.org/10.1108/JD-01-2023-0012. The accepted version of the publication may differ from the final published version.
ISSN
0022-0418
Sponsors
This study was funded by Research England, Scottish Funding Council, Higher Education Funding Council for Wales, and Department for the Economy, Northern Ireland, as part of the Future Research Assessment Programme (https://www.jisc.ac.uk/future-research-assessment-programme).
DOI
10.1108/JD-01-2023-0012
Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by-nc/4.0/
Related items
Showing items related by title, author, creator and subject.
- Book of Abstracts: 2nd Faculty of Science and Engineering (FSE) Research Conference. Theme: Festival of Research (FoR) and Research during the COVID-19 Pandemic. Suresh, Subashini; Aggoun, Amar; Burnham, Keith (University of Wolverhampton, 2021-03-26)
- Is big team research fair in national research assessments? The case of the UK Research Excellence Framework 2021. Thelwall, Mike; Kousha, Kayvan; Makita, Meiko; Abdoli, Mahshid; Stuart, Emma; Wilson, Paul; Levitt, Jonathan (Sciendo/National Science Library of the Chinese Academy of Sciences, 2023-02-28)
Collaborative research causes problems for research assessments because of the difficulty in fairly crediting its authors. Whilst splitting the rewards for an article amongst its authors has the greatest surface-level fairness, many important evaluations assign full credit to each author, irrespective of team size. The underlying rationales for this are labour reduction and the need to incentivise collaborative work because it is necessary to solve many important societal problems. This article assesses whether full counting changes results compared to fractional counting in the case of the UK’s Research Excellence Framework (REF) 2021. For this assessment, fractional counting reduces the number of journal articles to as little as 10% of the full counting value, depending on the Unit of Assessment (UoA). Despite this large difference, allocating an overall grade point average (GPA) based on full counting or fractional counting gives results with a median Pearson correlation within UoAs of 0.98. The largest changes are for Archaeology (r=0.84) and Physics (r=0.88). There is a weak tendency for higher scoring institutions to lose from fractional counting, with the loss being statistically significant in 5 of the 34 UoAs. Thus, whilst the apparent over-weighting of contributions to collaboratively authored outputs does not seem too problematic from a fairness perspective overall, it may be worth examining in the few UoAs in which it makes the most difference.
- What is the optimal number of researchers for social science research? Levitt, Jonathan M. (Springer, 2014-10-19)
Many studies have found that co-authored research is more highly cited than single author research. This finding is policy relevant as it indicates that encouraging co-authored research will tend to maximise citation impact. Nevertheless, whilst the citation impact of research increases as the number of authors increases in the sciences, the extent to which this occurs in the social sciences is unknown. In response, this study investigates the average citation level of articles with one to four authors published in 1995, 1998, 2001, 2004 and 2007 in 19 social science disciplines. The results suggest that whilst having at least two authors gives a substantial citation impact advantage in all social science disciplines, additional authors are beneficial in some disciplines but not in others.