Evaluation of editors' abilities to predict the citation potential of research manuscripts submitted to The BMJ: a cohort study

BMJ. 2022 Dec 14:379:e073880. doi: 10.1136/bmj-2022-073880.

Abstract

Objective: To evaluate the ability of The BMJ editors to predict the number of times submitted research manuscripts will be cited.

Design: Cohort study.

Setting: Manuscripts submitted to The BMJ, reviewed, and subsequently scheduled for discussion at a prepublication meeting between 27 August 2015 and 29 December 2016.

Participants: 10 BMJ research team editors.

Main outcome measures: Reviewed manuscripts were rated independently by attending editors for citation potential in the year of first publication plus the next year: no citations, below average (<10 citations), average (10-17 citations), or high (>17 citations). Predicted citations were subsequently compared with actual citations extracted from Web of Science (WOS).
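
For illustration, the categorisation step could be expressed as a short function. The sketch below (in Python) is hypothetical and not taken from the study; it assumes a two-year citation count as input and uses the category boundaries given above.

    def citation_category(citations: int) -> str:
        # Map a citation count (publication year plus the next year)
        # to the study's rating categories.
        if citations == 0:
            return "no citations"
        if citations < 10:
            return "below average"
        if citations <= 17:
            return "average"
        return "high"  # >17 citations

    # Example: a manuscript cited 12 times falls in the "average" category
    print(citation_category(12))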

Results: Of 534 manuscripts reviewed, 505 were published as full length articles (219 in The BMJ) by the end of 2019 and indexed in WOS, 22 were unpublished, and one abstract was withdrawn. Among the 505 manuscripts, the median (IQR [range]) number of citations in the year of publication plus the following year was 9 (4-17 [0-150]); 277 (55%) manuscripts were cited <10 times, 105 (21%) were cited 10-17 times, and 123 (24%) were cited >17 times. Manuscripts accepted by The BMJ received more citations (median 12 (IQR 7-24)) than those rejected (median 7 (3-12)). For all 10 editors, predicted ratings tended to increase in line with actual citations, but with considerable variation within categories; nine of the 10 editors predicted the correct citation category for fewer than 50% of manuscripts (range across all 10 editors 31%-52%), and κ for agreement between predicted and actual categories ranged from 0.01 to 0.19. Editors more often rated papers that achieved high actual citation counts as having low citation potential than the reverse. Collectively, the mean percentage of editors predicting the correct citation category was 43%, and for 160 (32%) manuscripts at least 50% of editors predicted the correct category.
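
The κ values above measure chance-corrected agreement between predicted and actual categories. A minimal sketch of how such a statistic could be computed for one editor is shown below; the data are invented, and scikit-learn's cohen_kappa_score is used as one common implementation, not necessarily the method used in the study.

    from sklearn.metrics import cohen_kappa_score

    # Invented example: one editor's predicted categories vs the actual categories
    predicted = ["below average", "average", "high", "below average", "average"]
    actual = ["average", "below average", "high", "high", "below average"]

    kappa = cohen_kappa_score(predicted, actual)
    print(f"Cohen's kappa: {kappa:.2f}")  # 0 indicates chance-level agreement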

Conclusions: Editors were not good at estimating the citation potential of manuscripts, either individually or as a group; no wisdom of the crowd effect was evident among BMJ editors.

MeSH terms

  • Cohort Studies*
  • Humans