Research Strategy Office



This section lists projects to which the Research on Research team has contributed. The views expressed in the articles listed on this page are those of the articles' authors and do not represent the views or position of the Research Strategy Office or the University of Cambridge.

From COVID-19 research to vaccine application: why might it take 17 months not 17 years and what are the wider lessons?

It has been estimated that it takes about 17 years for medical research to move from the laboratory to become a treatment available to patients. This commentary explores how it might be possible for a SARS-CoV-2 vaccine to be produced in a timescale closer to 17 months. Is this an apples-to-apples comparison, and how is research to identify a SARS-CoV-2 vaccine being accelerated?

Article link (open access): [ ]

The commentary is also summarised in a recent blog post on the Bennett Institute for Public Policy website [ ]

Heuristics, not plumage: a response to Osterloh and Frey’s Discussion Paper on ‘Borrowed Plumes’

The Journal Impact Factor (JIF) is widely accepted as a bad measure of research quality, but its use persists. Various baroque explanations have been suggested for JIF's persistence. This note suggests a simpler one: JIF persists because it is a convenient heuristic, and the key to reducing its use is to ensure alternative metrics are available that offer a better effort/accuracy trade-off. The note also calls for better information on the extent of JIF usage so that the urgency of the cause can be better judged.

Article link: [ ]

Repository link: [ ]

What do we know about grant peer review in the health sciences?

Peer review decisions award an estimated >95% of academic medical research funding. This paper summarises the available evidence on the effectiveness and burden of peer review for grant funding.

The paper identifies a remarkable paucity of evidence about the efficiency of peer review for funding, but draws some conclusions about its effectiveness and burden. The strongest evidence on effectiveness indicates a bias against innovative research. There is also fairly clear evidence that peer review is, at best, a weak predictor of future research performance, and that ratings vary considerably between reviewers. There is some evidence of age bias and cronyism. Good evidence shows that the burden of peer review is high and that around 75% of it falls on applicants. By contrast, many of the efforts to reduce burden focus on funders and reviewers/panel members.

Given the central role of peer review in allocating funding within the biomedical research system, the paper suggests the importance of acknowledging, assessing and analysing the uncertainty around peer review. This could include transparent experimentation with, and evaluation of, different ways to fund research; but it will require more openness across the wider scientific community to support such investigations, acknowledging the current lack of evidence and the impossibility of achieving perfection.

Article link (open access): []