Research Strategy Office

The University of Cambridge supports a diverse research culture spanning a wide range of fields and is committed to developing research assessment policies that are transparent and equitable. While disciplinary differences mean that a one-size-fits-all policy is neither achievable nor appropriate, a core set of principles nevertheless underpins research assessment when making decisions on hiring and career progression. As a signatory of the San Francisco Declaration on Research Assessment (DORA), the University is committed to improving the way researchers and research outputs are evaluated. Assessing research quality or excellence is primarily a qualitative process carried out by expert evaluators, and in some disciplines it is supported by quantitative measures or metrics. However, over-reliance on metrics, or their uncritical use, can result in biased or unfair assessment, and it is important that research evaluation policies recognise the limitations of any metrics employed.

The following principles are drawn from several sources, including The Metric Tide, the Leiden Manifesto and the perspectives of key research funders (see, for example, the guidance from UKRI and the Wellcome Trust). They serve as a guide for Departments and Faculties when considering the responsible use of metrics in any research assessment process.

  • If research metrics are employed, they should be used only to inform and support, not supplant, qualitative expert assessment. The limitations of any metric must be recognised and acknowledged.
  • If quantitative metrics are employed as part of a research assessment process, this must be explicitly stated in all documentation for assessors and those being assessed. The source of each metric, the underlying method applied and how it will be interpreted must be clearly stated.
  • Any metrics used must be appropriate for the research discipline or field and must be applied at an appropriate level of granularity. For example, journal-level metrics such as the Journal Impact Factor should never be used as a surrogate measure of the quality of individual research outputs, or to assess an individual researcher.
  • If metrics are used to compare individuals, metrics that do not account for differences in career stage or other individual circumstances should not be used. Any metric that introduces bias when comparing individuals should be avoided.
  • Any metrics used and the datasets underpinning them must be regularly reviewed to ensure they remain fit for purpose.
