Responsible Research Metrics

Newcastle University Policy Statement on Responsible Research Metrics

Background

Newcastle University recognises that the responsible use of metrics can support the delivery of our Research Strategy’s ambition to catalyse transformative research within and between disciplines, and build a positive and thriving research culture. As such, we have produced this Policy Statement and associated guidance on appropriate use of quantitative research metrics as part of responsible research assessment.[1] The University became a signatory to DORA in 2017 and reaffirmed a commitment to the principles of DORA in the REF code of practice published in 2019. The use of research metrics falls within the context of the University Code of Good Practice in Research.

This policy statement builds on these commitments and a number of other prominent external initiatives, including the Leiden Manifesto for Research Metrics and the Metric Tide report. The latter urged UK institutions to develop a statement of principles on the use of quantitative indicators in research management and assessment, where metrics should be considered in terms of robustness (using the best available data); humility (recognising that quantitative evaluation can complement, but does not replace, expert assessment); transparency (keeping the collection of data and its analysis open to scrutiny); diversity (reflecting a multitude of research and researcher career paths); and reflexivity (updating our use of metrics to take account of the effects that such measures have had).

Implementation of this statement supports our strategic ambitions, and our institutional commitment to the Concordat for Research Integrity and the Concordat to Support the Career Development of Researchers, and is supported (and in some cases, mandated) by research funders in the UK (e.g., UKRI and the Wellcome Trust). This statement will evolve as we develop the University’s Research Culture Action Plan in consultation with students, colleagues and external partners.

This statement is integral to our commitment to developing all members of the research community across career stages and job families. It will support our aim to cultivate a research culture that recognises diverse contributions to research, and to build a research culture that values inclusive collaboration, integrity, transparency, and global visibility.

Implementation of Responsible Metrics Statement

We recognise that Newcastle University is a diverse university, and metrics will be used in ways that are sensitive to disciplinary and local contexts. This Policy Statement is deliberately flexible to allow for this diversity of contexts, and is not intended to provide a comprehensive set of rules.

We recognise that metrics are often more appropriate for evaluating the collective contribution of teams, and are less useful for evaluating individual contributions, which may be diverse. At the University, we endeavour to use metrics to provide a collective picture of how we are performing against our strategy.

To help put this Statement into practice, we will provide an evolving set of guidance material to include more detailed discussion and examples of how these principles could be applied. Newcastle University is committed to valuing research and researchers based on their merits, not the merits of metrics. In the future, we will provide relevant training for all members of our research community through the Newcastle University Skills Academy.

The implementation of this Statement will include establishing a review and monitoring process, enabling regular institutional review of adherence to these principles, as well as mechanisms for reporting and addressing breaches of the principles.

Guiding Principles for Use of Responsible Metrics at Newcastle University

This guidance is provided to ensure that research assessment activities are carried out in line with the three principles below.

Metrics should:

Support expert judgement

  • Research assessment should always be based on expert judgement with metrics used to support and inform this assessment.
  • Quality, influence, excellence and impact of research are typically abstract and subjective concepts that prohibit direct measurement. There is no simple way to measure research quality, and quantitative approaches can only be interpreted as indirect proxies for quality. Superficial inferences of the quality of research by use of research metrics alone in research evaluations can be misleading.
  • Irresponsible use of metrics alone can pose the risk of incentivising undesirable behaviours, such as focusing on the number rather than the quality of publications, chasing publications in journals with a high Journal Impact Factor (JIF) regardless of whether this is the most appropriate venue for publication, or discouraging the use of open research approaches such as pre-prints or data-sharing.

Be fit for purpose

  • Data should only be obtained from reliable and accurate sources and due consideration should be given to ensure data quality.
  • Metrics that we use should align with what we value as important. For example, categories that we value include: (a) research quality, (b) research volume, income, and activity levels (critical mass), (c) research impact, and (d) research environment.
  • Disciplinary differences should be accounted for. Researchers from different disciplines have different perspectives of what characterises research quality, and different approaches for determining what constitutes a significant research output (for example, the relative importance of book chapters vs journal articles). As some metrics lend themselves to certain discipline areas to the detriment of others, we will work hard with the wider sector to ensure that research in all our Faculties is understood and its value represented, across STEM and SHAPE disciplines.
  • All research outputs must be considered on their own merits, in an appropriate context that reflects the needs and diversity of research fields and outcomes. Metrics should be used in the correct context and users should be mindful of changes in information that might affect the underlying assessment.
  • Quantitative indicators should be selected from those that are widely used and easily understood to ensure that the process is transparent and they are being applied appropriately. Likewise, any quantitative goals or benchmarks must be open to scrutiny.
  • If goals or benchmarks are expressed quantitatively, care should be taken to avoid the metric itself becoming the target of research activity at the expense of research quality.
  • Metrics that are useful for aggregate assessment may not be appropriate at a different scale, such as the assessment of an individual output. For example, the JIF is a useful measure of the whole content of a journal, but cannot be extrapolated to infer the quality of every article within that journal.

Be transparent

  • When metrics are used to form an assessment, the data sources and assumptions should be acknowledged. New and alternative metrics are continuously being developed to inform the reception, usage, and value of all types of research output. Any new or non-standard metric or indicator must be used and interpreted in keeping with the principles listed here for more traditional metrics. Additionally, the sources and methods behind such metrics should be scrutinised for vulnerability to gaming, manipulation, or fabrication.
  • Metrics (in particular bibliometrics) are available from a variety of services, with differing levels of coverage, quality and accuracy, and these aspects should be considered when selecting a source for data or metrics. Where necessary, such as in the evaluation of individual researchers, a source that allows records to be verified and curated should be used to ensure records are comprehensive and accurate.

 


[1] The term “research assessment” refers to the evaluation of research quality and measurements of research inputs, outputs and impacts, and embraces both qualitative and quantitative methodologies, including the application of bibliometric indicators and mapping, and peer review. Research assessment can take place across a wide range of contexts, from responding to national research assessment exercises to evaluating individual candidates during recruitment exercises.

Who to contact

Research Policy, Intelligence and Ethics, Research Strategy and Development

Policy Approved by Senate 30th June 2021