Leading on research assessment

The University of Liverpool, my University, is currently developing a Code of Practice for the Annual Assessment of Individual Research and Impact Performance. There is an internal consultation, and this post is my open contribution to that process, following a short presentation (this morning) of the draft code to staff of my Institute by the Faculty APVC for Research and Impact, Prof Malcolm Jackson.

A draft document is available on the intranet for University of Liverpool staff. Whilst I cannot share the draft code (an internal University document currently under consultation), I can reveal that the motivation, spirit, and managerial consequences of the code are entirely dictated by the concepts of “excellence” and “increasing the number of 4* papers” ahead of the next Research Excellence Framework (REF). For non-UK readers, the REF is a national evaluation of the research carried out in universities; it takes place every few years and has important implications in terms of core funding (more via Wikipedia here).

The point I made in the meeting this morning, and which I now reiterate, is that this attitude to the management of research, focused on excellence and 4* papers, is neither visionary nor world-leading. It is what everyone else is doing, with disastrous consequences. It is not setting the scene and it is not ambitious.

We, as a scientific community, are facing serious challenges, since a very large portion of the research we produce is not reproducible. This is now so regularly in the news that a specific link is hardly necessary. In an excellent (sic) and timely paper published yesterday, entitled “Excellence R Us: University Research and the Fetishisation of Excellence”, Samuel Moore and colleagues write (abstract):

 We trace the roots of issues in reproducibility, fraud, as well as diversity to the stories we tell ourselves as researchers and offer an alternative rhetoric based on soundness. “Excellence” is not excellent, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship.

An institution the size of the University of Liverpool could lead by advocating and adopting a scientific culture [1] that promotes soundness. John Ioannidis explains both the problems with our research practices and potential solutions in his Berlin Institute of Health Annual Special Lecture, given a few weeks ago. In particular, from 1:11:37, he discusses the “future re-engineering of our reward system”:


As a minimum requirement, the University should adopt the San Francisco Declaration on Research Assessment (DORA), and the principles of that declaration should be affirmed in the draft code.

When drafting a new code of evaluation, it is essential to consider the impact of assessment on research practices. We need to change the incentives to improve research, and this consultation could be an opportunity to do that: let the University of Liverpool be among those that shape the future of how we do research.

UPDATE: Stephen Curry has kindly pointed to this very relevant document from Imperial College: Application and Consistency of Approach in the Use of Performance Metrics, a report by the Associate Provost [Institutional Affairs], December 2015.

[1] See also The Culture of Scientific Research, Nuffield Council on Bioethics, 2014.


One comment

  1. A comment from afar, as I am away at a Gordon Research Conference (Fibroblast Growth Factors), which is exceptionally robust and collegial: a real hotbed of discussion and ideas.

    There is nothing wrong with putting a rough scale on papers that you have read, since critically assessing research is part of our daily work. This is a dynamic process: a paper we put into the bottom drawer today may be at the heart of several lab meetings next year, and vice versa. Of course, the worst scenario, which is all too common, is the ‘hot’ paper that on scrutiny turns out to have serious problems; one only has to go to PubPeer.com to see the scale of this problem.

    I was deeply involved in the last REF, where the University did extremely well in the clinical/life sciences areas compared to ALL previous RAEs, and this was down to one simple fact: we read papers, lots and lots of them (for Clinical Medicine, this was >8×240 papers).

    I know one fact with absolute certainty: statements such as “excellence and 4* papers” are often (though not if the papers have been read) used as shorthand for not bothering to read papers. In that case, judgement generally relies on a fallacious measure of scientific importance, such as the journal impact factor (see the illustrative sketch below). Using a mean (the impact factor) to describe data (citations) that follow an exponential decay can only be explained one way: the decision to use the mean was taken by individuals who failed (if British) to complete KS2, where the concepts of mean, normal distribution, median and mode are first introduced in the National Curriculum. I am, therefore, quite confident that this is impossible in our University, since senior management are educated many years beyond KS2. After all, analysis of quantitative data using appropriate tools is integral to our research-led teaching.

    So while I eagerly await the fulfilment of the University’s ambition to sign up to DORA, I am less concerned about what you report from this meeting. It is clearly not possible that our University would engage in the evaluation of published output through any means other than careful reading.
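    The statistical point above can be made concrete with a minimal sketch in Python; the distribution and all the numbers below are invented for illustration and are not real journal data.

        # Illustrative sketch (hypothetical numbers): citation counts within a
        # journal are typically highly skewed, so a journal-level average (the
        # basis of the impact factor) sits well above what a typical paper gets.
        import random
        import statistics

        random.seed(42)

        # Assume a roughly exponential distribution of citations per paper,
        # with a mean of 5 citations; parameters chosen purely for illustration.
        citations = [int(random.expovariate(1 / 5.0)) for _ in range(1000)]

        mean_citations = statistics.mean(citations)      # impact-factor-style average
        median_citations = statistics.median(citations)  # a "typical" paper

        print(f"mean citations:   {mean_citations:.1f}")
        print(f"median citations: {median_citations:.1f}")

    With data like these, a handful of highly cited papers pulls the mean well above the median, which is exactly why a journal-level average says little about the merit of any individual paper.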

