Saturday, 20 February 2016

h-index storm

Roger Watson, Editor-in-Chief

The recent JAN editorial which I led (Watson et al. 2016) has caused a Twitterstorm of protest, with a few supportive entries. I make little distinction; I am grateful for the criticism and the support, which all adds to the debate on this important topic. We didn't expect to win a popularity contest, but I do think it is worth analysing the criticism and commenting. The criticism seems to fall under the following broad headings:
  • There was no ethical permission to conduct the study 
  • The h-index is a very narrow measure of performance and there are better measures of academic performance 
  • We named individuals and, in any case, some professors do not need to demonstrate a publication record 

Ethical permission

Why would we need permission to conduct a bibliometric study which required access to - and reporting of - information that is in the public domain? Databases such as Scopus, Web of Science and Google Scholar exist to provide information about publications, citations and the individuals who contribute to those databases or - in some notable cases - don't. Those who can be found on these databases - and some of those who can't be - are publicly funded individuals whose performance on a key indicator of academic performance cannot possibly be considered private and confidential; sensitive it may be, but that is another issue. If our detractors are concerned about the lack of ethical permission then I'd welcome this being put to the test, and the routes open to them are the chairs of the ethics committees in our universities, the Committee on Publication Ethics or the publishers of JAN.


The h-index

I completely agree, the h-index is a very narrow measure of performance. It is precisely defined, and we once again rehearsed its calculation in the editorial. But such precision should not be confused with lack of utility. It may seem very 'deconstructive' to use such a narrow metric, but the more 'constructive' alternatives - none of which have been explained in any detail in the present debate - are likely to rely largely on some other metric or metrics - with plenty of room for debate - or on an element of subjectivity. This is a classic example of the 'uncertainty principle' whereby the more we know about one thing the less we can know about another; in Heisenberg's case, either the speed or the position of an electron...but not both.

With regard to publication metrics and academic performance, we seem to think we know what people have contributed to their field and are happy to exchange generalities about what our colleagues have done - or not done - and, clearly, reputations and careers are built on this. On the other hand, when we select a specific and precise metric, things often look different and precision seems to upset people; possibly those who don't perform particularly well on that metric. We make no claims about the h-index other than it is what it is: a measure of citations related to number of publications that is remarkably difficult to skew, whether by publishing more, increasing total citations or self-citation. In our view, what's not to like?
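For readers who have not seen the calculation rehearsed, the definition is simple enough to sketch in a few lines of code. This is an illustrative sketch of the standard definition, not the method or data from the editorial; the citation counts in the example are invented.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers, each with at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))              # → 4
# Adding twenty once-cited papers does not move the index,
# which is why it is hard to skew by publishing more:
print(h_index([10, 8, 5, 4, 3] + [1] * 20))   # → 4
```

The second call illustrates the point about skew: extra low-cited papers, or a pile of self-citations spread thinly, leave the index unchanged unless they push a paper above the h threshold.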


We named people

Yes, we did, and we are not the first to do so; read some of the previous editorials that we cite - they did too. There seems to be no issue about naming people if we are pointing out good performance. Why complain about naming people whose h-index performance is low, or non-existent? The profession and the public who fund our work need to know. We make no judgement about those named - they may be exceptional managers, administrators or leaders in their fields - other than with regard to their publications. But we maintain that publication is a fundamental attribute of a professor in any field. The point that those whose main responsibility is teaching should not be required to have publications surely cannot apply in a university setting. Recognition through award of chairs for excellence in teaching is laudable, but the criteria for these chairs - and I have been involved in externally evaluating many applications for promotion to chair by the teaching route - invariably require scholarship, and how else is that to be demonstrated other than by publication? And, with specific reference to our editorial, we used a database whose metrics record books and chapters and the citations to them. A professor by the teaching route, or anyone leading other academics at a senior level such as dean or pro vice-chancellor, should surely have written a book or at least a chapter in one. We highlighted some senior individuals in nursing academia who are invisible in the publication domain; their visible and enduring contribution to scholarship is precisely zero. Do we have to comment?

As indicated, I am delighted that we have elicited such widespread response to our editorial. I can well imagine that many are happy to 'lob grenades' from the Twittersphere, and I am generally as guilty as anyone of that. Others will take the high road of 'wouldn't grace it with a response'. I also hear that the study is inaccurate; if so, let us know where and we will correct the supplementary material. I would welcome further entries to JAN interactive on the issue, and if a group of detractors wish to mount a constructive defence of the alternative position then the editorial pages of JAN are open to them.

You can listen to this as a podcast.

Reference

Watson R., McDonagh R., Thompson D.R. (2016) h-indices: an update on the performance of professors of nursing in the UK. Journal of Advanced Nursing. doi: 10.1111/jan.12924
