Roger Watson, Editor-in-Chief
- There was no ethical permission to conduct the study
- The h-index is a very narrow measure of performance and there are better measures of academic performance
- We named individuals and, in any case, some professors do not need to demonstrate a publication record
Why would we need permission to conduct a bibliometric study which required access to - and reporting of - information that is in the public domain? Databases such as Scopus, Web of Science and Google Scholar exist to provide information about publications, citations and the individuals who contribute to those databases or - in some notable cases - don't. Those who can be found on these databases - and some of those who can't be - are publicly funded individuals whose performance on a key indicator of academic performance cannot possibly be considered private and confidential; sensitive it may be, but that is another issue. If our detractors are concerned about the lack of ethical permission, then I'd welcome this being put to the test, and the routes open to them are the chairs of the ethics committees in our universities, the Committee on Publication Ethics or the publishers of JAN.
I completely agree: the h-index is a very narrow measure of performance. It is precisely defined, and we once again rehearsed its calculation in the editorial. But such precision should not be confused with lack of utility. It may seem very 'deconstructive' to use such a narrow metric, but the more 'constructive' alternatives - none of which has been explained in any detail in the present debate - are likely to rely largely on some other metric or metrics - with plenty of room for debate - or on an element of subjectivity. This is a classic example of the 'uncertainty principle' whereby the more we know about one thing, the less we can know about another; in Heisenberg's case, either the speed or the position of an electron...but not both. With regard to publication metrics and academic performance, we seem to think we know what people have contributed to their field and are happy to exchange generalities about what our colleagues have done - or not done - and, clearly, reputations and careers are built on this. On the other hand, when we select a specific and precise metric, things often look different, and precision seems to upset people; possibly those who do not perform particularly well on that metric. We make no claims about the h-index other than that it is what it is: a measure of citations related to number of publications that is remarkably difficult to skew, whether by publishing more, by increasing total citations or by self-citation. In our view, what's not to like?
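For readers who want the calculation spelled out, here is a minimal sketch - not part of the original editorial - of how an h-index is computed from a list of citation counts. The function name and example figures are illustrative only.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers, each cited at least h times."""
    # Sort citation counts in descending order, then find the last
    # rank at which the count is still at least equal to that rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example (hypothetical figures): five papers cited 10, 8, 5, 4 and 3
# times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Note that adding further little-cited papers, or piling extra citations onto the most-cited paper, leaves the value unchanged - which illustrates the point above about the metric being hard to skew.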
We named people
As indicated, I am delighted that we have elicited such a widespread response to our editorial. I can well imagine that many are happy to 'lob grenades' from the Twittersphere, and I am generally as guilty as anyone of that. Others will take the high road of 'wouldn't grace it with a response'. I also hear that the study is inaccurate; if so, let us know where and we will correct the supplementary material. I would welcome further entries to JAN interactive on the issue and, if a group of detractors wish to mount a constructive defence of the alternative position, then the editorial pages of JAN are open to them.
You can listen to this as a podcast.
Watson R, McDonagh R, Thompson DR (2016) h-indices: an update on the performance of professors of nursing in the UK. Journal of Advanced Nursing. doi: 10.1111/jan.12924