Research evaluation based on citation data identifies excellence using quantity and quality proxies as its main orthogonal dimensions. Citation data are readily available from aggregators such as the Web of Science or Scopus in the form of the number of papers P, the number of citations C and the h-index. However, most of the prestigious award-giving bodies, e.g. those for the Nobel Prizes or for the Shanti Swarup Bhatnagar (SSB) Prize in India, rely on rigorous peer evaluation rather than on citation-based performance. While peer-review judgements are holistic but also subjective, citation-based evaluations tend to be reductive and quantitative. In this study, we look at three of the most prestigious peer-review decision-making exercises in India: the SSB Prize, and election to the Fellowships of the Indian National Science Academy, New Delhi and the Indian Academy of Sciences, Bengaluru. By comparing the awardees and successfully elected Fellows against a list of outstanding citation-based top performers in the world, we report a marked field dependence, with citation data predicting from 0% to less than 40% of the peer-review selections depending on the field. This study highlights two distinct dangers: one of compressing performance to numbers in an essentially reductive process, and the other of the possible biases and prejudices involved in what is often a subjective peer-review process. Thus, type I and type II errors cannot be avoided in any peer-review or citation-based evaluation taken in isolation, and vary from as much as 100% to 63%, depending on the field.
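Of the three citation measures mentioned above, P and C are simple counts, while the h-index combines them: it is the largest number h such that h of an author's papers each have at least h citations. A minimal sketch of this standard computation (the function name and the example citation counts are illustrative, not data from this study):

```python
def h_index(citations):
    """Return the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the paper at this rank still has enough citations
            h = rank
        else:
            break           # all later papers have fewer citations than their rank
    return h

# Illustrative example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers have at least 4 citations each)
```

Note that the h-index is bounded by both P and the citation count of the best-cited paper, which is one reason it behaves so differently across fields with different publication and citation norms.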
Keywords
Bibliometrics, Citations Data, Peer-Review Evaluation, Field Dependence, Publications.