Construct Validity Maps and the NIRF 2019 Ranking of Colleges
In this study, we prepare construct validity maps from the National Institutional Ranking Framework (NIRF) 2019 data for the top 100 colleges in India. The higher education system in India comprises about 52,000 units of assessment, ranging from universities and premier institutes of technology to colleges and stand-alone institutions, and many of these participate in the NIRF exercise. Tamil Nadu, Delhi and Kerala together account for a disproportionate 82% share of the top-ranking colleges that participated in the 2019 exercise. The NIRF score is computed from five broad parameters, one of which is a peer-review-based perception score for participating institutions. Using the teaching, learning and resources parameter as a proxy for teaching and learning resources input, and the research and professional practices and graduation outcomes parameters as proxies for teaching and research outputs or outcomes, we also compute a quality or excellence proxy and, from this, a second-order X-score. The three scores (NIRF, perception and X) are then used in the context of construct validity to construct two-dimensional maps that show how the top colleges are placed with respect to one another. A quantitative estimate based on Peirce's measure of predictive success determines whether the use of one construct measure to predict another is acceptable. In terms of the construct validity paradigm, we are able to recognize possible biases in the peer-review perception scores, and we recommend that the X-score, which is based on an input–output model, may give a better representation of reality.
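The two quantitative ingredients named above can be sketched in a few lines. Peirce's (1884) measure of predictive success is the hit rate minus the false-alarm rate of a binary prediction. The X-score sketch below is only one plausible reading of a "second-order" score built from a quality proxy, in the spirit of the exergy-style indicators of the cited input–output papers: quality is taken as output per unit input, and X as quality times output. The function names, the 2×2 counts, and this particular quality definition are illustrative assumptions, not the authors' exact computation from the NIRF parameters.

```python
def peirce_measure(tp: int, fp: int, fn: int, tn: int) -> float:
    """Peirce's measure of predictive success for a 2x2 table.

    tp: predicted high, actually high; fp: predicted high, actually low;
    fn: predicted low, actually high;  tn: predicted low, actually low.
    Equals hit rate minus false-alarm rate; 1 is perfect prediction,
    0 is no better than chance.
    """
    return tp / (tp + fn) - fp / (fp + tn)


def x_score(output: float, input_: float) -> float:
    """Hypothetical second-order X-score from an input-output model.

    Quality is proxied as output per unit input; X is quality times
    output (an exergy-style, second-order quantity). Illustrative only.
    """
    quality = output / input_
    return quality * output


# Example: classify colleges as high/low on one score and test how well
# that predicts high/low on another score (counts are made up).
print(peirce_measure(tp=40, fp=10, fn=10, tn=40))  # 0.6
```

A Peirce measure near zero would indicate that one construct (say, the perception score) has essentially no power to predict another (say, the X-score), which is how the maps are assessed quantitatively.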
Keywords
Bibliometrics, Construct Validity, Institutional Ranking, Research Evaluation.
References
- All India survey on higher education (2016–17); http://aishe.nic.in/aishe/viewDocument.action?documentId=239 (accessed on 12 April 2019).
- Prathap, G., Danger of a single score: NIRF rankings of colleges. Curr. Sci., 2017, 113(4), 550–553.
- Savithri, S. and Prathap, G., Indian and Chinese higher education institutions compared using an end-to-end evaluation. Curr. Sci., 2015, 108(10), 1922–1926.
- Peirce, C. S., The numerical measure of the success of predictions. Science, 1884, 4(93), 453–454.
- Bornmann, L., Tekles, A. and Leydesdorff, L., How well does I3 perform for impact measurement compared to other bibliometric indicators? The convergent validity of several (field-normalized) indicators. Scientometrics, 2019; https://doi.org/10.1007/s11192-019-03071-6.
- Prathap, G., Making scientometric and econometric sense out of NIRF 2017 data. Curr. Sci., 2017, 113(7), 1420–1423.
- Prathap, G., Totalized input–output assessment of research productivity of nations using multi-dimensional input and output. Scientometrics, 2018, 115(1), 577–583.
- Prathap, G., The Energy–Exergy–Entropy (or EEE) sequences in bibliometric assessment. Scientometrics, 2011, 87(3), 515–524.
- Cronbach, L. J. and Meehl, P. E., Construct validity in psychological tests. Psychol. Bull., 1955, 52(4), 281–302; doi:10.1037/h0040957.
- Cook, T. D. and Campbell, D. T., Quasi-Experimentation: Design & Analysis Issues in Field Settings, Houghton Mifflin, Boston, USA, 1979.
- Bornmann, L. and Daniel, H. D., Convergent validation of peer review decisions using the h index: extent of and reasons for type I and type II errors. J. Informetr., 2007, 1, 204–213.
- Seglen, P. O., Why the impact factor of journals should not be used for evaluating research. Br. Med. J., 1997, 314, 498–502.
- Smith, S. D., Is an article in a top journal a top article? Financ. Manage., 2004, 133–149.
- Prathap, G., Mini, S. and Nishy, P., Does high impact factor successfully predict future citations? An analysis using Peirce’s measure. Scientometrics, 2016, 108(3), 1043–1047.