The Highs and Lows of Academic Institutional Rankings

Ranking of educational institutions is popular today. Publications such as Outlook, India Today, and Dataquest have been doing it for over a decade, primarily aimed at admissions into bachelor's programmes. Times Higher Education, QS, and Shanghai rank international universities and other academic institutions, with a strong emphasis on their research impact and international presence. It's a given that higher rankings bring greater visibility and bragging rights, with the possible advantages of drawing in the best undergraduate and postgraduate students, high-calibre faculty members, bigger research funds, charitable donations, and all things good. The allure of being ranked among the top global institutions is hard to resist for anyone.

Indian institutions don’t fare well in the global rankings for several reasons, among them a shortfall in high-impact research and in international faculty and students. The Ministry of Human Resource Development (MHRD) released a National Institutional Ranking Framework (NIRF) in 2015, under which educational institutions in the country have been ranked since 2016. These ranks instantly attained high importance, since government funding schemes and institutional autonomy may soon depend on them.

Globally, several reputed academicians have warned against the ill-effects of institutional ranking. It would be no surprise if the spirit of education were compromised by a relentless pursuit of maximizing quantitative parameters to climb the ranks. In the Indian context, however, it can be argued that some evaluation is better than none, and NIRF can serve that role. Here, Prof P J Narayanan, Director, IIIT-Hyderabad, presents his analysis of the NIRF ranking scheme and how it treats different types of institutions.

Ranking diverse institutions on any fixed set of criteria is risky to begin with. No single scheme captures the strengths and weaknesses of all types of institutions, which range from a small institution concentrating on a niche subject area to a large one with hundreds of affiliated colleges. Broadly, three institutional characteristics are important to understanding any ranking system: its size, its subject areas of focus, and its administrative/ownership model.

Size of the Institution
The size of an academic institution, in terms of the number of students and faculty, is an important measure of the overall impact it can have on society. Larger institutions have more students and alumni, who go on to do diverse things in their professional lives, creating high, long-term impact. Traditionally, universities are large institutions with ten thousand or more students pursuing multiple disciplines. Oxford University – ranked number 1 in the world in 2017-18 by Times Higher Education (THE) – reports over 23,000 students on its rolls. Cambridge University, ranked second, reports close to 20,000 students. Caltech, ranked third, however, has only about 2,200 students, majoring primarily in Science and Engineering subjects. THE’s ranking scheme quite obviously normalizes for the size of the institution, recognizing that small, high-quality institutions with narrow focus areas can also be very influential.

India has several highly sought-after institutions that are narrowly focussed, such as the IITs (high Engineering focus and some Science focus), IISc (high Science focus and some Engineering focus), NITs (primarily only Engineering), IISERs (only Science), IIITs (focus only on Information Technology), AIIMS (primarily focussed on medical education), etc.

The NIRF ranking follows a mixed scheme on normalization with respect to institutional size, judging from its published formulae. Several factors do not appear to be normalized to size: for example, total student strength (SS), graduated PhD students (GPHD), earnings from patents (IPL), projects and professional practice (FPPP), and competitiveness (PRCMP) depend on the total numbers of students, PhD students, earnings, and so on, without taking institutional size into account. This favours large institutions, which will post larger numbers on many factors irrespective of the quality of the academics involved.
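The effect of skipping normalization is easy to see with a toy calculation. The numbers below are purely hypothetical, and the per-faculty rate is just one plausible normalization; it is not the NIRF formula itself:

```python
# Illustrative only: hypothetical figures showing how an un-normalized
# count such as graduated PhDs (GPHD) favours the larger institution,
# while a per-faculty rate tells the opposite story.
institutions = {
    "Large University":          {"faculty": 800, "phds_graduated": 160},
    "Small Focussed Institute":  {"faculty": 60,  "phds_graduated": 45},
}

for name, d in institutions.items():
    rate = d["phds_graduated"] / d["faculty"]
    print(f"{name}: raw count = {d['phds_graduated']}, per faculty = {rate:.2f}")
```

On the raw count the large university wins (160 vs 45), but per faculty member the small institute is nearly four times as productive (0.75 vs 0.20) – exactly the distortion the un-normalized factors introduce.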

Subject Areas of Focus

Comparison on a common base is difficult when different institutions focus on different subject areas with diverse characteristics. Science areas require large investments in laboratories and consumables; Engineering fields require moderate investments, Computer Science/IT low ones, and Humanities and Social Sciences the least. On another dimension, Humanities and Social Sciences typically attract large numbers of PhD students in India, and the Sciences attract a good number. Engineering, IT, and Mathematics attract far fewer PhD students.

Along yet another axis, Engineering in general and IT in particular attract lucrative industry job offers for their graduates. The Sciences attract only a few job offers, and the other areas, including Mathematics, attract far fewer, at least at graduation. The Sciences tend to publish more papers on average, Engineering/IT fewer, and the remaining areas far fewer still. The same, therefore, holds for derivative measures such as citation indices.

The use of measures such as publications per faculty member, citations per paper, or funding per student would be justified if every university had a mix of all areas. Narrowly-focussed institutions provide too few common factors to compare against. Engineering institutions will score higher on placements; Science institutions on spending per student and on numbers of papers and citations. The other areas will tend to score poorly on all of the above, but perhaps better on numbers of PhD students.

Several factors used by the NIRF ranking scheme seem to favour certain subject areas over others. Examples include projects and professional practice (FPPP), graduated PhDs (GPHD), competitiveness (PRCMP), financial resource utilization (FRU), and placements (GPHE). Even publications and citations differ greatly by area and need to be differentiated. The comparison may make sense between similar institutions (say, among IITs) but not across different types (for example, even between an IIT and an IIIT).

Institutional Model

India has a few distinct institutional models among its academic institutions. The central institutions (the IITs, IISc, IIMs, Central Universities, etc.) are generously funded and share common admission considerations. The state institutions (with and without affiliation) receive less generous funding and are usually constrained in alternative resource generation. Admission constraints also vary from State to State, but most are required to admit students only or mostly from the State. The remaining – “private” – institutions display considerable variety in their models, from well-established institutions like Banasthali University or BITS Pilani to young ones like Ashoka University or Bennett University, and from institutions with significant endowments, like Shiv Nadar University, to public-in-essence institutions like IIIT-Hyderabad and the Chennai Mathematical Institute.

There are also women-only universities and minority institutions with a unique mix of student populations. Institutional variety is a positive factor for the country and for students. However, comparing institutions across diverse models for a common ranking will not give satisfactory results. The NIRF framework uses the FRU factor as a measure of Financial Resource Utilization per student, with components for capital and recurring expenses. Large government institutions tend to be inefficient in both running and capital expenses for several reasons. The recurring expenses of many such institutions include a large pension burden from the past, which does not benefit students in any way. Rewarding higher expenses can thus end up rewarding inefficiency indirectly.

The regional diversity (RD) score disfavours most state institutions. In fact, only an institution with all foreign students would get full marks in RD, given its current definition! While the ideal of 50% women students and 20% women faculty is laudable, engineering/science institutions and humanities/social science institutions have very different dynamics in this regard. The ESCS score is based on the fraction of socially or economically challenged students, with 50% being the ideal. This is a strange factor, as a majority of institutions are required by law to admit such students. Consider institution A with 50% such students as mandated by law, and institution B with 15% such students due to positive steps taken even when not mandated. Which institution deserves more recognition, A or B?

Other issues

The NIRF scheme rests on a number of questionable assumptions, in addition to the systematic issues listed above. What is the basis for setting 1:1:1 as the ideal ratio of faculty at different experience levels, or 1:3 as the ideal ratio of capital to recurring expenses? Do these favour new institutions or established ones? Besides, there is a lack of clarity on several factors used, such as the definitions of an economically or socially challenged student, a top university, etc.

Is the median salary (GMS) of an institution’s graduates understood correctly? By definition, the median salary is 0 if more than half the students do not get a job when they graduate. Are institutions reporting it correctly, or are they giving the median salary of only those students who got a job through placements? A quick perusal of the data reported by institutions suggests that most are reporting the mean salary of those placed through the institution.
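The difference between the two readings is stark, as a small worked example shows. The class size and salary figures below are hypothetical:

```python
import statistics

# Hypothetical class of 10 graduates (salaries in lakh INR per annum);
# 0 marks an unplaced student. All numbers are illustrative only.
placed = [6, 7, 8, 9]      # 4 students placed
unplaced = [0] * 6         # 6 students unplaced

# Median over the whole class: with more than half unplaced, it is 0.
print(statistics.median(unplaced + placed))   # 0

# Median over placed students only, which is what many institutions
# appear to report instead.
print(statistics.median(placed))              # 7.5
```

The same institution reports a GMS of 0 under the strict definition but 7.5 lakh if only placed students are counted, so the two interpretations cannot be compared on a common ranking.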


Ranking institutions does have value in India, even if it is an imprecise science. However, comparing the diverse institutions we have today on a common base may not be meaningful. As argued above, no single evaluation criterion will fit all kinds of institutions in the country.

All measures need to be normalized to institutional size, as it does not make sense to compare the numbers of a large, multidisciplinary university with those of an institution in a niche area. The THE scheme appears to normalize every term to the number of students or the number of academics. It may also be a good idea to rank large (say, more than 5,000 students), medium (between 2,000 and 5,000 students), and small institutions separately.

The categories NIRF uses today – namely, Engineering, Management, Pharmacy, and Universities – are insufficient for the comparisons the ranking scheme is proposed to be used for. Institutions predominantly focussed on IT, Social Sciences, or Mathematics could be ranked against others in the same category for better inter-institution comparison. Another option is to have subject-wise rankings for each institution, as done by the QS World University Rankings. This involves a lot more effort, but ensures greater relevance of the ranks to inter-institutional comparison.


