Recently, The Hindu published an article that raised a pertinent question regarding the National Institutional Ranking Framework (NIRF): “Is quantity trumping quality?” As I write from the hallowed halls of one of India’s premier institutions—the Indian Institute of Technology, Madras—I find myself compelled to delve deeper into this issue. Are private institutions genuinely improving, or have they discovered ways to manipulate research metrics?
It is often said that data is the new oil, but just like oil, it needs refinement to be of any use.
The Illusion of Metrics
NIRF rankings have become a vital element of academic evaluation in India, but they are not without their controversies. One significant point frequently overlooked is the potential disconnect between research output and its true impact. To be clear, I am not comparing the fee structures of public institutions like Jadavpur University, which once had annual tuition as low as ₹3,000, with private institutions like VIT that charge significantly more. More funding certainly enables the creation of superior infrastructure. However, the question remains: How is the quality of academic work truly being measured?
This raises the question: Are we merely counting the number of publications, or are we genuinely evaluating their substance? The NIRF leans heavily on metrics such as the number of publications, faculty-student ratios, and other numerical indicators. But does this approach capture the true essence of academic excellence, or are we simply engaging in numerus stultorum—the counting of fools?
A U.S. Parallel: Electoral Misrepresentation
For perspective, let’s consider the demographics of the United States. Population density is highest along the coasts, while the “flyover states” in the center are sparsely populated. Yet under the Electoral College, voting power is not proportional to population: a vote cast in a sparsely populated state such as Wyoming carries more weight per capita than one cast in New York or California, so the fate of the country can be shaped disproportionately by much smaller populations. Similarly, simplistic metrics in university rankings can distort the picture, giving undue weight to institutions that excel in quantity but fall short on quality.
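To put rough numbers on this claim, here is a back-of-the-envelope Python sketch using approximate 2020 census populations and 2024 electoral vote counts; it only illustrates the per-capita imbalance, nothing more.

```python
# Back-of-the-envelope comparison of per-capita electoral weight.
# Populations are approximate 2020 census figures; electoral votes are as of 2024.

states = {
    "Wyoming":    {"population": 576_851,    "electoral_votes": 3},
    "California": {"population": 39_538_223, "electoral_votes": 54},
}

for name, s in states.items():
    per_million = s["electoral_votes"] / s["population"] * 1_000_000
    print(f"{name}: {per_million:.2f} electoral votes per million residents")

# Wyoming: ~5.20, California: ~1.37 -- roughly a 3-4x difference in per-capita weight,
# the kind of distortion a single crude metric can hide.
```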
Just as the U.S. electoral system is criticized for its disproportionate distribution of power, the NIRF ranking system could be subject to similar critiques. The focus on a narrow set of metrics risks ignoring the multidimensional nature of academic success.
The Chen et al. Paper: MIT as a Confidence Marker
I recently read a paper by Chen, Manolios, and Riedewald titled “Why Not Yet: Fixing a Top-k Ranking That Is Not Fair to Individuals.” In it, the authors explore how omitting a top institution, such as MIT, from a ranking of U.S. universities in computer science could undermine confidence in the ranking system. The absence of such a heavyweight would signal flaws in the evaluation process.
Applying this to India, if the IITs suddenly disappeared from the upper echelons of the NIRF rankings, our confidence in the system would be shaken. The rankings themselves might not be technically incorrect, but such an outcome would suggest that the current metrics are inadequate. Chen, Manolios, and Riedewald argue for a more attribute-rich ranking process, where the attributes can range from industry partnerships and global research outreach to international collaborations, offering a more holistic view of an institution’s true value.
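To make the idea of an attribute-rich ranking concrete, here is a minimal Python sketch. The attribute names and weights are hypothetical choices for illustration only; they are neither the method proposed by Chen et al. nor NIRF’s formula. The point is simply that a composite over several normalized attributes can order institutions very differently from a raw count.

```python
# A minimal sketch of an attribute-rich composite score.
# Attribute names and weights are hypothetical, chosen only for illustration.

from typing import Dict

WEIGHTS = {
    "citation_impact": 0.4,            # e.g. field-normalized citations per paper
    "industry_partnerships": 0.2,
    "international_collaboration": 0.2,
    "publication_count": 0.2,          # raw output still counts, but is only one input
}

def composite_score(attributes: Dict[str, float]) -> float:
    """Combine normalized attribute values (each in [0, 1]) into a single score."""
    return sum(WEIGHTS[name] * attributes.get(name, 0.0) for name in WEIGHTS)

# Two fictional institutions: one publishes heavily, the other publishes less
# but scores higher on impact and collaboration.
inst_a = {"citation_impact": 0.3, "industry_partnerships": 0.4,
          "international_collaboration": 0.3, "publication_count": 0.9}
inst_b = {"citation_impact": 0.8, "industry_partnerships": 0.6,
          "international_collaboration": 0.7, "publication_count": 0.5}

print(f"{composite_score(inst_a):.2f}")  # 0.44
print(f"{composite_score(inst_b):.2f}")  # 0.68 -- fewer papers, higher composite score
```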
The Problem with Simplified Metrics
It’s not just about the numbers; it’s about the meaning behind them. For example, publishing many papers in low-impact journals is not necessarily more valuable than publishing a single paper in a prestigious, high-impact journal. Yet NIRF often treats these two scenarios as comparable, favoring quantity over quality. A similar issue exists with U.S. News & World Report rankings, where factors like alumni donations and faculty salaries sometimes overshadow more meaningful indicators like student outcomes and teaching effectiveness.
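As a toy illustration of that comparison, here is a small Python sketch contrasting a raw publication count with an impact-weighted one. The venue weights are invented for this example and are not NIRF’s methodology.

```python
# Toy comparison: raw publication count vs. an impact-weighted count.
# The venue weights below are invented for illustration only.

low_impact_weight = 0.1   # hypothetical weight for a low-impact venue
high_impact_weight = 1.0  # hypothetical weight for a prestigious venue

dept_a = [low_impact_weight] * 20   # 20 papers in low-impact journals
dept_b = [high_impact_weight] * 3   # 3 papers in high-impact journals

print(len(dept_a), len(dept_b))                   # 20 3    -> A "wins" on raw count
print(f"{sum(dept_a):.1f} {sum(dept_b):.1f}")     # 2.0 3.0 -> B wins once impact is weighed
```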
We’ve all heard the adage “There are three kinds of lies: lies, damned lies, and statistics.” The danger of ranking systems lies in their reliance on simplistic metrics that only tell part of the story. A university might boast a high number of graduates, but if those graduates are poorly prepared for the workforce, what does that figure really signify? Similarly, an institution may publish a high number of research papers, but if those papers have minimal impact, can it truly claim to be advancing knowledge?
The Faculty-Student Ratio
As global demand for skilled graduates rises and India accelerates its manufacturing growth in pursuit of a stronger economy, the need to educate an increasing number of students becomes evident. NIRF places significant emphasis on the faculty-student ratio, so demand for highly qualified professors is at an all-time high. However, a critical issue persists: there is a shortage of people pursuing careers in academia.
While engineering remains a popular field of study, drawing large numbers of students, the academic profession is not witnessing the same level of enthusiasm, leading to a growing disparity. This imbalance, where more students are entering the field without a corresponding rise in faculty, could have serious consequences. If the NIRF continues to penalize institutions with low faculty-to-student ratios, it should serve as a wake-up call for both society and the government to incentivize and encourage more individuals to pursue academic careers.
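As a rough illustration of how a ratio-based metric reacts when student intake grows faster than faculty hiring, consider the Python sketch below. The target ratio and scoring rule are hypothetical, not NIRF’s published formula.

```python
# Hypothetical illustration of a ratio-based score when intake outpaces hiring.
# The 1:15 target and the linear scoring rule are invented for this example.

def ratio_score(faculty: int, students: int, target: float = 1 / 15) -> float:
    """Score from 0 to 100; full marks at or above one faculty member per 15 students."""
    return min((faculty / students) / target, 1.0) * 100

print(f"{ratio_score(faculty=200, students=3000):.1f}")  # 100.0 -> 1:15, full marks
print(f"{ratio_score(faculty=200, students=6000):.1f}")  # 50.0  -> intake doubles, score halves
```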
Toward a More Nuanced Ranking System
What we need is a ranking system that reflects the full complexity of academic institutions. NIRF and other ranking bodies must move beyond binary, yes-or-no metrics and allow greater flexibility to account for the intricacies of academic output. More attention should be given to the quality of research, the societal impact of innovations, and the international reputation of faculty and alumni. While I am not suggesting that Western scholarship is a perfect yardstick, identifying reputable international conferences and factoring them into rankings would offer a clearer picture of an institution’s standing.
In a world increasingly driven by data, the challenge lies in ensuring that the data tells the complete story—not just the parts that are easy to quantify.