People have been producing World University Rankings for 15 years. But the eminent organisations that produce rankings face structural, theoretical and practical challenges which can sometimes make the process look more like a dark art than a science.
Kaycheng Soh’s World University Rankings: Statistical Issues and Possible Remedies faces these challenges head-on and proposes measures to improve the integrity of the project.
After the Academic Ranking of World Universities appeared at Shanghai Jiaotong University, the QS World University Rankings and the Times Higher Education World University Rankings soon followed. These three ranking systems are considered the classics, as they are the forerunners, although no fewer than ten new systems have since entered the arena.
The various ranking systems adopt a common weight-and-sum approach to processing the indicator data. Each system, somewhat arbitrarily, decides on a set of indicators and assigns different weights to them, presumably reflecting their relative importance. This simple (and simplistic) approach accords well with common sense, and, in fact, much of the discussion of world university rankings is conducted at the commonsensical level.
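A minimal sketch of the weight-and-sum approach described above. The indicator names, weights, and scores here are invented purely for illustration and are not taken from any actual ranking system:

```python
# Hypothetical indicators and nominal weights (assumed to sum to 1).
weights = {"teaching": 0.3, "research": 0.4, "citations": 0.3}

# Invented indicator scores for two fictitious universities.
universities = {
    "Univ A": {"teaching": 80.0, "research": 90.0, "citations": 70.0},
    "Univ B": {"teaching": 85.0, "research": 75.0, "citations": 95.0},
}

def overall_score(scores, weights):
    """Weighted sum of indicator scores: the weight-and-sum approach."""
    return sum(weights[k] * scores[k] for k in weights)

# Rank universities by descending overall score.
ranked = sorted(universities,
                key=lambda u: overall_score(universities[u], weights),
                reverse=True)

for u in ranked:
    print(u, round(overall_score(universities[u], weights), 1))
```

Every major ranking system fills in this template differently, which is why the same university can occupy quite different positions across systems.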
However, analyses conducted in recent years have uncovered several problems with the prevalent approach: spurious precision, mutual compensation, weight discrepancy, indicator redundancy, etc., which render the overall scores and rankings suspect in terms of validity. These arise because the systems ignore the fact that world university rankings are a form of social measurement and therefore need to be seen from that perspective.
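The weight discrepancy problem mentioned above can be demonstrated with a small, invented example: two indicators carry equal nominal weights, yet the indicator with the larger spread dominates the total, so its attained weight (approximated here by each indicator's correlation with the total) is far larger. All numbers are hypothetical:

```python
import statistics

# Two invented indicators with equal nominal weights (0.5 each).
ind1 = [50, 51, 49, 50, 52, 48]   # small spread across institutions
ind2 = [90, 10, 70, 30, 80, 20]   # large spread across institutions

# Weight-and-sum totals with equal nominal weights.
totals = [0.5 * a + 0.5 * b for a, b in zip(ind1, ind2)]

def corr(x, y):
    """Pearson correlation, used here as a proxy for attained weight."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Despite identical nominal weights, ind2's larger variance makes it
# correlate almost perfectly with the total, while ind1 barely matters.
print(round(corr(ind1, totals), 2), round(corr(ind2, totals), 2))
```

In other words, the weights a system announces are not the weights the data actually attain, which is one source of the validity problems discussed in the book.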
Moreover, rankings encourage competition and, in the highly competitive world of today, it is natural that institutional attention is focused on the ranking results. By now, the original purpose of world university ranking seems to have been overshadowed, and world university rankings look more like international academic contests, as though they were annual sports meets.
This book collects articles pertaining to the identified measurement and statistical issues of world university rankings and suggests remedies to make ranking results more trustworthy.
- Don’t Read University Rankings Like Reading Football League Tables: Taking a Close Look at the Indicators
- Mirror, Mirror on the Wall: A Closer Look at the Top 10 in University Rankings
- Profiling Universities, Not Only Ranking Them: Maximizing the Information of Predictors
- World University Rankings: Take with a Large Pinch of Salt
- The U21 Rankings of National Higher Education Systems: Re-Analyze to Optimize
- Book Review: U21 Ranking of National Higher Education Systems
- World University Rankings: What is in for Top 10 East Asian Universities?
- Simpson’s Paradox and Confounding Factors in University Rankings: A Demonstration Using QS 2011–2012 Data
- Nominal Versus Attained Weights in Universitas 21 Ranking
- Times Higher Education 100 Under 50 Ranking: Old Wine in a New Bottle?
- Misleading University Rankings: Cause and Cure for Discrepancies Between Nominal and Attained Weights
- Rectifying an Honest Error in World University Rankings: A Solution to the Problem of Indicator Weight Discrepancies
- Multicollinearity and Indicator Redundancy Problem in World University Rankings: An Example Using Times Higher Education World University Ranking 2013–2014 Data
- Problems of Indicator Weights and Multicollinearity in World University Rankings: Comparisons of Three Systems
- What the Overall Doesn’t Tell About World University Rankings: Examples from ARWU, QSWUR, and THEWUR in 2013
- Nearing World-Class: Singapore’s Two Universities in QSWUR 2015/16
- Improved Academic Excellence Leading to Lower Rankings and Vice Versa: Examples of Inconsistency from Three World University Ranking Systems