Written by Tomas Corej
Illustrated by Sylvain Chan
For years, university rankings have been criticised as subjective, inaccurate, and even unscientific. Despite growing doubts about their purpose, they remain a popular tool for employers, the media, and even students and universities themselves.
After Oxford University topped the Times Higher Education ranking earlier this week, its representatives spoke of an “achievement” which reflects “the dedication of our academics, professional services staff and students.”
Its social media team was quick to update Oxford’s bio on X (formerly Twitter) to: “Ranked the best University in the world for a record tenth consecutive year.”
At the same time, the London School of Economics (LSE) celebrated being named the number one university in the United Kingdom by The Times and Sunday Times Good University Guide. Per LSE President Larry Kramer, the result reflects “the dedication and talent of our faculty, staff, students, and global community.”
At first glance, it seems to be a rather bizarre situation. How could Oxford be named the top university in the world in one ranking, while failing to be selected the best university in the UK in another? In fact, Oxford is ranked fourth in the Times and Sunday Times rankings. Simultaneously, LSE only finished in 52nd place in the international Times Higher Education ranking.
What are university rankings?
There are two large groups of academic rankings – the international ones and the national ones.
With regards to the former, there are three main rankings:
- The aforementioned Times Higher Education University Rankings, known as the THE Rankings, in which LSE’s position has deteriorated over the past decade. While just a few years ago it ranked among the top 25 universities in the world, this October it fell to 52nd place.
- The so-called Academic Ranking of World Universities (ARWU), commonly referred to as the Shanghai Ranking, in which LSE landed in the 151st–200th band last year. The ranking has been dominated by universities based in the United States, with Harvard University, Stanford University, and the Massachusetts Institute of Technology (MIT) on top. Two British universities (Oxford and Cambridge) secured a spot in the top 10 last year.
- The rankings produced by the London-based analytics provider Quacquarelli Symonds, known as the QS World University Rankings. LSE secured 56th position this year, with MIT taking the top spot and Imperial College London coming second.
While their methodologies differ, as the Higher Education Policy Institute explained, the international rankings are based primarily on research-related criteria. Among the criteria are indicators such as citations, published papers, and academic reputation, but also employability.
There are also domestic rankings, among which two have attracted significant attention:
- The Times and Sunday Times Good University Guide which has named LSE as the top university in the United Kingdom twice in a row.
- The Guardian University Guide, according to which LSE was placed 4th in the United Kingdom this year.
Among national rankings, there exists “a greater variety of indicators” compared to international rankings. While criteria differ, they usually include student feedback, expenditure per student, and student-to-staff ratio.
So, how important are these rankings?
Chen-Ta Sung, Doctoral Researcher in the Department of Psychological and Behavioural Science at the LSE, claims that rankings “can be useful”, depending on what one is looking for.
At the same time, he does not consider any of the rankings inherently superior to another. “It really depends on which factors matter most to you – and you can then choose the ranking table that best reflects those priorities,” he tells The Beaver.
Importantly, Sung explains that rankings may be important for those who are seeking their first professional job. When he worked in a corporate setting in London and was involved in hiring new graduates, “the university someone attended was one of the factors we considered during the initial screening process”.
“However, when recruiting experienced hires, we placed much greater emphasis on a candidate’s work experience rather than their educational background. In addition, certain subjects, such as STEM, may carry more weight than overall university rankings,” he continues.
This opinion resonates with some students. Yifei, an exchange student from the United States, tells The Beaver he monitored rankings while choosing to spend a year at the LSE. He also considers rankings to be important for his future career prospects. However, he clarifies that he looks “at the subject first and then at the overall ranking”, prioritising departmental performance.
That said, researcher Sung emphasises the perceived usefulness of university rankings “can vary across cultures”. He illustrates that “credentialism” is dominant in some societies such as India and China, which may have unintended consequences such as exacerbating inequality.
“The more prestigious the university, the higher the tuition fees tend to be. Students from affluent families often face less financial pressure to obtain such credentials,” he continues.
“In societies that place high value on university rankings, this dynamic can further disadvantage those from less privileged backgrounds,” Sung concludes.
Defenders of international rankings usually point to four benefits. According to Phil Baty, Chief Global Affairs Officer at Times Higher Education, they:
- Support government policymaking;
- Support university leaders’ institutional strategy;
- Support emerging universities from the Global South;
- Facilitate international cooperation.
Yet whether international or British, some researchers are skeptical of their usefulness.
Among the critics is Berend van der Kolk, Associate Professor in the Accounting Department of Vrije Universiteit Amsterdam, who published a book on the contemporary preoccupation with performance metrics.
“I do not recommend any rankings at the university level,” he tells The Beaver.
The rankings, as Van der Kolk explains, “impose a competition frame” and enable the private sector “to dictate what universities should do and be”. He is convinced that they ultimately lead to what he calls “Indicatorism” – a focus on improving rankings while losing sight of the original goals of universities.
“Rankings come with a feel of objectivity but are highly subjective”
While the methodological differences between different rankings are large, many of them – especially the international ones – emphasise not only quality, but also the quantity of research.
“Unsurprisingly, universities press academics to publish more papers, and citation databases such as Scopus display sharp increases in the yearly number of publications,” Van der Kolk claims.
It is true, he acknowledges, that rankings may reflect consistency in publishing in leading academic journals and receiving outstanding evaluations. He concludes: “Strong connections with local communities are typically absent in rankings, while for universities and communities, it is an important issue.”
This sentiment is echoed by several LSE students. Leo told The Beaver that, according to them, many universities which score high in rankings tend not to be “good at teaching” as they put excessive emphasis on research: “Faculty have so much pressure and competition to get funding.”
Leo candidly remarks that their decision to choose the LSE was ultimately influenced by rankings: they do matter because of “networks and employers’ perceptions”.
Increased criticism, aimed primarily at international rankings, has prompted some universities to act.
Over the past year, several European universities, including Utrecht University and the University of Zurich, and most recently France’s Sorbonne University, announced that they would leave the THE Rankings.
Furthermore, an initiative, More Than Our Rank, was established by universities wishing to demonstrate “a commitment to responsible assessment and to acknowledging a broader and more diverse definition of institutional success”. The LSE was not among the signatories.
Alongside other concerns, there are some who claim that ranking universities hierarchically is wrong per se. For example, Jelena Brankovic, Senior Researcher at the Robert K. Merton Center for Science Studies at the Humboldt-Universität in Berlin, noted in her LSE Impact Blog that “carefully calibrated” methodologies of evaluating quality of universities are there “to convince us that rankings are rooted in logic and quasi-scientific reasoning”.
The Beaver reached out to Brankovic, who did not dismiss the usefulness of rankings altogether. “If students find them useful, then they are useful, even if they are misleading. If some students don’t find them useful, then they are not useful for those students.”
So, do students find them useful?
Stefan never considered rankings when applying to the LSE. “I did consider the LSE’s reputation but I don’t have any interest in specific rankings,” he summarises.
On the other hand, Christine explains that due to her interest in research and academia she found rankings useful. “I got into SOAS as well, but SOAS has a lower ranking so I decided to come to LSE”.
At the same time, she emphasises the importance of other criteria, including networking opportunities and programme length. Christine was also admitted to Sciences Po in Paris, which scores higher in the THE ranking. However, she was ultimately persuaded by the LSE’s focus on academic training and the fact that its postgraduate programme only lasts one year.
To sum up, rankings continue to wield a great deal of influence, leaving many students little choice but to consider them. But at a time when even their defenders admit their shortcomings and many experts caution against their unwanted societal effects, their purpose indeed deserves further scrutiny. Such continued debate is greatly needed at the LSE, too.