Understanding world university rankings

    Sarwar J. Minar, Senior Officer, International Programs and Relations, IUB

    University rankings have become very popular nowadays. They present an overview of the performance of universities worldwide and reveal strategic information about them that is of invaluable significance. Yet although the popularity of university rankings has increased manifold, what the rankings actually contain is not that well known.
    University ranking is a systematic method of ordering universities based on a structured evaluation of their performance in various areas, including, but not limited to, teaching, research, the number, composition, and ratio of faculty and students, student employability, and internationalization. Ranking authorities usually prepare a 100-point ranking table to compare universities’ performance. A world university ranking covers universities worldwide.
    According to Forbes, five university ranking systems take a global approach: the QS World University Rankings, the Times Higher Education World University Rankings, the Academic Ranking of World Universities, the Ranking Web of Universities, and the CWTS Leiden Ranking. As the Leiden Ranking is comparatively the least known and used, it will not be discussed here.
    First, the Academic Ranking of World Universities (ARWU), popularly known as the Shanghai Ranking, was first published by the Center for World-Class Universities of Shanghai Jiao Tong University, China, in 2003. Since 2009, ARWU has been published by the Shanghai Ranking Consultancy. ARWU ranks more than 1,200 universities worldwide every year, and the best 500 are published. The ARWU 100-point ranking table used in 2016 comprised: Quality of Education (10%), Quality of Faculty (40%), Research Output (40%), and Per Capita Performance (10%).
    ‘Quality of Education’ is measured by counting all alumni winning Nobel Prizes and Fields Medals. ‘Quality of Faculty’ is measured using two indicators (20% each): a. staff of an institution winning Nobel Prizes and Fields Medals, and b. highly cited researchers in 21 broad subject categories (selected by Thomson Reuters). ‘Research Output’ is likewise measured using two indicators (20% each): a. papers published in Nature and Science, and b. papers indexed in the science and social science citation indexes (articles from the last five years only). ‘Per Capita Performance’ is measured as the weighted scores of the above five indicators divided by the number of full-time equivalent academic staff.
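    The weighted tables described here all combine indicator scores into a single 100-point composite. As a minimal sketch of that arithmetic, assuming each indicator has already been scored on a 0–100 scale (the indicator names and sample scores below are hypothetical, not official ARWU data), the 2016 ARWU weights can be applied like this:

```python
# Illustrative sketch only (not an official ARWU tool): combining
# indicator scores into a 100-point composite using the 2016 ARWU weights.

ARWU_WEIGHTS = {
    "quality_of_education": 0.10,    # alumni winning Nobel Prizes / Fields Medals
    "quality_of_faculty": 0.40,      # staff awards (20%) + highly cited researchers (20%)
    "research_output": 0.40,         # Nature/Science papers (20%) + indexed papers (20%)
    "per_capita_performance": 0.10,  # weighted scores / full-time equivalent staff
}

def composite_score(indicator_scores):
    """Weighted sum of indicator scores, each on a 0-100 scale."""
    return sum(ARWU_WEIGHTS[name] * score
               for name, score in indicator_scores.items())

# Hypothetical university scoring 80, 60, 70, and 50 on the four indicators:
sample = {
    "quality_of_education": 80.0,
    "quality_of_faculty": 60.0,
    "research_output": 70.0,
    "per_capita_performance": 50.0,
}
print(composite_score(sample))  # 0.1*80 + 0.4*60 + 0.4*70 + 0.1*50 = 65.0
```

    The same weighted-sum mechanics apply to the QS, THE, and Webometrics tables below; only the indicators and weights change.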
    Second, the QS World University Rankings is the one that Quacquarelli Symonds (QS) began publishing after parting ways with the joint THE–QS World University Rankings. QS ranked 959 universities across 84 countries in 2017. The QS 100-point ranking table contains: Academic Reputation (40%), Employer Reputation (10%), Faculty–Student Ratio (20%), Citations per Faculty (20%), and Proportion of International Faculty and Students (10%).
    Third, the Times Higher Education (THE) World University Rankings is the one that THE, a London-based magazine, has published annually since 2010. After parting ways with the joint THE–QS Rankings, THE designed a new methodology and partnered with Thomson Reuters for ranking data. THE ranked the top 1,102 universities in the world in 2018. The THE 100-point ranking table includes: Teaching (the learning environment, 30%), Research (volume, income, and reputation, 30%), Citations (research influence, 30%), International Outlook (staff, students, and research, 7.5%), and Industry Income (2.5%).
    THE measures ‘Teaching’ (an institution’s commitment to nurturing its next generation of academics) with indicators such as a reputation survey (15%), staff-to-student ratio (4.5%), doctorate-to-bachelor’s ratio (2.25%), doctorates-awarded-to-academic-staff ratio (6%), and institutional income (2.25%). It measures ‘Research’ with a reputation survey (18%), research income (6%), and research productivity (6%), and ‘International Outlook’ with the international-to-domestic student ratio (2.5%), the international-to-domestic staff ratio (2.5%), and international collaboration (2.5%).
    Fourth, the Ranking Web of Universities, also known as the Webometrics Ranking of World Universities, has been published by the Cybermetrics Lab (Spanish National Research Council, Madrid, Spain), sponsored by the European Commission, since 2004; since 2006 it has been updated twice a year (January and July). The Cybermetrics Lab measures the performance of universities from all over the world based on their web presence and impact. In July 2017, Webometrics ranked over 27,000 universities worldwide. The Webometrics 100-point ranking table (July 2017) contains: Presence (5%), Visibility (50%), Transparency or Openness (10%), and Excellence (35%).
    The question is: which rankings should we rely on? Each ranking has strengths of its own, and all are reliable and useful within the limits of their indicators, weights, and methodologies. Universities, students, and other agencies may therefore explore which rankings are more relevant for them than the others.
    However, the rankings also attract many criticisms. Some of the main ones include subject-specific bias (some subjects enjoy a comparative advantage in citations and research impact over others), source-specific bias (sole reliance on bibliometric data), language-specific publication bias (considering only English-language publications and journals), university size- and age-specific bias (unfair advantages for larger and older universities), award bias (counting only certain types of awards), and top-university listing bias (listing only the top performers).
    To improve the rankings, the ranking authorities should ensure academically fair and globally inclusive comparison, and focus on measuring the basic, important, and significant factors with a balanced set of indicators. They should incorporate insights from education leaders around the world, ensuring geographical representation. They should counter biases and unfair advantages, and triangulate their data by drawing on multiple sources. Encouragingly, the ranking authorities do update their methodologies every year to make them more rigorous and effective, which is cause for optimism.
