QS World University Rankings

QS World University Rankings are annual university rankings published by the British company Quacquarelli Symonds. The publisher originally released its rankings in partnership with Times Higher Education from 2004 to 2009 as the THE-QS World University Rankings. That collaboration ended in 2010: QS resumed publishing under the pre-existing methodology, while a new partnership between THE and Thomson Reuters produced the Times Higher Education World University Rankings. Today, the QS rankings comprise both world and regional league tables, which are independent of and different from each other owing to differences in the criteria and weightings used to generate them. The publication is one of the three most influential and widely observed international university rankings, alongside the Times Higher Education World University Rankings and the Academic Ranking of World Universities.

History
The need for an international ranking of universities was highlighted in December 2003 in Richard
Lambert’s review of university-industry collaboration in Britain for HM Treasury,
the finance ministry of the United Kingdom. Amongst its recommendations were world university
rankings, which Lambert said would help the UK to gauge the global standing of its universities. The idea for the rankings was credited in
Ben Wildavsky’s book, The Great Brain Race: How Global Universities are Reshaping the
World, to then-editor of Times Higher Education, John O’Leary. THE chose to partner with educational and
careers advice company Quacquarelli Symonds to supply the data, appointing Martin Ince,
formerly deputy editor and later a contractor to THE, to manage the project. Between 2004 and 2009, QS produced the rankings
in partnership with THE. In 2009, THE announced they would produce
their own rankings, the Times Higher Education World University Rankings, in partnership
with Thomson Reuters. THE cited weaknesses in the methodology of the original rankings, as well as a perceived favoritism of the existing methodology for science over the humanities, among its key reasons for the decision to split with QS. QS retained the intellectual property in the
Rankings and the methodology used to compile them and continues to produce the rankings,
now called the QS World University Rankings. THE created a new methodology with Thomson
Reuters, published as the Times Higher Education World University Rankings in September 2010.

World rankings

Methodology

QS publishes the rankings results in key media
around the world, including US News & World Report in the United States and Chosun Ilbo
in Korea. The first rankings produced by QS independently
of THE, and using QS’s consistent and original methodology, were released on September 8,
2010, with the second appearing on September 6, 2011. QS tried to design its rankings to look at
a broad range of university activity.

Academic peer review
The most controversial part of the QS World University Rankings is their use of an opinion
survey referred to as the Academic Peer Review. Using a combination of purchased mailing lists, applications, and suggestions, this survey asks active academicians across the world to identify the top universities in the fields they know best. QS has published the job titles and geographical
distribution of the participants. The 2011 rankings made use of responses from
33,744 people from over 140 nations in their Academic Peer Review, including votes from
the previous two years rolled forward provided there was no more recent information available
from the same individual. Participants can nominate up to 30 universities
but are not able to vote for their own. They tend to nominate a median of about 20,
which means that this survey includes over 500,000 data points (roughly 33,744 respondents times 20 nominations each is about 675,000). In 2004, when the rankings first appeared,
academic peer review accounted for half of a university’s possible score. In 2005, its share was cut to 40 per cent
because of the introduction of the Recruiter Review.
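To make the vote-handling rules above concrete, here is a minimal sketch of the carry-forward logic: each respondent's most recent response within the survey window wins, and older votes are reused only when no newer response from the same individual exists. The data layout, names, and the three-year window are illustrative assumptions, not QS's published implementation.

```python
from collections import Counter

# Hypothetical survey records: (respondent_id, year, nominated universities).
# The record format and the three-year window are assumptions for illustration.
responses = [
    ("r1", 2009, ["MIT", "Cambridge", "Harvard"]),
    ("r1", 2011, ["MIT", "Cambridge"]),   # supersedes r1's 2009 response
    ("r2", 2010, ["MIT", "Oxford"]),      # no newer response, so rolled forward
]

def tally(responses, ranking_year, window=3):
    """Keep each respondent's most recent response inside the window,
    then count nominations per university."""
    latest = {}
    for rid, year, votes in responses:
        if ranking_year - window < year <= ranking_year:
            if rid not in latest or year > latest[rid][0]:
                latest[rid] = (year, votes)
    counts = Counter()
    for _, votes in latest.values():
        counts.update(votes)
    return counts

print(tally(responses, ranking_year=2011))
# Counter({'MIT': 2, 'Cambridge': 1, 'Oxford': 1})
```

A fuller version would also discard any vote a respondent casts for their own institution, per the rule described above.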
Faculty student ratio

This indicator accounts for 20 per cent of a university's possible score in the rankings. It is a classic measure used in various ranking
systems as a surrogate for teaching commitment, but QS has admitted that it is less than satisfactory.

Citations per faculty
Citations of published research are among the most widely used inputs to national and
global university rankings. The QS World University Rankings used citations
data from Thomson from 2004 to 2007 and has since used data from Scopus, part of Elsevier. The total number of citations over a five-year period is divided by the number of academicians in a university to yield the score for this measure, which accounts for 20 per cent of a university's possible score in the Rankings. QS has explained that it uses this approach, rather than the citations per paper preferred by other systems, because it reduces the effect of biomedical science on the overall picture – biomedicine has a ferocious "publish or perish" culture. Instead, QS attempts to measure the density of research-active staff at each institution. But issues still remain about the use of citations
in ranking systems, especially the fact that the arts and humanities generate comparatively
few citations. QS has conceded the presence of some data
collection errors regarding citations per faculty in previous years’ rankings. One interesting issue is the difference between
the Scopus and Thomson Reuters databases. For major world universities, the two systems
capture more or less the same publications and citations. For less mainstream institutions, Scopus has
more non-English language and smaller-circulation journals in its database. But as the papers there are less heavily cited,
this can also mean fewer citations per paper for the universities that publish in them. This area has been criticized for undermining
universities that do not use English as their primary language. Citations of, and publications in, languages other than English are harder to find, and because English is the most internationalized language, it is the most commonly used when citing.
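Read literally, the citations-per-faculty indicator described above is a plain ratio: total citations over five years divided by faculty headcount. The sketch below contrasts it with the citations-per-paper measure preferred by other systems; all figures are invented for illustration.

```python
# Hypothetical five-year publication data; all values are invented.
universities = {
    "Alpha U": {"citations": 120_000, "papers": 10_000, "faculty": 2_400},
    "Beta U":  {"citations": 45_000,  "papers": 1_500,  "faculty": 1_500},
}

for name, u in universities.items():
    per_paper = u["citations"] / u["papers"]      # favoured by other ranking systems
    per_faculty = u["citations"] / u["faculty"]   # the QS indicator (20% of the score)
    print(f"{name}: {per_paper:.1f} per paper, {per_faculty:.1f} per faculty")

# Alpha U: 12.0 per paper, 50.0 per faculty
# Beta U: 30.0 per paper, 30.0 per faculty
```

The two ratios can order the same institutions differently: a university with many research-active staff publishing heavily can score well per faculty even when its per-paper citation rate is modest.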
Recruiter review

This part of the ranking is obtained by a similar method to the Academic Peer Review,
except that it samples recruiters who hire graduates on a global or significant national
scale. The numbers are smaller – 16,875 responses
from over 130 countries in the 2011 Rankings – and are used to produce 10 per cent of
any university’s possible score. This survey was introduced in 2005 in the
belief that employers track graduate quality, making this a barometer of teaching quality,
a famously problematic thing to measure. University standing here is of particular interest to potential students.

International orientation
The final ten per cent of a university’s possible score is derived from measures intended
to capture their internationalism: five percent from their percentage of international students,
and another five percent from their percentage of international staff. This is of interest partly because it shows
whether a university is putting effort into being global, but also because it tells us
whether it is taken seriously enough by students and academics around the world for them to
want to be there.

Aggregation
The data for each indicator are aggregated according to their Z scores, an indicator of how far removed any institution is from the average. Between 2004 and 2007 a different system was
used whereby the top university for any measure was scaled as 100 and the others received
a score reflecting their comparative performance. According to QS, this method was dropped because
it gives too much weight to some exceptional outliers, such as the very high faculty/student
ratio of the California Institute of Technology. In 2006, the last year before the Z score
system was introduced, Caltech was top of the citations per faculty score, receiving
100 on this indicator, because of its highly research and science-oriented approach. The next two institutions on this measure,
Harvard and Stanford, each scored 55. In other words, 45 per cent of the possible
difference between all the world’s universities was between the top university and the next
one on the list, leaving every other university on Earth to fight over the remaining 55 per
cent. Likewise in 2005, Harvard was the top university
and MIT was second with 86.9, so that 13 per cent of the total difference between all the
world’s universities was between first and second place. In 2011, the University of Cambridge was top
and the second institution, Harvard, got 99.34. So the Z score system allows the full range
of available difference to be used in a more informative way.
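Putting the pieces together, the weightings quoted in the sections above (40 per cent academic review, 20 per cent faculty/student ratio, 20 per cent citations per faculty, 10 per cent recruiter review, 10 per cent international orientation) can be combined with Z-score standardization as follows. This is a minimal sketch with invented indicator values; how QS rescales the weighted total onto its published 0-100 scale is not described here, so the sketch stops at the weighted sum of Z scores.

```python
from statistics import mean, pstdev

# Indicator weights as quoted in the text (2005 onwards).
WEIGHTS = {
    "academic_review": 0.40,
    "faculty_student": 0.20,
    "citations_per_faculty": 0.20,
    "recruiter_review": 0.10,
    "international": 0.10,  # 5% international students + 5% international staff
}

# Invented raw indicator values for three hypothetical universities.
raw = {
    "Alpha U": {"academic_review": 90, "faculty_student": 12,
                "citations_per_faculty": 50, "recruiter_review": 80,
                "international": 30},
    "Beta U":  {"academic_review": 70, "faculty_student": 8,
                "citations_per_faculty": 30, "recruiter_review": 85,
                "international": 45},
    "Gamma U": {"academic_review": 60, "faculty_student": 15,
                "citations_per_faculty": 20, "recruiter_review": 60,
                "international": 20},
}

def z_scores(values):
    """Distance of each value from the mean, in standard deviations."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

names = list(raw)
totals = dict.fromkeys(names, 0.0)
for indicator, weight in WEIGHTS.items():
    for name, z in zip(names, z_scores([raw[n][indicator] for n in names])):
        totals[name] += weight * z

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.3f}")
```

The design point, per the text, is that standardizing each indicator against the whole field keeps one extreme outlier, like Caltech's faculty/student ratio, from compressing every other university's scores into a narrow band, which is what the pre-2007 top-scaled-to-100 method suffered from.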
Overall rankings

For the rankings before 2010, see the articles about results of the THE-QS World University
Rankings:

THE–QS World University Rankings, 2004
THE–QS World University Rankings, 2005
THE–QS World University Rankings, 2006
THE–QS World University Rankings, 2007
THE–QS World University Rankings, 2008
THE–QS World University Rankings, 2009

Rankings by faculty and subject
QS also ranks universities by academic discipline organized into 5 faculties, namely Arts & Humanities,
Engineering & Technology, Life Sciences & Medicine, Natural Sciences, and Social Sciences & Management. These annual rankings are drawn up on the basis of academic opinion, recruiter opinion and citations.

QS Top 50 under 50
QS also releases the QS Top 50 under 50 annually, ranking universities that have been established for no more than 50 years. This league table is based on their position in the QS World University Rankings of the previous year.

Regional rankings
QS Asian University Rankings

In 2009, QS launched the QS Asian University
Rankings or QS University Rankings: Asia in partnership with The Chosun Ilbo newspaper
in Korea to rank universities in Asia independently. These rankings use some of the same criteria
as the world rankings, but with changed weightings and new criteria. One addition is the criterion of incoming
and outgoing exchange students. Accordingly, the QS World University Rankings
and the QS Asian University Rankings released in the same academic year are different from
each other. For example, the University of Hong Kong, ranked 22nd and 23rd worldwide, was regarded as the best Asian institution by the QS World University Rankings, while the Hong Kong University of Science and Technology simultaneously topped the QS Asian University Rankings.

QS Latin American University Rankings
The QS Latin American University Rankings or QS University Rankings: Latin America were
launched in 2011. They use academic opinion, employer opinion,
publications per faculty member, citations per paper, academic staff with a PhD, faculty/student
ratio and web visibility as measures.

QS BRICS University Rankings
QS collaborated with a Russian news agency to launch its third regional ranking, covering the BRICS countries, known as the QS University Rankings: BRICS. This ranking adopts eight indicators, which are
derived from but different in weightings to those of the world rankings, to select the
top 100 higher learning institutions in these regions. The BRICS ranking takes only mainland China's universities into account, excluding institutions elsewhere in Greater China, such as those in Hong Kong and Taiwan.

QS Stars
QS also offers universities a way of seeing their own strengths and weaknesses in depth. Called QS Stars, this service is separate
from the QS World University Rankings. It involves a detailed look at a range of
functions which mark out a modern university. Universities can get from one star to five,
or Five Star Plus for the truly exceptional. QS Stars ratings are derived from scores on
eight criteria. They are:

● Research Quality
● Teaching Quality
● Graduate Employability
● University Infrastructure
● Internationalisation
● Innovation and knowledge transfer
● Third mission activity, measuring areas of social and civic engagement
● Special criteria for specific subjects

Stars is an evaluation system, not a ranking. About 100 institutions had opted for the Stars
evaluation as of early 2013. In 2012, fees to participate in this program
were $9,850 for the initial audit plus an annual license fee of $6,850.

Commentary
Several universities in the UK and the Asia-Pacific region have commented on the rankings positively. Vice-Chancellor of New Zealand’s Massey University,
Professor Judith Kinnear, says that the Times Higher Education-QS ranking is a “wonderful
external acknowledgement of several University attributes, including the quality of its research,
research training, teaching and employability.” She said the rankings are a true measure of
a university’s ability to fly high internationally: “The Times Higher Education ranking provides
a rather more sophisticated, robust and well-rounded measure of international
and national ranking than either New Zealand’s Performance Based Research Fund measure or
the Shanghai rankings.” In September 2012 the British newspaper The
Independent described the QS World University Rankings as being “widely recognised throughout
higher education as the most trusted international tables”. Martin Ince, chair of the Advisory Board for
the Rankings, points out that their volatility has been reduced since 2007 by the introduction
of the Z-score calculation method and that over time, the quality of QS’s data gathering
has improved to reduce anomalies. In addition, the academic and employer reviews
are now so big that even modestly ranked universities receive a statistically valid number of votes. QS has published extensive data on who the
respondents are, where they are, and the subjects and industries to which the academicians and
employers respectively belong.

General criticisms
Many are concerned with the use or misuse of survey data. Since the split from Times Higher Education,
further concerns about the methodology QS uses for its rankings have been brought up
by several experts. Simon Marginson, professor of higher education
at the University of Melbourne and a member of the THE editorial board, in the article “Improving
Latin American universities’ global ranking” for University World News on 10 June 2012,
said: “I will not discuss the QS ranking because the methodology is not sufficiently robust
to provide data valid as social science.” In an article for the New Statesman entitled
“The QS World University Rankings are a load of old baloney”, David Blanchflower, a leading
labour economist, said: “This ranking is complete rubbish and nobody should place any credence
in it. The results are based on an entirely flawed
methodology that underweights the quality of research and overweights fluff… The QS is a flawed index and should be ignored.” In an article titled The Globalisation of
College and University Rankings and appearing in the January/February 2012 issue of Change
magazine, Philip Altbach, professor of higher education at Boston College and also a member
of the THE editorial board, said: “The QS World University Rankings are the most problematical. From the beginning, the QS has relied on reputational
indicators for half of its analysis … it probably accounts for the significant variability
in the QS rankings over the years. In addition, QS queries employers, introducing
even more variability and unreliability into the mix. Whether the QS rankings should be taken seriously
by the higher education community is questionable.” The QS World University Rankings have been
criticised by many for placing too much emphasis on peer review, which receives 40 percent
of the overall score. Some people have expressed concern about the
manner in which the peer review has been carried out. In a report, Peter Wills from the University
of Auckland, New Zealand wrote of the Times Higher Education-QS World University Rankings: But we note also that this survey establishes
its rankings by appealing to university staff, even offering financial enticements to participate. Staff are likely to feel it is in their greatest
interest to rank their own institution more highly than others. This means the results of the survey and any
apparent change in ranking are highly questionable, and that a high ranking has no real intrinsic
value in any case. We are vehemently opposed to the evaluation
of the University according to the outcome of such PR competitions. QS points out that no survey participant,
academic or employer, has been offered a financial incentive to respondents. And academics cannot vote for their own institution. THES-QS introduced several changes in methodology
in 2007 which were aimed at addressing these criticisms, the ranking has continued to attract
criticisms. In an article in the peer-reviewed BMC Medicine
authored by several scientists from the US and Greece, it was pointed out: If properly performed, most scientists would
consider peer review to have very good construct validity; many may even consider it the gold
standard for appraising excellence. However, even peers need some standardized
input data to peer review. The Times simply asks each expert to list
the 30 universities they regard as top institutions of their area without offering input data
on any performance indicators. Research products may occasionally be more
visible to outsiders, but it is unlikely that any expert possesses a global view of the
inner workings of teaching at institutions worldwide. Moreover, the expert selection process of
The Times is entirely unclear. The survey response rate among the selected
experts was only