Six Universities have been hauled up by the Advertising Standards Authority, but it could have been all of them. Almost all Universities claim to be above the lowly business of commerce, yet still willingly contribute to the petty hit parades of the University league table season. It demeans the sector. They search for whatever scraps they can find by selecting data from one ranking table or another. They love to claim they are above the competitive, capitalist, corporate game, but they are by far the worst when it comes to the dog-eat-dog institutional competition that is the rankings. Worst of all, for the people who pay, whether it's taxpayers, parents, or national and international students, the University rankings are largely a con.
1. Bait and switch
The sector loves to take the high moral ground on keeping managerialism out of education, then uses the slimiest form of managerial marketing, ranking tables, to promote its wares. Aimed firmly at parents and students, they bait and switch. The hook is baited with data on research and facilities, then the message is switched to make it look as if this is the teaching experience you'll pay for, when in fact the rankings are built on measures that have little to do with teaching. That is a classic 'bait and switch' con.
2. Teaching ignored
They may SAY they take teaching into account but they don't. They often claim to have 'measures' of teaching, but actually draw their data from proxies, such as staff qualifications and research activity, and use nothing but these indirect measures. The Times rankings are a case in point. They claim that their ranking scores include teaching. In fact, only 30% of the score is based on teaching, and even that 30% uses NO direct metrics. The proxies include student/staff ratios (which are skewed by how much research is done) and, even more absurdly, the ratio of PhDs to BAs. It is, therefore, a self-fulfilling table, where the elite Universities are bound to rise to the top. There is little direct measurement of face-to-face time, lecture attendance or student satisfaction. In some cases it's laughable, as Malcolm Gladwell pointed out, with faculty salary, the level of degrees held by faculty and the proportion of faculty who are full time being taken as proxies for quality of teaching. It's like having a Premier League table based on the performance of the backroom staff and not the real games and players.
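To make the proxy problem concrete, here is a minimal sketch, with invented weights, thresholds and figures (not any publisher's actual formula), of how a 'teaching' score can be assembled entirely from indirect inputs:

```python
# Hypothetical illustration only: a 'teaching' score built entirely from proxies.
# The weights, thresholds and figures below are invented for the sketch.

def proxy_teaching_score(staff_per_student, phd_to_ba_ratio, avg_salary):
    """Combine indirect proxies into a single 'teaching' number out of 100."""
    ratio_score = min(staff_per_student / 0.125, 1.0)   # 1 member of staff per 8 students = full marks
    phd_score = min(phd_to_ba_ratio / 3.0, 1.0)          # more PhDs awarded per BA = 'better teaching'
    salary_score = min(avg_salary / 90_000, 1.0)         # higher faculty pay = 'better teaching'
    return round(100 * (0.4 * ratio_score + 0.3 * phd_score + 0.3 * salary_score), 1)

# A research-heavy institution scores highly without any direct measure of
# contact time, lecture attendance or student satisfaction entering the sum.
print(proxy_teaching_score(staff_per_student=0.12, phd_to_ba_ratio=2.7, avg_salary=85_000))  # ~94
print(proxy_teaching_score(staff_per_student=0.07, phd_to_ba_ratio=0.9, avg_salary=55_000))  # ~50
```

Nothing in that sum tells you whether a single lecture was any good; it only tells you who already looks like an elite, research-heavy institution.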
3. False precision
Up one place in the rankings – yippee! Down two places – time to worry. Yet the idea that these rankings are in any way precise is silly. They're a mish-mash of misleading data, under vague (even misleading) categories, and often watered down with a heavy dose of opinion (expert panels drawn from the top Universities). In any case, they're always changing the criteria for ranking, so year-on-year comparisons are useless. This shows itself in the huge disparities between the different ranking systems. The LSE is 3rd in The Sunday Times rankings but 328th in the US News and World Report rankings, 71st in the QS rankings and 34th in the THE rankings. Other universities, like Manchester and KCL, do badly in British rankings but well in international tables. This gives ample room for cherry picking, but is proof enough that the way the rankings are calculated is seriously flawed. If the rankings were research, they'd be rejected by even the lowliest of journals.
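A toy example shows why the disparities are so large (the institutions, scores and weights below are all invented): feed the same raw data into two different weighting schemes and you get two different orderings, which is roughly what happens when the same institution sits 3rd in one table and 328th in another.

```python
# Hypothetical illustration only: the same raw data ranked under two
# invented weighting schemes produces two different league tables.

institutions = {
    # name: (research, student_satisfaction, reputation), all on a made-up 0-100 scale
    "Old Elite": (95, 60, 98),
    "Teaching Focused": (70, 90, 65),
    "All Rounder": (85, 75, 80),
}

def league_table(weights):
    scores = {name: sum(w * s for w, s in zip(weights, metrics))
              for name, metrics in institutions.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(league_table((0.5, 0.1, 0.4)))  # research/reputation heavy: Old Elite comes first
print(league_table((0.1, 0.7, 0.2)))  # satisfaction heavy: Teaching Focused comes first
```

Neither ordering is 'wrong'; they simply encode different ideological choices about what a University is for.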
4. Apples and oranges
They don’t compare like with like. In Edinburgh, where I come from, we have four Universities: Napier, Heriot-Watt, Edinburgh and Queen Margaret. You couldn’t get four more diverse institutions in terms of what they teach and their history. In 2012, Edinburgh was in the top five for research but came stone-cold last in the teaching survey. That same year, Heriot-Watt came top in Scotland and 4th in the UK on student experience but sat way, way down in the rankings, and more than a third of the Russell Group Universities found themselves in the bottom 40 of 125 institutions on teaching. These comparisons are truly odious.
5. Skews spending
What is sad, even morally wrong, is that they really do influence strategy and spending. Ranking status is often stated explicitly in Universities' goals. In effect, as teaching doesn’t really get measured, except through false proxies, this leads to spending on everything but good teaching – physical facilities, research and so on. This direct causal effect on behaviour also leads to overspending, as it’s a runaway train, where everyone tries to outdo everyone else. There is no incentive to save money and become more efficient, only to spend more. Weirdly, there’s rarely any accounting for student costs in calculating the rankings. Shouldn’t a University that costs a lot less get ranked higher than one that costs more? It would appear that prejudice trumps economics. This is a topsy-turvy world, where being more expensive is an intrinsic good.
6. Gaming the system
It’s not just spending that’s skewed by rankings; they also skew behaviour and priorities. Universities are far from being free from the rat race; they just have some very smart rats. In practice, this means they are good at gaming the system. What are the criteria and weightings for ranking? OK, those are this year’s targets. More facilities? Let’s get them built.
7. Self-fulfilling prophecy
The more you spend, the higher your ranking. So the rich get
richer and the poor get poorer. The separation, in terms of research grants, between the handful at the top and the rest is huge. Naturally, this leads to a
separation of the so-called cream from the so-called milk. In that sense it’s a
deterministic system, where the top remain at the top and the rest scrabble
around for the scraps.
8. Agendas
What’s more, the different tables often have uncomfortable relationships with newspapers. And let’s not imagine that, given the nature of newspaper ownership in this country, they don’t have agendas. The Complete University Guide has had relationships with The Telegraph, The Times and The Independent. They keep falling out. The Sunday Times has its Good University Guide. The Guardian has yet another. These tables sell newspapers to middle-class parents; that’s the real driver.
9. Old boys club
Reputation scores feature in lots of the rankings. You go out and ask people what they think: academics, publishers, employers and so on. Of course, given that most of the people asked are from the highly ranked Universities, there’s an obvious skew in the data. That's shameful, qualitative nonsense.
10. Status anxiety
What is their real effect on parents and students? Nothing but an irrational race. They induce ridiculous amounts of status anxiety. Parents and kids are being encouraged to play a game that is already gamed, and to get stressed over data that encourages distasteful behaviour.
Conclusion
I haven’t even begun to tackle the issue of cheating, being economical with the truth or fiddling around with the submissions. There are examples of straight-up cheating, and as there’s no real quality control, it’s likely to be far more common than reported. In truth, no one really knows what the ideal criteria for ranking should be, as it’s a set of competing ideological choices – accessibility, teaching, research, graduation rates? And with what weightings? That’s why the different rankings show such huge disparities. We need, like Reed College in the US, to refuse to hand over the data. If the game is being gamed, don’t play the game.