The results are in, and the UK is seeing the first increase in top A-level grades in six years. At the same time, 13 subjects have seen a drop in the highest grades possible, and more university places are sitting empty than in previous years.
It’s a mixed bag for the first cohort of students to experience what UCAS called “an unprecedented level of qualification reform”, after former education secretary Michael Gove’s changes to the A-level system were confirmed.
So what has changed? In a nutshell: AS-levels no longer contribute to a final A-level grade (as a result, 13 subjects are graded solely on final end-of-year exams, with no coursework); no other course has more than 20 per cent coursework; and January exams have been scrapped, meaning anyone who doesn’t attain their desired grade must wait a year to resit.
But why were the changes made in the first place, and were those decisions based on methodical, impartial evidence?
Many are familiar with Michael Gove’s oft-repeated claims that exams had been getting easier and that a return to a supposed golden age of 1950s learning was needed. The changes, he said, would “address the pernicious damage caused by grade inflation and dumbing down, which have undermined students’ achievements for far too long”.
It’s true that the number of people achieving top grades rose steadily for two decades. This followed the UK’s move from a quota system – in which, for instance, a fixed ten per cent of candidates were awarded an A grade – to a criterion-referenced system largely driven by examiners’ judgments. Over the same period, the number of people taking A-levels steadily increased, so there is no definitive evidence that exams had been getting “easier”.
The Office of Qualifications and Examinations Regulation (Ofqual), a non-ministerial government department, was tasked with investigating the need for reform, and in 2012 its consultation found that A-levels were, in fact, “fit for purpose” – the very question the consultation set out to answer – but that a few improvements could be made, including a reduction in resits and modularisation, and better development of problem-solving and analysis skills. The government also points to research carried out by Durham University and Cambridge Assessment suggesting that “repeated opportunities for students to resit exams have risked a form of grade inflation”.
The Ofqual report is probably one of the main pieces of research the government based its decisions on, compiled from 71 face-to-face interviews with higher education staff, ten discussion groups with A-level teachers and 25 phone interviews with employers. I say probably, because there is no categorical list of evidence provided by the government on the matter, nor was a spokesperson willing to provide one.
In 2014, researchers at the University of Bristol scrutinised one piece of government evidence published by then schools minister David Laws. The government claimed that GCSEs were just as good a predictor of university degree performance as AS-levels. Bristol’s School of Geographical Sciences complained the report was a compilation of “missing data, sample bias and poor research design”. The team concluded that 18.5 per cent of students who did better at AS-levels than at GCSEs might not have been offered a university place had selection been based solely on their GCSE performance.
The team argued that “policy based on such evidence seems backward rather than forward-looking, whatever the ‘intellectual’ arguments for abolishing AS-levels and returning to the pre-1990s A-level model”. The Department for Education’s response at the time: “These [A-level] reforms are based on a range of careful analysis and evidence based research.” When I requested that “evidence-based research”, the department directed me to a briefing paper published in 2017 that outlines the reforms and supports them with anecdotal evidence drawn from 2013 parliamentary debates. Even the Ofqual report, though thorough, is largely anecdotal, rather than comparing historical results data against the changes that produced them.
If anecdotal evidence, provided by experts with years of experience in the field, was the government’s measure of “evidence based research”, it is telling that it ignored very public appeals made at the time by some of the best universities in the country, imploring it not to implement some of the reforms.
In one parliamentary debate the department directed me towards, at least two MPs complained that a government consultation on A-level reform found 77 per cent of respondents were against the proposals. They included the 24 Russell Group universities, college and school lecturers, teachers and head teachers, who warned “about not only the proposals but the time scale”. The Russell Group, which comprises the UK’s leading research universities, argued the reforms would make it “harder to identify bright pupils from working-class homes”, a point repeatedly raised since. “We are worried that, if AS-level disappears, we will lose many of the gains in terms of fair admissions and widening participation that we have made in the last decade,” Geoff Parks, director of admissions at Cambridge University, said at the time.
Sir Leszek Borysiewicz, vice-chancellor of Cambridge University at the time, went so far as to write to the Welsh Education Minister, saying: “Your intention to retain AS examinations at the end of year 12 in Wales will put strong Welsh applicants in a good position. Year 12 exams have been shown to be a good predictor of Cambridge academic success and are taken very seriously by our selectors. We are convinced that a large part of this success derives from the confidence engendered in students from ‘non-traditional’ backgrounds when they achieve high examination grades at the end of year 12.”
Now that AS-levels no longer contribute to final results in England, entries for the exam have fallen by more than 40 per cent this year – a sign of how stark the divide with Wales, which has retained them, could become. AS-levels were introduced to encourage students to experiment with a broader range of subjects; many would then drop the subjects they scored lower in, which contributed to the rise in top grades. None of those benefits now remain for students.
There is no doubt the government based its reforms on a great deal of input from relevant parties. Cambridge Assessment, for example, recommended closer ties between universities and those drawing up exams, and one of the main aims of the resulting reforms was arguably to better prepare students for university-level education. This year, however, two per cent fewer university places have been taken up than in previous years.
It’s impossible to know how these reforms will play out over time. The core idea behind the changes was to give both students and teachers more time to learn and to teach, rather than focus on the business of exam-taking. But the danger of a two-tier system between England and Wales seems all the more pertinent after the published 40 per cent drop in AS-level uptake.
Seema Malhotra, MP for Feltham and Heston, echoed the fears of many working in education in 2013 when she suggested that opinion and preference – the “intellectual arguments” Bristol’s geographers were concerned about – were being favoured over evidence in reform decisions. And without definitive evidence to point to today, it’s hard to contest her point.