
Testing Times: What Can We Learn About Data from the UK’s Great Exam Omnishambles


In the UK, lockdown prevented end-of-year exams – and so the governments across the UK decided to award grades based on an algorithm. The result? To cut a long story short: computer said no.

The extent of the issue first became apparent when the Scottish Qualifications Authority (SQA) released their results. The ‘normalisation’ algorithm took into account factors like teachers’ predictions, mock exam results and schools’ previous performances. It downgraded the predicted results of 124,000 students – and the impact fell hardest on students in more deprived areas, particularly hardworking and talented outliers who had been predicted to excel despite difficult circumstances. When A-level results came out across the rest of the UK a week or so later, the same pattern emerged, and the media was flooded with tales of talented youngsters losing prestigious scholarships, industry training places and university places. Both the Scottish and UK governments have since agreed to walk back the algorithmically decided grades and award the grades estimated by teachers. As a result, universities are oversubscribed, and youngsters and their parents are still trying to figure out the next steps.
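To see how this kind of bias arises, here is a deliberately simplified toy sketch – not Ofqual’s or the SQA’s actual model – of the core mechanism: if an algorithm forces a cohort’s grades to match the school’s historical distribution, a predicted high-achiever at a historically low-performing school gets downgraded regardless of their individual record.

```python
# Toy illustration (NOT the real moderation model): reassigning grades so a
# cohort matches its school's historical distribution penalises outliers.

def moderate_grades(teacher_predictions, historical_distribution):
    """Reassign grades so the cohort matches the school's past distribution.

    teacher_predictions: list of (student, predicted_grade) tuples
    historical_distribution: grades the school 'usually' produces, best first
    """
    grade_order = ["A*", "A", "B", "C", "D", "E", "U"]
    # Rank students by teacher-predicted grade, best first
    ranked = sorted(teacher_predictions, key=lambda p: grade_order.index(p[1]))
    # Hand out the historical grades in rank order, ignoring the predictions
    return {student: hist
            for (student, _), hist in zip(ranked, historical_distribution)}

predictions = [("outlier", "A"), ("student_2", "C"), ("student_3", "C")]
history = ["B", "C", "D"]  # this school has never produced an A
print(moderate_grades(predictions, history))
# The predicted-A 'outlier' is capped at B by the school's history
```

The individual data point – the student predicted an A – is overruled by an aggregate prior about the school, which is exactly the pattern reported in the results.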

It’s a sorry state of affairs – but an instructive one too. It shows how data, poorly used, can entrench and reinforce systemic bias. It shows how ‘the power of prediction’ can poorly serve exceptional outliers. And how blind faith in ‘an algorithm’ reveals a lack of sophistication. Most pointedly, it reminds us that behind every data point is a human being, struggling through a crisis as well as they can.

Nic Pietersma, Director of Analytics at Ebiquity, said:

“Algorithms are getting a lot of bad PR at the moment, but an algorithm is just a set of instructions or a mathematical routine that needs to be followed. Algorithms aren’t intrinsically good or bad – they should be judged by their usefulness.

In this case, Ofqual seems to have misjudged the legal and political ramifications of downgrading results to the extent that it did. Accepting teacher assessments may have been the lesser of two evils, but it would no doubt also have repercussions elsewhere in the university selection process.

In programmatic marketing we often trust algorithms too much, without anyone in the room having a full end-to-end understanding of what they do with our investment. Our advice to clients is to have some form of validation to regularly ‘kick the tyres’ on the algorithm – we recommend transparent test and control methods.”
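The test-and-control idea above can be sketched in a few lines. This is a hedged, minimal example with hypothetical names and data, not Ebiquity’s actual methodology: randomly hold out a control group that gets the default strategy, expose the test group to the algorithm, and measure the difference yourself rather than trusting the algorithm’s own reporting.

```python
# Minimal test-and-control validation sketch (hypothetical names and data).
import random

def assign_groups(user_ids, test_share=0.5, seed=42):
    """Randomly split users into 'test' (algorithm) and 'control' (default)."""
    rng = random.Random(seed)
    return {uid: ("test" if rng.random() < test_share else "control")
            for uid in user_ids}

def measured_lift(outcomes, groups):
    """Difference in mean outcome (e.g. conversions) between test and control."""
    def mean_for(label):
        vals = [outcomes[u] for u, g in groups.items() if g == label]
        return sum(vals) / len(vals)
    return mean_for("test") - mean_for("control")

# Independently measured lift, computed from our own outcome data
groups = {"a": "test", "b": "test", "c": "control", "d": "control"}
outcomes = {"a": 1.0, "b": 1.0, "c": 0.0, "d": 0.0}
print(measured_lift(outcomes, groups))
```

The key design choice is that the split is randomised and the outcome measurement sits outside the algorithm being evaluated – that is what makes the comparison a genuine ‘kick of the tyres’.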

 

To read the article in full on LBBonline, click here. 

First featured 20/08/2020.
