In the summer of 2025, students answered a GCSE Maths question the way they had been taught, the way OCR's own mark schemes and examiner reports had always said to. They were marked wrong. Approximately 1,000 of them missed a grade boundary because of it.
When we raised it, OCR said there was no problem. When we showed them the evidence, they dug in. So we took it to Ofqual, the independent regulator for exams. Ofqual investigated and agreed with us. The complaint was upheld in full. And yet, even now, OCR's response has fallen far short of what those students deserve.
I've written this post for every maths teacher, in particular any OCR GCSE Maths teacher or student who sat the exam last summer, and for anyone who cares about how young people are treated by the institutions that are supposed to serve them. You need to know what happened. And I'd like your help in pushing OCR to do what they should have done from the start: tell every school clearly what changed, and remark the question. If you agree, please sign the petition.
Sign the petition

What happened?
In June 2025 OCR set this question in their Foundation and Higher tier papers (Paper 3 and Paper 6):
There is nothing wrong with the question itself; it is not hard. It's Question 6, towards the beginning of the paper, where students expect straightforward material before the difficulty ramps up. Teachers and students had seen similar questions from OCR and other exam boards. If they had learnt the material and revised, they knew how to answer it.
But the problem was the mark scheme. OCR penalised students for using a method their own examiner reports and mark schemes had previously described as correct. Students would have left that question believing they had answered it well, and that, in any previous year, they would have scored full marks.
The inconsistency
To understand what went wrong, let me show you how OCR previously approached these types of questions.
In November 2021, Paper 6, OCR set this question:
You can see the similarity. Students were asked to show that an object (a set of cupboards) "may not" fit into a space. The mark scheme required students to use the upper bound of each cupboard (60.5 cm) and show that 6 × UB = 363 cm is bigger than the lower bound of the wall (362.5 cm).
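The arithmetic the mark scheme asked for fits in a few lines. Here's a minimal sketch using the values from that mark scheme (the variable names are my own):

```python
# Values from the November 2021 mark scheme: each cupboard has an
# upper bound of 60.5 cm, and the wall has a lower bound of 362.5 cm.
cupboard_upper = 60.5
wall_lower = 362.5

total = 6 * cupboard_upper      # 6 x 60.5 = 363.0 cm
may_not_fit = total > wall_lower
print(total, may_not_fit)       # 363.0 True
```

Since 363 cm exceeds 362.5 cm, the cupboards may not fit, which is exactly the comparison the mark scheme rewarded.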
The examiner's report was explicit:
It's worth noting that each of OCR's Examiner's Reports opens with the same statement: "Our examiners' reports are produced to offer constructive feedback on candidates' performance in the examinations. They provide useful guidance for future candidates."
Like most teachers, our department used old mark schemes and examiner reports to prepare our students. For many topics, maths is maths: you teach the standard method and that's that. But some topics require you to be tuned in to what examiners specifically want, otherwise you risk holding students back.
Bounds is exactly that kind of topic, and here's why it's tricky:
- If I say I am "180 cm to the nearest 10 cm," I could be as short as 175 cm. But thinking about the tallest possible height gets tricky: I could be 184.9 cm, or 184.99 cm, or 184.999 cm, and so on.
- Eventually we find ourselves saying I could be 184.999...cm, which is effectively 185 cm.
- So we say "the upper bound is 185 cm."
- But 185 cm doesn't round to 180 cm. It rounds to 190 cm.
- The upper bound isn't a value I could actually be and still round to 180 cm. It's a mathematical convention, not a real possibility.
- But we use it nonetheless, and get around this by calling it the "upper bound."
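To make the convention concrete, here's a small Python sketch using the height example above (the helper name is my own, not OCR's):

```python
def bounds(measurement, precision):
    """Lower and upper bounds of a value rounded to the nearest `precision`.

    The upper bound is a convention: it is not itself a value that rounds
    to `measurement`, but it is the limit of the values that do.
    """
    half = precision / 2
    return measurement - half, measurement + half

# "180 cm to the nearest 10 cm" could be anything from 175 cm
# up to (but not including) 185 cm; the upper bound is 185 cm.
lo, hi = bounds(180, 10)
print(lo, hi)  # 175.0 185.0
```

The half-open interval is the whole subtlety: 185 cm sits on the boundary and would round to 190 cm, yet it is the value we write down as the upper bound.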
It's genuinely confusing, which is exactly why, when it comes to bounds, teachers don't guess. We follow what the examiners say, as per OCR's own advice at the front of their reports.
And in November 2021, OCR's examiners were unambiguous. They said "six cupboards could be 6 × 60.5": use the upper bound, even though it isn't a value that rounds to the original measurement.
So that's what we have taught: use the upper bound when doing objects-fitting-into-a-space problems.
I found this video of a great maths teacher explaining the problem. He uses the same method, and it's worth watching to understand the issue more fully.
Watch: Bounds explained — YouTube

It wasn't just November 2021. In one of OCR's practice papers, the same approach was required. Here's the question:
And here's the mark scheme:
It's not just mark schemes that have expected students to use the upper bound. Some questions simply cannot be answered without it. Here's one:
The only way to show that the smallest possible height of the box is 6 cm is to use the upper bounds of the lengths and widths. There is no other way to get to the answer.
I've included a number of similar questions at the end of this post, and the pattern is consistent: OCR have repeatedly set questions where using the upper bound is not optional but required.
So it is entirely reasonable for students and teachers, preparing for the GCSE using OCR's own materials, to conclude that although the upper bound isn't technically a possible value, you use it, because OCR have told you to.
Back to the question last summer
I hope you can see now that OCR have consistently treated the upper bound as a usable value and have actively wanted students to use it. But in June 2025, without any notice, that changed.
For the fridge question, OCR were no longer willing to award marks for using the upper bound. Instead, they wanted students to use a value just shy of it. Perhaps the thinking was that this represented more "proper" maths, but that argument doesn't hold up. OCR themselves have set questions that make proper maths impossible, where using the upper bound isn't a workaround but the only path to the answer. The cuboid question is a clear example.
And so when teachers in my department sat down with student scripts in September 2025, we were baffled. Work that would have been correct in any previous year simply wasn't anymore, and nobody had told us, or our students, that the rules had changed.
Shifting the blame
After reviewing student scripts in September 2025, we contacted OCR's Subject Advisor to query the June 2025 mark scheme and its inconsistency with previous papers.
OCR replied, and their position was straightforward: there was no inconsistency, just two different questions with different demands. They did concede that the practice paper mark scheme would need updating, and hoped that resolved our concerns.
It didn't. We wrote back setting out the evidence in detail: the 2021 cupboards question, the cuboid question, the practice paper, and five other questions posted at the end of this piece. We asked to escalate to OCR's formal complaints department.
OCR's complaints team responded in October 2025. And this is where things got interesting.
Somewhere between our initial query and this response, OCR had apparently discovered that bounds questions came in two distinct types. Not in the specification (it isn't there). Not in any examiner guidance (it isn't there either). Not in any subject update, training material, or published communication to schools. None of those either. This categorisation had never been mentioned to teachers, never used to explain a mark scheme, and never appeared in print anywhere. And yet here it was, in a letter to a complainant, presented as if it had always existed:
- Type 1: Questions where the final answer is a bound, typically a minimum, and where using the upper bound is acceptable.
- Type 2: Questions where the student must interpret a pair of values and make a decision, and where the upper bound should not be used.
The fridge question was Type 2. The cupboards question was Type 1. Inconsistency resolved, at least as far as OCR were concerned.
Neither we nor Ofqual saw it that way.
Ofqual's verdict
Ofqual issued their final decision on 19 February 2026. They upheld our complaint in full.
Let's be clear about what that means. This is the regulator, the body that exists to hold exam boards to account, telling OCR, in writing, that they got it wrong. Not wrong in some minor, technical, procedural sense. Wrong in ways that go to the heart of what an exam board is supposed to do.
On the Type 1/Type 2 framework, the cornerstone of OCR's entire defence, Ofqual was withering. They found the distinction "arbitrary." They noted it was "not clearly described or supported within the specification or any of the associated guidance materials." And then came the killer line: it was, they said, "difficult to see how centres or candidates could reasonably be expected to understand or apply such a distinction when preparing for the assessment." In other words: OCR invented a framework, told nobody about it, and then marked students down for not following it.
On the June 2025 mark scheme itself, Ofqual found that it "does not align with the approach taken in similar questions in previous years", precisely the inconsistency we had raised from the start, and that OCR had spent months insisting did not exist.
And it didn't stop there. Ofqual identified potential breaches of three of their own Conditions of Recognition:
- Condition G3.2: assessment materials must be clear and unambiguous. OCR's weren't.
- Condition G1.3: awarding organisations must provide clear criteria for differentiating attainment. OCR's inconsistencies meant they hadn't.
- Condition D1.2: qualifications must be fit for purpose, securing validity, reliability, and fairness. The failures identified meant they may not have been.
OCR had responded to Ofqual's provisional findings by simply restating their disagreement. No new evidence. No new argument. Just the same position, repeated. Ofqual noted this pointedly in their final letter, and it made no difference to their decision.
The matter was referred within Ofqual for consideration of whether further regulatory action was required.
OCR had told us, in September 2025, that there was nothing wrong. Five months later, the regulator had found potential breaches of three separate rules, called their central argument arbitrary, and referred the case for possible further action.
What have OCR done since?
You might expect that after Ofqual upheld our complaint, found three potential regulatory breaches, and called OCR's central argument arbitrary, OCR would do the decent thing. You would be wrong.
Their response, dated 5 March 2026, didn't acknowledge the students who had been let down. Gone was the Type 1/Type 2 framework. Ofqual had demolished that. In its place came a new argument: it wasn't 2025 that was wrong. It was 2021. The November 2021 mark scheme, OCR now claimed, should never have presented 363 cm as a possible value; it should have made clear that the upper bound was merely being "condoned" as an approximation, not endorsed as correct. The June 2025 mark scheme was "mathematically correct." No remark required. Case closed.
OCR had updated the November 2021 mark scheme and the practice paper to reflect this revised position. But they were doing so silently: no email to schools, no subject update, no announcement of any kind. When we discovered this, we pushed back hard in a conference call. We couldn't be the only school that knew. If OCR were quietly correcting published materials that every maths teacher in the country had used to prepare their students, those teachers deserved to know. Keeping that information to ourselves would advantage us over every other school, and that wasn't acceptable. OCR's response to that pressure was a brief mention in their April newsletter. After a six-month complaint that went all the way to the regulator, that was their idea of sufficient communication.
And then there is the question of the students. Approximately 1,000 of them sit one mark below the grade boundary they should have crossed. They revised using OCR's own materials. They used a method OCR's own examiner report had held up as a model response. They had no way of knowing the rules had changed, because OCR told nobody.
We have pointed to AQA, who last October issued a remark for A level Physics in comparable circumstances. AQA did the right thing. OCR have not. They have refused to remark the scripts from last summer, and those students are still waiting, and most of them don't even know it.
What makes this harder to accept is that it didn't have to come to this. Had OCR acknowledged the inconsistency last September, they could have acted then. Better still, they should have spotted the problem during marking in July and August, before any results were issued. Instead, it took months of pushing, and ultimately the intervention of the regulator, before they would acknowledge what had gone wrong.
They have one more thing to do before thousands of students sit their GCSEs this summer. Remark the question from last summer, and put it right for the students who were one mark away from the grade they deserved. Bizarrely, there may even be students resitting their GCSE this year who, had they received the mark they were owed, wouldn't need to. OCR must put it right.
If you agree that OCR should remark the question and do right by these students, please sign the petition.
Sign the petition