I’ve been reading the reports of the summer GCSE exams – not the press reports on the issues surrounding GCSE English, but the reports that matter to me, teaching GCSE maths. Thanks to technological innovations, and a certain forward-looking exam board, we get two types. The first is the overall report of the exam itself, with grade boundaries, pass rates, and the chief examiner’s comments on the papers and the individual questions. The second is the local analysis that provides information on the institution, classes, and individuals down to the level of each question. So for example I know that we performed better than the national average, we exceeded the average on almost every topic, and that candidate W got question one wrong. And what strikes me about the reports is not the self-congratulation of being better than average, but the topics on which so many learners nationally perform badly, and the examiners’ comments on these issues. And I find it rather disconcerting.

And equally worrying is the fact that the chief examiner appears to be using ‘cut and paste’ over successive years. The following appeared in 2009 and again in 2011 in identical form: ‘All candidates should have a calculator for the calculator paper. Examiners become very concerned when it is clear that candidates are desperately trying to solve complex problems by hand, suggesting a calculator is not available.’ Do we learn nothing, we teachers?

A few years ago coursework was abolished for GCSE mathematics, with the comment from the examiner that we could ‘now concentrate on teaching’. I was worried then that someone within the examination system was dismissing the benefit of investigative learning. So many teachers see our job as one of teaching, training and drilling, with little focus on learning. And things haven’t improved much – a recent comment on a badly answered question demanded that ‘This needs to be taught more rigorously than before.’ And when a question has been answered well, credit is given to the teacher: ‘Many candidates had clearly been drilled into the correct form of words, and for them this was an easy mark.’

But it is clear some topics have not been sufficiently drilled. In June 2011 ‘The difference between perimeter, area and volume remains a mystery to many candidates.’ And asked to draw a rectangle of a given area in 2012, incorrect answers showed ‘…confusion between area and perimeter. A very small minority drew a triangle instead.’ There remains, then, an issue with simple shapes: ‘Two thirds of candidates could not write down the mathematical name for the quadrilateral.’ It was actually a trapezium, and a drawing of one is on the inside cover of the paper, with the formula for finding its area. Other technical terms confounded the candidates, many of whom could not differentiate between parallel and perpendicular. And three-dimensional shapes fare worse. ‘Not many grasped what the question was asking. It was clear that many candidates struggled to visualise what shape would need to be added to make a cube.’

Shape and space questions may throw up some serious misconceptions, but other topics are equally susceptible to a lack of understanding of the technical terms. In 2012 ‘Many confused median with mean and mode.’ And I rather like the uncertainty about certainty – I’m quite happy to believe the sun will come up tomorrow, but apparently we need to make this more clear to the learners: ‘Candidates should be advised about the practical interpretation of likelihood, e.g. that although nothing can be considered to be truly certain, like the sun rising tomorrow, that for all intents and purposes the probability that the sun will rise tomorrow is as close to certainty (i.e. unity) as makes no difference.’

It seems to me that it is an issue of communication, but I’d welcome discussion. If learners can’t distinguish between words, isn’t it because they have not had sufficient opportunity to use those words in the appropriate context, rather than not having been ‘drilled’? (And it’s so long since I used the word ‘drill’ that I’m not sure how to do it, at least not in a learning environment.) What do I do to be better than average? Sit back and let the learners do the talking – let them make and then name the shapes, let them count the vertices, edges and faces, and report their results. I set investigations involving area and perimeter, which they share, using our special mathematical language. I bring in the learners’ own personal data, which we share and analyse, and which I hope to use as an example when we do it again this year. And we talk about events and likelihood, so that I’m quite confident that my learners appreciate that, statistically speaking, the sun will come up tomorrow.

What I don’t do is use classroom time for repetitive examples from books and worksheets, or indeed anything that asks the learners to work alone and not communicate with one another. And on what did we do worse than average? We dipped below on a handful of questions on manipulating simple algebraic terms. I’ve hopefully addressed this with some kinaesthetic card-matching activities that combine shape and algebra questions – a combination which, incidentally, was a question on one of our summer papers. It proved difficult for almost everyone. ‘The last and most challenging question on the paper had only 4% of correct responses with many not attempting the question at all.’ So I’m learning from my mistakes, as hopefully the learners do, and plan to do better next time – which for many learners is not an opportunity that comes their way.

August 26th, 2013 at 2:01 pm

Colin, I quite agree re using the exam board’s analysis – we do the same.

I think it’s not just the words like perimeter / area etc but also the terms used in questions. I have always liked the document explaining the terms used – Google ‘maths exam terminology what we say and what we mean’ and you should find the copy on emaths. I have used some of these as a matching exercise in class before.

August 26th, 2013 at 4:04 pm

Thanks Colleen – useful stuff on emaths, which I shall be directing my new cohort to next week!