Last week, the Education Policy Institute (EPI) released a report investigating the impact of England’s phonics screening check (PSC). The report interrogated a number of data sources but failed to find “a discernible positive impact of the PSC on the reading levels of primary aged children in England.”
This conclusion seems at odds with what I understand of the PSC. I argued in a previous blog that the PSC has “driven dramatic improvements in pupils’ phonics knowledge”. Published research has linked these improvements to a “steady rise in performance in KS1”. Other research has talked about the importance of the PSC in guiding interventions for struggling readers and in refining classroom practice. The PSC has apparently been so successful that other countries are now adopting it. So, how should we understand these different perspectives?
In this blog, I’m going to argue that the EPI report hasn’t considered the right data, hasn’t formed well-justified hypotheses, and has made strong conclusions that are not supported by the data.
What was the original purpose of the PSC?
The original purpose of the PSC was:
· to confirm that pupils have learned phonic decoding to an appropriate standard
· to identify children who require extra support to reach the required standard
It was also “hoped” that the impact would be felt more widely in (a) encouraging schools to implement a rigorous phonics programme; and (b) increasing the number of pupils reading competently at the end of KS1 and KS2 (p.5).
I recount these original aims and aspirations because that’s how we should be assessing whether the PSC has done its job. Yet, that’s not what the EPI report has done: it considered a wide range of data, including whether some groups of pupils pass the PSC at higher rates than others, KS1 and KS2 reading and writing scores, PIRLS scores, and teacher surveys. The report does not even address whether the PSC has fulfilled its original purpose.
So, has the PSC fulfilled its original purpose?
It is manifestly clear that the PSC has fulfilled its original purpose of assessing pupils’ decoding skills and identifying children in need of extra support. The dark blue line of the figure below shows the percentage of pupils meeting the PSC pass mark at the end of Year 1. Pupils who do not meet the pass mark receive additional support and retake the PSC at the end of Year 2; the light blue line shows scores for those pupils. Overall, the PSC has provided schools with a straightforward means of assessing pupils’ decoding ability in a manner shown to be valid.
Figure 1. Percentage of pupils who passed the PSC at the end of Year 1 (dark blue line) and at the end of Year 2 for pupils who did not pass the PSC when administered in Year 1 (light blue line).
How about the wider “hoped for” impacts?
It also seems evident that the PSC has delivered the first of the “hoped for” impacts: to encourage schools to adopt more rigorous phonics teaching. It’s important to remember that the PSC was introduced several years after phonics had become mandatory in England’s classrooms following the Rose Review (2006). Despite this, only 58% of pupils in the initial cohort of the PSC reached the pass mark. This result suggests that the phonics teaching being provided was not as rigorous as it might have been. The dramatic rise in the percentage of pupils reaching the pass mark over the next four years (from 58% to 81%) suggests that the PSC helped schools to improve their phonics teaching.
I’ll now turn to the second of the “hoped for” impacts: an increase in the number of pupils reading competently by the end of KS1 and KS2. It’s important to think about how this impact might come about. There is no reason to expect that the introduction of the PSC itself should lead to improvements in reading comprehension. Instead, we might anticipate that the PSC would lead to improvements in phonics teaching, and that these improvements would lead to gains in phonics knowledge and, subsequently, reading comprehension. It’s also important to remember that KS1 and KS2 reading comprehension tests reflect many aspects of reading that go beyond phonics knowledge: for example, decoding fluency, vocabulary, syntactic knowledge, genre knowledge, and world knowledge. For both of these reasons, we might anticipate that any rises in KS1 and KS2 reading scores should be gradual, and certainly less dramatic than those observed for PSC performance itself. Instead, the EPI report hypothesizes that there should be a “discontinuity” in scores at the point at which the PSC was introduced. I don’t think this hypothesis makes sense.
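To make this concrete, here is a toy simulation. It is a sketch only: every number in it is invented for illustration (the trend, effect size, and lag are my assumptions, not real PSC, KS1, or KS2 data). The point is that if the PSC works by gradually improving phonics teaching, which then feeds through to comprehension with a lag, the gains ramp up over years rather than jumping, so a test that looks for a sharp discontinuity at the year of introduction can miss a real effect:

```python
# Toy simulation: purely illustrative, with invented numbers (not real PSC,
# KS1, or KS2 data). It shows how a causal effect that feeds through
# gradually can be almost invisible to a test looking for a sharp
# "discontinuity" at the year a policy begins, while still being
# substantial over time.
import numpy as np

years = np.arange(2008, 2020)   # hypothetical assessment years
policy_year = 2012              # hypothetical year the policy begins

# Pre-existing trend: reading scores were already improving slowly.
baseline = 60.0 + 0.5 * (years - years[0])

# Hypothetical policy effect: teaching improves gradually and feeds through
# to comprehension with a lag, so the effect ramps up rather than jumping.
years_since = np.clip(years - policy_year, 0, None)
policy_effect = 2.0 * (1.0 - np.exp(-years_since / 3.0))

scores = baseline + policy_effect

# What a discontinuity test looks for: a departure from the pre-existing
# trend right at introduction. One year in, the ramped effect is tiny ...
year1 = policy_effect[years == policy_year + 1][0]
print(f"Departure from trend one year after introduction: {year1:.2f} points")

# ... but by the end of the series the cumulative effect is real.
print(f"Cumulative effect by {years[-1]}: {policy_effect[-1]:.2f} points")
```

In this toy example, the departure from trend one year after introduction is around half a point, far too small to register as a discontinuity, even though the cumulative effect by the end of the series has grown to roughly three times that size.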
This brings us to another problem: we have no way of knowing whether any improvements in KS1 and KS2 reading outcomes are due to the introduction of the PSC. To draw conclusions of this nature, we would need a comparison group in which the PSC had not been introduced (or had been introduced at a different time). It turns out that there are positive, gradual gains in reading comprehension at both KS1 and KS2 in the years following the introduction of the PSC. However, the EPI report dismisses these because there were also gains prior to the PSC. The EPI report argues that these prior gains simply continued and levelled off (i.e. no “discontinuity”). One might argue that all of these gains were due to the introduction of phonics and the increase in rigour driven by the PSC, or one might argue that they were caused by something else, but the fact is that we don’t know and we can’t find out with the available data.
Final words …
The EPI report is at pains to stress that it is not possible to draw any causal inferences about the effect of the PSC on the distal reading comprehension outcomes that it evaluates. We don’t have any way to compare KS1 and KS2 reading outcomes following the introduction of the PSC to the outcomes that would have arisen had the PSC not been introduced. Yet, the press release issued with the report offers the strong conclusion that there is no “discernible positive impact of the PSC on the reading levels of primary aged children in England”. That’s not a responsible claim given the weaknesses in the research design: if a research design does not allow you to establish a positive impact, then the absence of one shouldn’t be pitched as big news.
My take is that the PSC has fulfilled its original purpose of measuring pupils’ phonics knowledge and identifying those in need of further support. I also think there are good reasons to believe that the PSC has helped schools to strengthen their phonics teaching, and that this has led to rises in pupils’ phonics knowledge. Finally, we have seen incremental rises in KS1 and KS2 reading comprehension outcomes, although the available evidence does not allow us to determine whether these were caused by an enhanced emphasis on phonics delivered via the PSC.
Whether the PSC should continue is a policy decision. I have no problem with the EPI report’s recommendation that it should be reviewed (as for any national assessment). However, the report hasn’t considered the right data to evaluate the PSC, its authors haven’t formed well-justified hypotheses, and their strong conclusions do not follow from their data. Therefore, I seriously doubt the EPI report’s value as a source of evidence in any future review.
Note. I'm grateful to Max Coltheart and to members of the Rastle Lab for their comments on this blog.