This feature of the blog is a continuing exchange of correspondence between Graham Birrell and Sam Freedman. For all other articles in the series, see the Shifted page.
As I suspected, you do see the fairly obvious benefits of publishing and collecting data. Your real concern – shared by plenty of others – is how the data is used by the Government for the purposes of accountability. And I understand why. In recent years some of the metrics used have created clear perverse incentives that have been damaging. The over-valuation of GCSE “equivalents” in performance tables led to a surge in uptake of courses that were less valuable for students. The problems with GCSE English last year exposed more than ever the intense pressure on the “C/D” borderline.
But the right question isn’t “does accountability have any negative effects?” but “do the positive effects of accountability outweigh the negative ones?” I think they do – even given the particular weakness of some of our accountability measures in England. And this is supported by Professor Simon Burgess’s analysis showing that the abolition of performance tables in Wales had a detrimental impact, as well as by Iftikhar Hussein’s study showing the positive effects of receiving a “fail” from Ofsted (whose ratings are heavily data-driven). At a global level the OECD have been clear that they consider autonomous school systems with little or no accountability to be the least successful.
So I would be interested to hear whether you think schools need to be held accountable in any way for their performance – and, if you think they do, how you’d propose to do it without using data. Should a school where, say, 95% of pupils are leaving without a C grade in English and Maths really be allowed to continue indefinitely? Or one where behaviour is completely out of control?
I think, rather than scrapping accountability, it’s more promising to look at how the negative effects of high-stakes testing can best be mitigated. For instance, the Government’s proposed changes to the secondary accountability system will be a big improvement, as they would see schools rewarded for improving outcomes for all students, not just those at risk of a D grade. I would also like to see a wider range of metrics used in judging a school’s overall performance, including the destinations of pupils.
I would also add that it is important to retain flexibility when using data to take sanctions against underperforming schools. There may be good reasons why a school has seen a short-term blip in exam performance, or an alert set of governors may have already taken action to deal with weak leadership.
But if a school has been seriously underperforming for some time in comparison with similar schools then it is surely in the interests of the young people in that school to change its governance and leadership (which is – essentially – what becoming a sponsored academy means)?
I agree with you that Steve Machin’s research on academies is interesting, though it is worth noting that “higher ability” students in the context of these schools would not have been high-performers by most standards. I expect the focus on the C/D borderline led to less focus on the poorest-performing students. But he still shows that the early sponsored academies outperformed other schools, and he cites growing evidence from the US that turnaround models can have significant impact (Stanford CREDO studies of New Orleans, New Jersey and Boston support this).
In any case I certainly don’t see the use of accountability data to regulate schools as “Darwinian”. A free market approach would be to let poor schools go bankrupt as they were abandoned by parents, not intervene to improve them. Actually, as most new sponsored academies are run by other schools, the policy is now really about building collaborative networks. As these networks continue to grow they should allow the expertise of our most successful educationalists to filter into some of our least successful schools. This is, surely, the opposite of Darwinian?