Evidence-informed practice needs practice-based enquiry

Ben Goldacre’s recent call for more randomised controlled trials (RCTs) in education has renewed interest in evidence-based or evidence-informed practice. Large-scale syntheses of existing studies, such as John Hattie’s, have also become popular reading. While such evidence is thought to tell us ‘what works best’, it does not always reveal why. Nor is it yet clear how this research can inform teachers’ practice in useful, empowering ways. In this article Phil Taylor, Senior Lecturer and Course Director for the Masters in Teaching and Learning at Birmingham City University, suggests that practice-based enquiry offers a solution.

[Image: a sign indicating that research and practice need to move both ways]

The case for randomised controlled trials (RCTs) in education is that they provide the greatest rigour in establishing ‘what works best’, so that teachers can adopt similar practices (Goldacre, 2013). This demands experimental designs incorporating sufficiently large samples, randomly chosen and assigned to groups, and the use of statistics to focus on the average effect across each sample as a whole rather than on its individual members (Schneider et al, 2007; Hutchison & Styles, 2010; Haynes et al, 2012). Questions have been raised about the practicality, desirability and ethics of controlled experiments in education (for example: Morrison, 2001; Biesta, 2009), some of which are addressed by Goldacre. These are important questions, but they are not the main focus of this article, which is concerned with the usefulness of research for teachers’ work. Goldacre (2013, p.7) stresses that his proposals are not ‘about telling teachers what to do’ but ‘about empowering teachers’. So how can evidence from RCTs and meta-studies (studies of studies) inform teachers’ practice in useful, empowering ways?

Findings from large-scale published research are often expressed quantitatively, lending them authority. For example, Hattie’s (2008) synthesis of over 800 meta-studies quantifies a wide range of influences on pupil achievement as average effect sizes across the available evidence. Most teacher effects fall in the region of 0 to 0.4, leading Hattie to conclude that we should be implementing approaches that, on average, exceed 0.4 and so bring about more ‘visible learning’. We should not be content with ‘what works’, as that is ‘almost everything’ (very few approaches have a negative effect); instead we should be concerned with ‘what works best’ (Hattie, 2008, p.18). A similar synthesis of a smaller number of influences on attainment is provided by the Sutton Trust-EEF Toolkit, intended to guide the spending of Pupil Premium funding. Impact is estimated in additional months of progress, linked to effect sizes, though it is acknowledged that these gains may or may not be realised when teachers apply the toolkit. The Sutton Trust-EEF also provides The DIY Evaluation Guide, encouraging a mini-RCT approach in classrooms, though conforming to appropriate experimental designs may not be straightforward. We begin to see that ‘what works’ is principally concerned with average effects on pupil achievement, privileging measures of attainment from tests. But how useful is an array of effect sizes in informing classroom practice? Let’s look at an example.
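To make the effect-size idea concrete, here is a minimal sketch (in Python) of one common calculation behind such figures: the standardised mean difference, or Cohen’s d, between an intervention group and a control group. The scores, group sizes and function name are hypothetical, for illustration only.

    import statistics

    def cohens_d(treatment, control):
        """Standardised mean difference (Cohen's d) between two groups."""
        n1, n2 = len(treatment), len(control)
        s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
        # Pool the two sample standard deviations
        pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
        return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

    # Hypothetical test scores for an intervention group and a control group
    intervention = [62, 68, 71, 74, 66, 70, 73, 69]
    control = [60, 64, 63, 67, 61, 65, 66, 62]
    print(f"effect size d = {cohens_d(intervention, control):.2f}")

An effect size of 0.4 means that the average pupil in the intervention group scored 0.4 of a standard deviation above the average control pupil; it says nothing about which pupils gained, or why.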

Inductive teaching seeks to develop understanding of a topic by using specific examples to move towards general principles (the opposite of deduction). A quick reading of Hattie’s work might discourage this approach, as inductive teaching reaches an average effect size of only 0.33, below the 0.4 threshold (Hattie, 2008, p.208). This is based on two meta-studies: one from 1983, in a science context, with an effect size of 0.06; the other from 2008, across ‘all subjects’ (unspecified), reaching 0.59. To get the overall effect size of 0.33 Hattie simply averages these two figures. We are told little else about these studies to help us understand why one, 25 years later, is so much more positive than the other. Hattie later accords inductive teaching an effect size of only 0.06 (omitting the more recent study), bolstering his criticism of ‘teacher as facilitator’ and his preference for approaches associated with ‘teacher as activator’ (Hattie, 2008, p.243). This may be a genuine mistake, but it is nevertheless misleading. What can we conclude about inductive teaching? The later study might encourage teachers to try it, but (mistakes aside) reducing the average learning effect on several thousand pupils to a single number offers little practical help. We need to know more about the qualitative circumstances that led to success or otherwise in inductive teaching for different pupils. Hattie acknowledges that his work is focused on effects, not ‘details and nuances’ (Hattie, 2008, p.viii). Goldacre (2013) also recognises that RCTs might tell us what works but not necessarily why, pointing to the need for other forms of research too.
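The arithmetic is worth seeing laid out, because the headline number hides as much as it reveals. In this minimal sketch (the two effect sizes are Hattie’s; everything else is illustrative) the spread between the studies turns out to be larger than their average:

    # The two meta-study effect sizes Hattie (2008) cites for inductive teaching
    effect_sizes = [0.06, 0.59]

    average = sum(effect_sizes) / len(effect_sizes)  # (0.06 + 0.59) / 2
    spread = max(effect_sizes) - min(effect_sizes)

    # The average (about 0.33) is the headline figure; the spread between
    # the two studies (0.53) is wider than the average itself.
    print(f"average: {average:.2f}, spread: {spread:.2f}")

A single averaged figure, in other words, can conceal studies that flatly disagree.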

If the detail of different teaching approaches for individual pupils, as opposed to average effects across samples, is the sort of evidence most useful to teachers, then who better to gather it than teachers themselves? Teachers are not detached observers but insider participants who know their pupils and contexts well. Through case studies, action research and lesson study they can explore, reflect and refine: a form of research known as practice-based enquiry. The process involves identifying a development focus, gathering relevant evidence (including published research), taking action for change and reflecting critically on its impact. If the focus of enquiry concerns pupil attainment in tests then quantitative measures may be appropriate, though simple comparisons often suffice (Gorard et al, 2002). However, if nuances of practice, individual responses and personal perceptions are more relevant then qualitative evidence is needed. This might be gathered, for example, via interviews, diaries, work scrutiny, pupil trails and observation. It may be represented and analysed as quantities, but it is built on qualities.
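As an illustration of what a ‘simple comparison’ might look like in practice-based enquiry, the sketch below sets per-pupil change alongside the class average after trying a new approach. The pupils and marks are hypothetical:

    # Hypothetical marks before and after trying a new teaching approach
    before = {"Aisha": 55, "Ben": 62, "Chloe": 48, "Dev": 70}
    after = {"Aisha": 63, "Ben": 61, "Chloe": 59, "Dev": 72}

    # Per-pupil change keeps the individual detail visible...
    for pupil in before:
        change = after[pupil] - before[pupil]
        print(f"{pupil}: {before[pupil]} -> {after[pupil]} ({change:+d})")

    # ...while the class mean provides the summary figure
    mean_change = sum(after[p] - before[p] for p in before) / len(before)
    print(f"class mean change: {mean_change:+.1f}")

Even this trivial example shows what an average conceals: one pupil’s mark falls while another’s rises sharply, prompting exactly the ‘how and why’ questions that practice-based enquiry pursues.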

Some raise concerns about this type of teacher research, suggesting it lacks the objectivity and rigour required to generalise, and so fails to contribute to a wider body of knowledge about teaching. This misses the point and purpose of practice-based enquiry, which does not set out to generalise beyond the specific context where it is most useful, although others may see relevance to their own situations. Appropriate rigour comes from critical awareness of the strengths and limitations of enquiry methods and findings, rooted in practice experience. This imposes on teachers neither a research formula nor the full methodological demands placed on professional researchers. Crucially, practice-based enquiry is a form of situated learning that draws upon and re-contextualises published research findings, including the outputs of RCTs or meta-studies where relevant. Small-scale practitioner enquiry therefore complements wider educational research, making it usable. It can also feed larger studies by providing researchers with agendas of concern to practitioners, as well as preliminary evidence. Research becomes ‘the servant of professional judgement, not its master’ (Pring, 2000, p.141).

Successive surveys have shown that teachers’ use of published research, and their own research activity, is only low to moderate, reflecting a well-documented gap between educational research and school and classroom practice (Opfer et al, 2008; OECD, 2009; Poet et al, 2010). The main reasons offered are that research studies are written primarily for an academic audience and lack relevance and accessibility for teachers. Commonly proposed solutions are closer alignment of research with practice needs, a greater role for teachers in research design, and intermediary support in its application. However, the surveys also reveal that informal experimentation, adapting and refining practice, is common among teachers as they strive with their pupils to provide the most effective learning experiences. Formalising this process through practice-based enquiry is a natural step, bridging the research-practice gap through the application of published findings in schools and classrooms (summarised in the diagram below).

[Diagram: bridging the research-practice gap]

Through practice-based enquiry, generalised knowledge and theories of teaching become re-contextualised in situated knowledge generation and communities of learning. This accepts that ‘what works best’ will not be exactly the same in every situation, and that we should also ask how and why. Teachers need to be able to point to their own evidence when justifying practice, not just average effects from published studies. The implications for initial and continuing teacher education ought to be clear. If we want to promote an evidence-informed teaching profession, then teachers need to be equipped and supported to play a part in defining, generating and using that evidence. Practice-based enquiry achieves this by recognising the importance of context and locally defined development priorities. It has the potential to support ‘two-way traffic’ between research and practice and to be genuinely empowering for teachers.

2 Responses to Evidence-informed practice needs practice-based enquiry

  1. Vanessa Young, Monday, 9 September 2013 at 16:04

    This is such a welcome critique, especially in the light of the Education Endowment Foundation, which is giving enormous amounts of money to projects that seem to employ randomised controlled trials exclusively.
