April 16, 2018

Here’s what is going wrong with ‘evidence-based’ policies and practices in schools in Australia

By James Ladwig

An academic’s job is, quite often, to name what others might not see. Scholars of school reform in particular are used to seeing paradoxes and ironies. The contradictions we come across are a source of intellectual intrigue, theoretical development and, at times, humour. But the point of naming them in our work is often a fairly simple attempt to get policy actors and teachers to see what they might not see when they are in the midst of their daily work. After all, one of the advantages of being in ‘the Ivory Tower’ is having the opportunity to see larger, longer-term patterns of human behaviour.

This blog is an attempt to continue this line of endeavour. Here I would like to point out some contradictions in current public rhetoric about the relationship between educational research and schooling – focusing on teaching practices and curriculum for the moment.

The call for ‘evidence-based’ practice in schools

By now we have all seen repeated calls for policy and practice to be ‘evidence-based’. On the one hand, this is common sense – a call to restrain the well-known tendency of educational reforms to fervently push one fad after another, based mostly on beliefs and normative appeals (that is, messages that indicate what one should or should not do in a certain situation). And let’s be honest, these often get tangled in party political debates – between ostensible conservatives and supposed progressives. The reality is that both sides are guilty of pushing reforms with either no serious empirical basis or a half-baked re-interpretation of research – and of claiming authority based on that ‘research’. Of course, not all high quality research is empirical – nor should it all be – but the appeal to evidence as a way of moving beyond stalemate is not without merit. Calling for empirical adjudication or verification does provide a pathway to establish more secure bases for justifying what reforms and practices ought to be implemented.

There are a number of ways in which empirical analysis can already move educational reform further, because we can name very common educational practices for which we have ample evidence that the effects are not what advocates intended. For example, there is ample evidence that NAPLAN has been implemented in a manner that directly contradicts what some of its advocates intended: it has become far more high-stakes than intended and has carried the consequence of narrowing the curriculum, a consequence its early advocates said would not happen. (Never mind that many of us predicted this. That’s another story.) This is an example of where empirical research can serve the vital role of assessing the difference between intended and experienced results.

Good research can turn into zealous advocacy

So on a general level, the case for evidence-based practice has a definite value. But let’s not over-extend this general appeal, because we also have plenty of experience of seeing good research turn into zealous advocacy with dubious intent and consequence. The current over-extensions of the empirical appeal have led paradigmatic warriors to push the authority of their work well beyond its actual capacity to inform educational practice. Here, let me name two forms of this over-extension.

Synthetic reviews

Take the contemporary appeal to summarise studies of specific practices as a means of deciphering which practices offer the most promise. (This is called a ‘synthetic review’; John Hattie’s well-known work would be an example.) There are, of course, many ways to conduct synthetic reviews of previous research – but we all know the statistical appeal of meta-analyses, based on one form or another of aggregating the effect sizes reported in research, has come to dominate the minds of many Australian educators (without a lot of reflection on the strengths and weaknesses of different forms of review).

So if we take the stock-standard effect size compilation exercise as authoritative, let us also note the obvious constraints implied in that exercise. First, to do that work, all the included studies have to have measured an outcome that is seen to be the same outcome. This implies the outcome is a) actually valuable and b) sufficiently well defined to be measured consistently. Since most research that fits this bill has already bought into the ideology behind standardised measures of educational achievement, that is its strongest footing. And it is good for that. These forms of analysis are also often not only about teaching, since the practices summarised frequently involve much more than teaching and include pre-packaged curriculum as well (e.g. direct instruction research assumes a previously set, given curriculum is being implemented).

Now just think about how many times you have seen someone say this or that practice has this or that effect size without also mentioning the very restricted nature of the studied ‘cause’ and measured outcome.

Simply ask ‘effect on what?’ and you have a clear idea of just how limited such meta-analyses actually are.
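
To make the question ‘effect on what?’ concrete, here is a minimal sketch (in Python, with invented study numbers purely for illustration) of the arithmetic behind a stock-standard effect size compilation: a fixed-effect pooling of standardised mean differences. Note what the exercise assumes – every study in the pool is treated as having measured the same outcome – and what the resulting single number cannot tell you, namely whether that outcome was worth measuring in the first place.

```python
# Minimal fixed-effect meta-analysis sketch (illustrative numbers only).
# Each "study" reports means, SDs and group sizes on what is treated as the SAME outcome;
# the pooling is only meaningful if that outcome really is comparable across studies.

import math

studies = [
    # (mean_treatment, sd_treatment, n_treatment, mean_control, sd_control, n_control)
    (52.0, 10.0, 120, 48.0, 11.0, 118),
    (75.0, 14.0,  60, 71.5, 13.0,  62),
    (33.0,  6.0, 200, 32.1,  6.5, 205),
]

def cohens_d(mt, st, nt, mc, sc, nc):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((nt - 1) * st**2 + (nc - 1) * sc**2) / (nt + nc - 2))
    return (mt - mc) / pooled_sd

def d_variance(d, nt, nc):
    """Approximate sampling variance of Cohen's d."""
    return (nt + nc) / (nt * nc) + d**2 / (2 * (nt + nc))

weighted_sum, weight_total = 0.0, 0.0
for mt, st, nt, mc, sc, nc in studies:
    d = cohens_d(mt, st, nt, mc, sc, nc)
    w = 1.0 / d_variance(d, nt, nc)   # inverse-variance weight
    weighted_sum += w * d
    weight_total += w

pooled_d = weighted_sum / weight_total
print(f"Pooled effect size: {pooled_d:.2f}")  # a single number, silent about WHAT was measured
```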

Randomised Control Trials

Also keep in mind what this form of research can actually tell us about new innovations: nothing directly. This last point applies doubly to the now ubiquitous calls for Randomised Control Trials (RCTs). By definition, RCTs cannot tell us what the effect of an innovation will be simply because that innovation has to already be in place to do an RCT at all. And to be firm on the methodology, we don’t need just one RCT per innovation, but several – so that meta-analyses can be conducted based on replication studies.

This isn’t an argument against meta-analyses and RCTs, but an appeal to be sensible about what we think we can learn from such necessary research endeavours.

Both of these forms of analysis are fundamentally committed to rigorously studying single cause-effect relationships of the ‘X leads to Y’ form, since the most rigorous empirical assessment of causality in this tradition rests on isolating the effect of the designed cause – the X of interest – from the effects of everything else. This is how you specify just what needs to be randomised. Although RCTs in education are built from a tradition of educational psychology that sought to examine generalised claims about all of humanity, where randomisation was needed at the individual student level, most reform applications of RCTs will randomise whatever unit of analysis best fits the intended reform. Common contemporary applications will randomise teachers or schools into this or that innovation. The point of that randomisation is to find effects that are independent of the differences between whatever is randomised.
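
As a rough illustration of what randomising at the unit of the reform looks like (rather than randomising individual students), here is a minimal sketch. The school names and the simple 50/50 assignment are hypothetical, not a description of any particular trial.

```python
# Sketch of randomising at the unit the reform targets (here, whole schools),
# rather than at the level of individual students.

import random

# Hypothetical school identifiers -- purely illustrative.
schools = [f"School_{i:02d}" for i in range(1, 21)]

random.seed(42)          # fix the seed so the assignment is reproducible
random.shuffle(schools)  # put the schools in random order
half = len(schools) // 2
treatment, control = schools[:half], schools[half:]

# Every class and student in a 'treatment' school gets the innovation; the comparison
# is made between groups of schools, so it is school-level differences (not differences
# between individual students) that the randomisation is meant to wash out.
print("Treatment schools:", sorted(treatment))
print("Control schools:  ", sorted(control))
```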

Research shows what has happened, not what will happen

The point of replications is to mitigate known human flaws (biases, mistakes, etc.) and to examine the effect of contexts. This is where our language about what research ‘says’ needs to be much more precise than what we typically see in news editorials and on Twitter. For example, when phonics advocates say ‘rigorous empirical research has shown phonics program X leads to effect Y’, don’t forget the background presumptions. What that research may have shown is that when phonics program X was implemented in a systematic study, the outcomes measured were Y. What this means is that the claims which can reasonably be drawn from such research are far more limited than zealous advocates hope. That research studied what happened, not what will happen.

Such research does NOT say anything about whether or not that program, when transplanted into a new context, will have the same effect. You have to be pretty sure the contexts are sufficiently similar to make that presumption. (Personally I am quite sceptical about crossing national boundaries with reforms, especially into Australia.)

Fidelity of implementation studies and instruments

More importantly, such studies cannot say anything about whether or not reform X can actually be implemented with sufficient ‘fidelity’ to expect the intended outcome. This reality is precisely why researchers seeking the ‘gold standard’ of research are now producing voluminous ‘fidelity of implementation’ studies and instruments. The Gates Foundation has funded many of these in the US, and I see would-be publications from them all the time in my editorial role. Essentially, fidelity of implementation measures attempt to estimate the degree to which the new program has been implemented as intended, often by analysing direct evidence of the implementation.
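
To give a sense of what such an instrument typically boils down to, here is a minimal, hypothetical sketch of a fidelity of implementation score: observers rate how far each intended component of a program was actually enacted, and the ratings are combined into a single proportion. The component names and ratings are invented for illustration, not drawn from any real instrument.

```python
# Hypothetical observation ratings on a 0-4 scale; the component names are invented,
# not taken from any real fidelity-of-implementation instrument.
observation = {
    "followed_scripted_lesson_sequence": 4,
    "used_prescribed_materials": 3,
    "kept_to_allocated_time": 2,
    "delivered_all_core_components": 4,
}

MAX_PER_ITEM = 4
fidelity = sum(observation.values()) / (len(observation) * MAX_PER_ITEM)
print(f"Fidelity of implementation: {fidelity:.0%}")  # proportion of 'as intended' actually observed
```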

Each time I see one of these studies, it raises the question: ‘If the intent of the reform is to produce the qualities identified in the fidelity of implementation instruments, doesn’t the need for a fidelity of implementation instrument suggest the reform isn’t readily implemented?’ And why not use the fidelity of implementation instrument itself, if that’s what you really think has the effect? For a nice critique and re-framing of this issue see Tony Bryk’s Fidelity of Implementation: Is It the Right Concept?

The reality of ‘evidence-based’ policy

This is where the overall structure of the current push for evidence-based practices becomes most obvious. The fundamental paradox of current educational policy is that most of it is intended to centrally pre-determine what practices occur in local sites, what teachers do (and don’t do) – and yet the policy claims this will lead to the most advanced, innovative curriculum and teaching. It won’t. It can’t.

What it can do is provide a solid basis of knowledge for teachers to know and use in their own professional judgements about what is the best thing to do with their students on any given day. It might help convince schools and teachers to give up on historical practices and debates we are pretty confident won’t work. But what will work depends entirely on the innovation, professional judgement and, as Paul Brock once put it, nous of all educators.


James Ladwig is Associate Professor in the School of Education at the University of Newcastle and co-editor of the American Educational Research Journal. He is internationally recognised for his expertise in educational research and school reform.

Find James’ latest work in Limits to Evidence-Based Learning of Educational Science, in Hall, Quinn and Gollnick (Eds), The Wiley Handbook of Teaching and Learning, published by Wiley-Blackwell, New York (in press).

James is on Twitter @jgladwig

Republish this article for free, online or in print, under Creative Commons licence.

14 thoughts on “Here’s what is going wrong with ‘evidence-based’ policies and practices in schools in Australia”

  1. Nick Kelly says:

    This is an excellent piece, many thanks for writing it. You’ve drawn attention to many of the contradictions that we live with. I particularly like the place that you finish up and I hope that many people read these lines:

    “The fundamental paradox of current educational policy is that most of it is intended to centrally pre-determine what practices occur in local sites, what teachers do (and don’t do) – and yet the policy claims this will lead to the most advanced, innovative curriculum and teaching. It won’t. It can’t.”

  2. Ania Lian says:

    Dear Nick
    So if we cannot control the “context” (mind you, the linguists think they can do that) what can we control, and therefore what should research do, at least in terms of pedagogy? We need to push this debate into the next step so that we can contribute to change.
    best wishes
    ania lian
    CDU

  3. Nick Kelly says:

    Hi Ania, a difficult question, and I won’t pretend that there is a simple answer other than saying that there is an epic amount that research can do here (a good answer won’t fit in these comments!). My reading of the article is not as a suggestion that we should abandon empirical research into, say, effect sizes of certain policies and pedagogical approaches… but rather that we should recognise the limitations of this type of research when wielding it in public debate and in formulating policy. Also the point that some people (implicitly) conceive of research and policy as the creation of “immutable mobiles” – ideas/approaches that don’t get changed by context, that scale, that are efficient, etc. – and there are other types of research and policy that work with difference and contextuality (good work by both policy makers and researchers) that should also be recognised and encouraged and that often get ignored under the banner of “evidence-based”. I can’t provide a succinct response but it’s a good conversation to be having.

  4. Cate Doherty says:

    Very interesting article. Research is important but equally important is a teacher’s intimate knowledge of his/her students’ educational needs and the context in which those students exist. Good teachers use research to make professional judgements about which practice or strategy would work best for individual students in a specific context. Good teachers should always reflect on their own practice and rigorously question research before taking up a new practice.

  5. Ania Lian says:

    Dear Cate
    What does “intimate knowledge” mean? Like framing students according to what I recall from my 4-year degree? I am wary of calls for “intimate” knowledge and “careful” designs based on “professional” judgments – in my view these are exactly the terms that need attention in research. I watch my PhD students who are lost with all this and I can feel for a teacher who does not have the same resources to deal with much of this. I don’t think that foggy terms will give us peace of mind.

    very best wishes
    ania lian
    CDU

  6. Ania Lian says:

    Thanks James
    You address a lot of important things: I do not really want to get entangled in the detail of the arguments, but the way I solve things for myself is by asking first: “What is it that Education wants to know?” The synthetic idea assumes that we are all on the same page and all we need is the technical detail. Maybe because I have worked in a different paradigm, the words of Gary Thomas truly speak to me: “education academy preoccupied with theory becomes introverted and unadventurous … it becomes a profession obsessed with what-is and what has-been” (2007, p. 92). I wonder how many of us feel the same way: “Thomas links his arguments to Grayling’s (cited in Thomas, 2007, p. viii) critique of contemporary academic life that Grayling argues to have become industrialised, with an increasing tendency to result in work that is ‘scholarly and uninspired, ambitious and timid, scrupulous and dim-sighted’” (Lian & Pertiwi, 2017). A question I take home from this debate: What is innovation? A new way to do the same?

    thank you again for a great posting
    ania

  7. David Zyngier says:

    Thanks James for this. Just hope it will be read by policy makers and shakers! Especially your conclusion:
    “The fundamental paradox of current educational policy is that most of it is intended to centrally pre-determine what practices occur in local sites, what teachers do (and don’t do) – and yet the policy claims this will lead to the most advanced, innovative curriculum and teaching. It won’t. It can’t.”

  8. James Ladwig says:

    Thanks for the comments and ideas all,

    There is much more to be said for sure, and I’ll try to flesh out some of this more soon. To me there is no doubt that much of what we are seeing today is a direct consequence of how governments use and deploy research, how educational professional development and curriculum have become marketised and how universities are now governed — among several other factors.

    But we also know a fair bit about how to design different systems in Australia itself. You’ll find inklings of that in historical reminders of reforms past and some bird’s eye reflections. Note that Australia’s international standing was higher in the late 80s and early 90s, when it was still possible to find systems which were not part of the current structures (the impacts of community-based curriculum developments in Vic, SA and Tas were evident in my first reform evaluation study back in ’93, of the then National Schools Project).

    The first step for now, I think, is to just face a simple reality — whatever central offices and boffins say, teachers in schools will always adopt what is to hand to get the job done — as best they can (for the most part). It’s what we all do – for the most part.

    Keep an eye on the paradoxes, e.g. from above: we know foggy terms can be used to deflect or hide things. But we also know that attempts to pin down meaning and discipline everyone into the same thinking produce compliance. (Like Ania, I get a bit on edge when people start talking ‘intimate’ knowledge of my kids… I’d wanna know what that meant.)

    And I am sure teachers critically reflect on what they know of research — but I also teach aspirant principals about using data and research, and work with teachers in schools often — so I know we would be a bit silly if we thought teachers generally knew enough about research to stay on top of it all, to a level they’d need to be expert. (There is a reason we need academics — and a reason we need Universities to be truly independent.)

    Beyond that, just consider this thought: systems produce what they are designed to produce.

    Looking forward to ongoing discussions!

  9. Lucy Stinson says:

    I found this article insightful. As teachers in the classroom we often feel overwhelmed by the ‘evidence’ and meta-analysis that dictate pedagogical practices. I agree the contradictions are evident. If innovation means ‘first different, then better – that is, innovating is a fundamentally different way of doing things that results in considerably better, and perhaps different, outcomes’, then the current outcomes of narrowing the curriculum and time spent in test preparation ignore context and the facilitation of creative learning – or have I missed the point? It all becomes so academic that it ceases to relate to those of us teaching in the classroom.

  10. shani gill says:

    I work in a primary school – the message we are getting from those above us in educational settings is that we have to be using ‘research and evidence based programs.’ I’m making the point that it should be strategies, methods or programs. What’s happening in many schools around me is that people are buying commercial programs at a great rate. Our local leaders are rolling out PD days on some of these commercial programs – they are seen to be better than what we have been/are doing because they are ‘research and evidence based.’ There’s a tsunami of programs being pushed into our schools with the implication that these are better than current classroom (or intervention) practice. It’s hard work out here on the ground!! And very confusing… To be honest, I’m so over hearing people quoting ‘…oh but it’s research and evidence based.’ I also ask the question, as a teacher, what am I doing that is NOT research and evidence based??

  11. Jane Hunter says:

    Thank you for writing this timely piece James – it touches on so many issues in regard to ‘what counts as evidence’ and ‘research informed practice’. Shani has ‘hit the nail on the head’ when she details ‘the tsunami of programs being pushed into our schools… it’s hard work … very confusing’. Many teachers and principals, in my observation from research conducted in schools – especially over the past two years – find it incredibly difficult to know exactly what to focus on… the pace is horrendous, there is little time for reflection, and professional judgement is not valued. For the first time in my 30-year career in school education, across many, many education contexts, I feel very pessimistic. Not good.

  12. James Ladwig says:

    Lucy, shani, I think I understand teachers’ concerns and perspectives on this — to the extent I can, I think you are spot on, Lucy — in that there are fundamentally different ways to run curriculum systems, and some of those have existed here in Australia in the past (pre-91). One of the things I hope to do before I retire (I have a decade or so) is to help develop work that builds curriculum and pedagogy differently — and show the effects / demonstrate that it can be done and done well, for all kids.

    Part of that will come as people learn more about what has already happened (keep an eye on a PhD coming out of VU on the history of the New Basics — that one isn’t well known enough, and it has been shown to work well). Part of the fight will come from people like me reminding Australia that it had higher international standing AND more equitable outcomes before the current ‘paradigm’ was rolled out in Australia, starting with what has become known as the neo-liberal policies driving curriculum (I’ve got issues with that labelling but that’s what others call it).

    But most importantly, it will come from a significant push back — and this will go to Jane’s pessimism. (We ain’t done yet, Jane — we have work to do).

    Push back on the vacuous PD and the marketisation of shallow trivial crap — frankly. Push back on the credentialing being driven by bureaucratic logic. Push back when you think the research isn’t helpful. Push back on measuring EVERYTHING. Don’t get me wrong, I will defend solid research based on solid measurement — but I also know the limits of what that can and should do.

    I should point out, the meta-analyses are not always wrong. They are at their best in showing what actually doesn’t have positive outcomes even though everyone thinks this or that practice is ok. And this will go to shani’s question re what isn’t research based.

    Let’s be clear: there ARE some common practices that are NOT good — and we know that. If you are in a school that groups by ‘ability’ AND that grouping becomes anything more than very temporary, then the effects are overall negative, not positive. The meta-analyses from Hattie are ok on this — but also nowhere near as complete as the full literature in that area. And yet — the now national curriculum is structured in a way that makes this form of bureaucratic stratification of kids almost inevitable.

    That said – virtually all (there are a couple of exceptions) of that research hasn’t been done in Australia — so there is the long shot that effects in Australia differ from the rest of the planet. Surely the details of the practice will be somewhat unique. Just talk to principals — most know the research, but feel stuck between what they know and what the community expects. And this is where many of our academic colleagues are not being responsible, frankly. And some are operating under pure ideological blinders — but that’s something people like me need to address. Teachers will be stuck in the cross-fire for a while on many of these issues.

    In the meantime — it is teachers looking after our kids for many hours of the day — and we need to support them to do what we actually do know is good for kids.

    Want a good comparison to bring it into relief? Compare how school policy is structured to what you can see on the ABC show ‘Employable Me’. On the show, you see ‘experts’ doing their best to find out what kids are good at, interested in, capable of doing productively that is unique and valued.

    That, to me, is how you educate kids. But is that how we have organised schools, curriculum, assessment?

    The last time I saw a systematic effort by a school to give kids the opportunity to have a go at something they’d never done before, just to have a go — and show how it went in front of their peers — who were all incredibly supportive and positive (because they were all in the same boat), I saw young people finding voice, new talents, skills, and happiness — fulfilment, purpose. It brought me to tears of joy. Literally.

    I’ll write that story fully someday — but I have seen glimmers of it in many schools. NEVER have I seen it with the conventional curriculum / policy.

    So yes, it is time to push back. Jane — we need ya. Anyone who understands this story, we need ya.

  13. Freda Flanberg says:

    James, you might be interested in a different spin on the problems with meta-analysis and meta-meta-analysis. The argument is that ‘effect size’ in these enterprises is taken to be a measure of the effectiveness of the intervention, but that this is just not the case: it is a category error. There is a podcast about it based on a recent paper by Adrian Simpson from Durham University (UK).

