Escape Oppression Now: Disrupt the Dominance of Evidence-Based Practice

Evidence-Based Practice (EBP) dominates every Australian education system, facilitated through government and non-government organisations including NSW’s Centre for Education Statistics and Evaluation (CESE), the national and independent Evidence for Learning, and the all-encompassing Australian Education Research Organisation (AERO). In other human fields, EBP has been questioned, challenged, and modified or even replaced, while Australia’s education systems continue to promote a narrow base of evidence as ‘what works’ for student achievement.

At the recent AARE/ATEA/ACDE event “What counts as evidence in teacher education research and policy?”, one point for action raised was the need to push back on its dominance.

This disruption is the focus of a project I have been developing with colleagues at the University of Sydney.

What’s the problem?

More than twenty years of critique of EBP exists in academia, alerting us to problems for the teaching profession, initial teacher education (ITE), student learning, wellbeing and life outcomes, democracy, and more. Central to the problems of EBP is the removal of discussion of the purpose of education and, in turn, the limiting of education to learning.

Perhaps the most pernicious problem is the simplification of a practice that is immensely complex

Teaching is a non-causal practice, but EBP relies on causal research, with randomised controlled trials as the gold standard. That neglects the breadth of research that supports understanding of, and engagement with, the complexity of teaching. The range of teaching approaches becomes limited to the options offered by causal research.

Children and young people who do not comply, directly or indirectly, by responding in the pre-determined manner are problematised and excluded, rather than the full breadth of evidence from practice being used to problem-solve and design action that supports stronger relationships between teaching and learning.

One of the most inequitable education systems in the world

Teachers’ work is simplified, supporting arguments that anyone can teach. ITE is denigrated for developing pre-service teachers’ (PSTs’) ability to engage with the complexity of teaching. It also supports a return to the reproductive model of teacher training, absent of critical thinking, reflection, and engagement with theory and research. Ultimately, the status quo is maintained, along with Australia’s ranking as one of the most inequitable education systems in the world.

EBP limits teaching approaches by sacrificing teacher autonomy for claims of causality. The prioritised practice is a conceptualisation of explicit teaching positioned in the camp of direct instruction (see the CESE definition of explicit teaching and representations of explicit teaching by the NSW DoE in the Sydney Morning Herald, compared to that articulated clearly in the Ambassador Schools Project).

Positioning explicit teaching (i.e. direct instruction in this case) as opposed and superior to inquiry-based teaching creates a false binary. This binary is constructed through misunderstanding and misrepresentation of inquiry-based teaching. It neglects the essential inclusion of explicit teaching within inquiry-based teaching, along with the range of approaches necessary to build relationship between teaching and learning with the diversity of students.

Other professions have questioned, challenged, even moved on from EBP. Social work has recognised the damage of EBP as ‘evidence-based oppression’ through its neglect of structural issues in society in favour of a neoliberal focus on individuals and individual responsibility.

What is needed in education to push back on the dominance of evidence-based practice?

Broad understanding of the problem is needed beyond academic discourse. We have over twenty years of academic critique of EBP. Yet it rarely reaches professional media for teachers, school leaders and other education stakeholders to access. It rarely reaches mainstream media for parents/carers and the broader general public.

False claims need to be highlighted. Amongst the many falsehoods espoused in the construction of EBP’s dominance is the absence of evidence for EBP’s own claims.

In the subordination of teachers, research is pre-digested into easy-to-read summaries for teachers to know the practices being prescribed are ‘evidence-based’. Such pre-digestion of research is selective presentation of evidence to promote desired practices. It further removes teachers from engagement with research evidence. 

AERO’s latest guide Assessing whether evidence is relevant to your context – For educators, teachers and leaders directs teachers to AERO’s own materials. One document referred to is Formative assessment: Know where your students are in their learning, which simplifies the research on formative assessment to consideration of just six papers summarised in two pages. It neglects key aspects and oversimplifies, leading to errors. Further examples of ‘evidence’ disseminated to teachers in pre-digested formats include CESE’s Cognitive load theory: Research that teachers really need to understand, which is based on a paper widely critiqued as a strawman fallacy. CESE’s paper is then relied upon by AERO in its presentation of evidence for cognitive load theory to teachers.

AERO leads to more AERO

Teaching must be valued for the complex, ‘problematic’ practice that it is. Wider understanding is needed of how teachers have used, and do use, evidence to build relationship between teaching and learning, to support other teachers and school leaders, along with teacher educators and PSTs. Teachers have been making evidence-informed decisions for action long before the emergence of EBP. Matthew Clarke reminded us at the AARE/ATEA/ACDE event that evidence is not proof, and that evidence cannot speak for itself; rather, evidence must be interpreted.

We need to reclaim and clarify

Teachers are surrounded by evidence and analyse evidence to inform teaching for student learning. Recognising that teaching is non-causal requires teachers to draw together a range of evidence to help them build relationship between teaching and student learning. EBP dominance is hindering teachers’ opportunities to utilise the full range of evidence necessary to teach children and young people.

We need to reclaim and clarify the language of Evidence Informed Practice (EIP), drawing on the work of Helen Timperley, who presented EIP as involving integrated analysis of evidence from research, evidence from teaching, and evidence from students to make decisions for further action within an ongoing cycle of practice, whereby further evidence is collected through action. EIP involves practitioners in the collection and analysis of evidence to make decisions for action with broader consideration of the purposes of education. It utilises evidence in consideration of the context and the possibilities from other contexts. EIP recognises a broad range of evidence, including a much broader value for the diversity of research than EBP’s reliance on causal research. EIP is research in itself and, when formalised and shared, enables practice to feed back into research and policy development.

So, what are we doing?

First is a forthcoming paper tracking how education has come to be in this position of EBP dominance, drawing together the breadth of academic critique.

Next is a multi-stakeholder workshop that will happen later this year, leading to the development of a green paper for public consultation to inform the development of a white paper to give school leaders, policymakers, and others a basis on which to push back on the dominance of EBP and strength to develop their EIP.

From there will be a program of research. Pivotal will be case studies of EIP in action in schools to share insight into the complexities of practice, the scope for how teachers engage in EIP, and the wide-ranging benefits for children, young people, teachers, and society. The case studies will provide further basis for teachers and schools to push back on the dominance of EBP and guidance in using EIP. From there we will work with schools to support practitioner inquiry to develop EIP. Threaded through this program of research will be ongoing exploration of work with PSTs, positioning them as agents for change in the transition from EBP to EIP through the development of reciprocal learning during professional experience and into their early career teaching.


Nicole Brunker is a senior lecturer in the School of Education and Social Work, The University of Sydney. She was a teacher and principal before moving into Initial Teacher Education, where she has led foundational units of study in pedagogy, sociology, psychology and philosophy. Her research interests include school experience, alternative paths of learning, Initial Teacher Education pedagogy, and innovative qualitative methodologies. Current research projects include the diversity of pre-service teacher apprenticeships of observation and disrupting evidence-based practice in education. You can find her on LinkedIn and on X.

One provocative question: what on earth does evidence-based really mean?

This post was written before Alan Tudge took leave from his position as the Minister for Education. But he’s not the only one to bang on about ‘evidence’ without really being clear what he means.

There can be little argument with wanting university Schools of Education to impart to their students knowledge premised on systematically-acquired evidence. It is irrefutable that teacher educators want their students to leave university and enter the classroom confident in the delivery of best practices. However, the requirement for ‘evidence-based practice’ is in danger of becoming a political polemic in which knowledge may be obfuscated by ideology, rather than being the outcome of systematic investigation.

Writing in The Australian, Paul Kelly ‘reflects’ on the then Federal Education Minister Alan Tudge’s ‘drive’ to ensure universities impart, ‘…evidence-based practices such as phonics and explicit teaching instruction methodologies.’ The former Minister issues a warning that he will use ‘the full leverage’ of the Federal Government’s $760m budget to insist ‘…evidence-based practices are taught…’ in universities. Yet the threat is based more on the assumption that evidence-based practices are not being taught in our universities than on any substantial evidence that they are not.

It is ironic the former Minister should argue for something on the basis of a lack of evidence. Aside from this point, questions arise around the nature of evidence the former Minister considers to be bona fide in relation to practice. This is an issue around which there is a distinct lack of clarity. The former Minister clearly states what he does not want, which includes sociology and socio-cultural theory. His wish to see the demise of branches of thinking is questionable, given that it is usually dictatorial regimes that close down thought in their nation’s academies. He wants a tightly prescriptive curriculum for teacher education. In this respect, he appears to be following the Conservative administration of Boris Johnson in Britain, where a similar proposal has been tabled for English universities, resulting in some of the world’s top universities describing the plan as deeply flawed and having damaging consequences. If Boris Johnson wants something and Oxford and Cambridge consider it foolhardy, the weight of opinion surely tilts in favour of the academies.

The point remains as to the kind of ‘evidence’ upon which evidence-based practice is premised. What may pass as ‘evidence’ is not necessarily bona fide ‘knowledge’. All research, including educational research, involves knowledge that is acquired by means of rigorous, systematic investigation within clearly defined parameters. Even so, the outcomes of an investigation may be influenced by a number of factors, including: ontological perspective; the framing of the research questions; methodological approaches; analytical methods; researcher interpretation; and the degree to which any funding body remains impartial. Ultimately, before it can take its place in the pantheon of evidence, research must be interrogated by means of independent peer review and subsequently published in a highly respected, discipline-relevant journal. Even then, sometimes what may appear to be good evidence can prove to be disastrous in its outcomes. We do not know if the ‘evidence’ to which the former Minister refers satisfies these requirements. What is certain is that the ‘evidence’ used by Paul Kelly to suggest universities are ‘failing’ their students and the nation’s schools does not meet most of these standards of respected research.

It was an Australian doctor, William McBride, who in 1961 published a letter in The Lancet suggesting that thalidomide had negative consequences, and drew attention to the possible fallacy of evidence. Randomised controlled trials (RCTs) of the drug had suggested it was effective for controlling morning sickness, but it took observation of multiple cases to prove the drug was not fit for purpose.

So, what kind of ‘evidence’ is being referred to by the former Minister when he insists we need to ensure that pedagogy is ‘evidence-based’? Is he referring to evidence derived from primary research, such as randomised controlled trials and observational studies, or secondary research, including systematic reviews of the research literature? The fact is there is no single type of evidence. It is generally recognised that different evidence types have different methodological strengths. At the pinnacle of the ‘hierarchy of evidence’ are systematic reviews, followed by RCTs, cohort studies, and then case-controlled studies, case reports and finally expert opinion. Without identifying the type of evidence to which he refers, the former Minister appears to resort to lay opinion disguised as evidence.

Without clarity of thought, political policy based on vague supposition could lead to prescriptive measures that result in ‘damaging consequences’. As the thalidomide example cited above demonstrates, a single type of evidence is not always sufficient proof, and multiple types of evidence may be necessary to triangulate knowledge. Rather than denouncing certain disciplines of thought and prescribing others, perhaps the way forward is to systematically interrogate different types of evidence in order to evaluate their efficacy as bona fide knowledge. The best way to do this is by means of teacher-academics and teacher-practitioners working collaboratively, across multiple settings, engaging in systematic research, and cross-referencing results. For this to happen, there needs to be a commitment by government to fund, not cut, educational research. Australia has some of the finest Schools of Education in the world; they are staffed by dedicated academics who want the best for their students and the best for the nation’s school children. What universities need is a knowledge-rich government, not political polemic that does not even reach the baseline of the ‘hierarchy of evidence’.

Paul Gardner is a Senior Lecturer in the School of Education at Curtin University. He is also the United Kingdom Literacy Association’s Country Ambassador to Australia. Paul specialises in research around writer identity, compositional processes and pedagogy. He has written about process drama, EAL/D, multicultural education and collaborative learning. His work is premised upon inclusion and social justice. Twitter @Paugardner

The problem with using scientific evidence in education (why teachers should stop trying to be more like doctors)

For teachers to be like doctors, and base practice on more “scientific” research, might seem like a good idea. But medical doctors are already questioning the narrow reliance in medicine on randomised controlled trials that Australia seems intent on implementing in education.

In randomised controlled trials of new drugs, researchers get two groups of comparable people with a specific problem and give one group the new drug and the other group the old drug or a placebo.  No one knows who gets what. Not the doctor, not the patient and not the person assessing the outcomes. Then statistical analysis of the results informs guidelines for clinical practice. 

In education, though, students are very different from each other. Unlike those administering placebos and real drugs in a medical trial, teachers know if they are delivering an intervention. Students know they are getting one thing or another. The person assessing the situation knows an intervention has taken place. Constructing a reliable educational randomised controlled trial is highly problematic and open to bias.

As a doctor and teacher thinking, writing and researching together we believe that a more honest understanding of the ambivalences and failures of evidence-based medicine is essential for education.

Before Australia decides teachers need to be like doctors, we want to tell you what is happening and give you some reasons why evidence-based medicine itself is said to be in crisis.

1. Randomised controlled trials are just one kind of evidence

Medicine now recognises a much broader evidence base than just randomised controlled trials. Other kinds of medical evidence include: practical “on-the-job” expertise; professional knowledge; insights provided by other research such as case studies; intuition; wisdom gained from listening to patient histories and discussions with patients that allow for shared decision-making or negotiation.

Privileging randomised controlled trials allows them to become sticks that beat practitioners into uniformity of practice, no matter what their patients want or need. Such practitioners become “cookbook” doctors or, in education, potentially, “cookbook” teachers. The best and most recent forms of evidence based medicine value a broad range of evidence and do not create hierarchies of evidence. Education policy needs to consider this carefully and treat all forms of evidence equally.

2. Medicine can be used as a bully

Teaching is a feminised profession, with a much lower status than medicine. It is easy for science to exert a masculinist authority over teachers, who are required to be ever more scientific to seem professional.  They are called on to be phallic teachers, using data, tools, tests, rubrics, standards, benchmarks, probes and scientific trials, rather than “soft” skills of listening, empathising, reflecting and sharing.

A Western scientific evidence-base for practice similarly does not value Indigenous knowledges or philosophies of learning. Externally mandated guidelines also negate the concepts of student voice and negotiated curriculum. While confident doctors know the randomised controlled trial-based statistics and effect sizes need to be read with scepticism, this is not so easy for many teachers. If randomised controlled trial-based guidelines are to rule teaching, teachers will also potentially be monitored for compliance with guidelines they may not fully understand or accept, and which may potentially harm their students.

3. Evidence based medicine is about populations, not people

While medical randomised controlled trials save lives by demonstrating the broad effects of interventions, they make individuals and their needs harder to perceive and respect.  Randomised controlled trial-based guidelines can mean that diverse people are forced to conform to simplistic ideals. Rather than starting with the patient, the doctor starts with the rule. Is this what we want for teaching? When medical guidelines are applied in rigid ways, patients can be harmed.

Trials cannot be done on every single kind of person and so inevitably, many individuals are forced to have treatments that will not benefit them at all, or that are at odds with their wishes and beliefs. Educators need to ensure that teachers, not bureaucrats or researchers, remain the authority in their classrooms.

5. Scientific evidence gives rise to gurus

Evidence-based practice can give rise to the cult of the guru. Researchers such as John Hattie, and their trademarked programs like “Visible Learning” based on apparently infallible science, can rapidly colonise and dominate education. Yet their medicalised glamour disguises the reality that there is no universal and enduring formula for “what works”.

In 2009, medical evidence advised that healthy people should take aspirin to prevent heart attacks. Yet also in 2009, new medical evidence “proved” that the harms of healthy people taking aspirin outweigh the benefits.

In 2009, in his book Visible Learning: A synthesis of over 800 meta-analyses relating to achievement, Hattie said class size does not matter. In 2014, further research found that reducing class size has an important and lasting impact, especially for students from disadvantaged backgrounds.

While medical-style guidelines may seem to have come from God, such guidelines, even in medicine are often multiple and contradictory. The “cookbook” teacher will always be chasing the latest guideline, disempowered by top-down interference in the classroom.

In medicine, over five years, fifty percent of guideline recommendations are overturned by new evidence. A comparable situation in education would create unimaginable turmoil for teachers.

6. Evidence-based practice risks conflicts of interest

Educational publishers and platforms are very interested in “scientific” evidence.  If a researcher can “prove” an intervention works and should be applied to all, this means big dollars. Randomised controlled trials in medicine routinely produce outcomes that are to the benefit of industry. Only certain trials get funded. Much unfavourable research is never published. Drug and medical companies set agendas rather than responding to patient needs, in what has been described as a guideline “factory”.

Imagine how this will play out in education. Do we want what happens in classrooms to be dictated by profit driven companies, or student-centred teachers?

What needs to happen?

We call for an urgent halt to the imposition of ‘evidence-based’ education on Australian teachers, until there is a fuller understanding of the benefits and costs of narrow, statistical evidence-based practice. In particular, education needs protection from the likely exploitation of evidence-based guidelines by industries with vested interests.

Rather than removing teacher agency and enforcing subordination to gurus and data-based cults, education needs to embrace a wide range of evidence and reinstate the teacher as the expert who decides whether or not a guideline applies to each student.

Pretending teachers are doctors, without acknowledging the risks and costs of this, leaves students consigned to boring, standardised and ineffective cookbook teaching. Do we want teachers to start with a recipe, or the person in front of them?

Here is our paper for those who want more: A broken paradigm? What education needs to learn from evidence-based medicine by Lucinda McKnight and Andy Morgan

Dr Lucinda McKnight is a pre-service teacher educator and senior lecturer in pedagogy and curriculum at Deakin University, Melbourne. She is also a qualified health and fitness professional. She is interested in the use of scientific and medical metaphor in education. Lucinda can be found on Twitter @LucindaMcKnigh8

Dr Andy Morgan is a British Australian medical doctor and senior lecturer in general practice at Monash University, Melbourne. He has an MA in Clinical Education from the Institute of Education, UCL, London. His research interests are in consultation skills and patient-centred care. He is a former fellow of the Royal College of General Practitioners, and current fellow of the Australian Royal College of General Practitioners.



What’s good ‘evidence-based’ practice for classrooms? We asked the teachers, here’s what they said

Calls for Australian schools and teachers to engage in ‘evidence-based practice’ have become increasingly loud over the past decade. Like ‘quality’, it’s hard to argue against evidence or the use of evidence in education, but also like ‘quality’, the devil’s in the detail: much depends on what we mean by ‘evidence’, what counts as ‘evidence’, and who gets to say what constitutes good ‘evidence’ of practice.

In this post we want to tell you about the conversations around what ‘evidence’ means when people talk about evidence-based practice in Australian schools, and importantly we want to tell you about our research into what teachers think good evidence is.

Often when people talk about ‘evidence’ in education they are talking about two different types of evidence. The first is the evidence of teacher professional judgment collected and used at classroom level involving things like student feedback and teacher self-assessment. The second is ‘objective’ or clinical evidence collected by tools like system-wide standardised tests.

Evidence of teacher professional judgment

This type of evidence is represented in the Australian Teacher Performance and Development Framework. For example, the framework suggests that good evidence of teachers’ practice is rich and complex, requiring that teachers possess and use sharp and well-honed professional judgement. It says: “an important part of effective professional practice is collecting evidence that provides the basis for ongoing feedback, reflection and further development. The complex work of teaching generates a rich and varied range of evidence that can inform meaningful evaluations of practice for both formative and summative purposes” (p.6). It goes on to suggest that sources of this kind of evidence might include observation, student feedback, parent feedback and teacher self-assessment and reflection, among others.

‘Objective’ evidence

The second discussion around evidence promotes good evidence of practice as something that should be ‘objective’ or clinical, something that should be independent of the ‘subjectivity’ of teacher judgement. We see this reflected in, for example, the much lauded “formative assessment tool” announced in the wake of Gonski 2.0 and to be developed by KPMG. The tool will track every child and ‘sound alarms’ if a child is slipping behind. It aims to remedy the purportedly unreliable nature of teacher assessment of student learning by standardising formative assessment practices. Indeed, the Gonski 2.0 report is very strongly imbued with the idea that evidence of learning that relies on teacher professional judgement is in need of being overridden by more objective measures.

But what do teachers themselves think good evidence is?

We’ve been talking to teachers about their understanding and use of evidence, as part of our Teachers, Educational Data and Evidence-informed Practice project. We began with 21 interviews with teachers and school leaders in mid-2018, and have recently run an online questionnaire that gained over 500 responses from primary and secondary teachers around Australia.

Our research shows that teachers clearly think deeply about what constitutes good evidence of their practice. For many of them, the fact that students are engaged in their learning provides the best evidence of good teaching. Teachers were very expansive and articulate about what the indicators of such engagement are:

I know I’m teaching well based on how well my students synthesise their knowledge and readily apply it in different contexts. Also by the quality of their questions they ask me and each other in class. They come prepared to debate. Also when they help each other and are not afraid to take risks. When they send me essays and ideas they might be thinking about. Essentially I know I’m teaching well because the relationship is positive and students can articulate what they’re doing, why they’re doing it and can also show they understand, by teaching their peers. (Secondary teacher, NSW)

Furthermore, teachers know that ‘assessment’ is not something that stands independent of them – that the very act of using evidence to inform practice involves judgement. Their role in knowing their students, knowing about learning, and assessing and supporting their students to increase their knowledge and understanding is crucial. Balanced and thoughtful assessment of student learning relies on knowledge of how to assess, and of what constitutes good evidence.

Good evidence is gathering a range of pieces of student work to use to arrive at a balanced assessment. I believe I am teaching well when the student data shows learning and good outcomes. (Primary teacher, SA)

Gathering good evidence of teaching and learning is an iterative process; that is, it is a process of evaluating and adjusting that teachers constantly repeat and build on. It is part of the very fabric of teaching, and something that good teachers do every day in order to make decisions about what needs to happen next.

I use strategies like exit cards sometimes to find out about content knowledge and also to hear questions from students about what they still need to know/understand. I use questioning strategies in class and make judgements based on the answers or further questions of my students. (Secondary teacher, Vic)

I get immediate feedback each class from my students.  I know them well and can see when they are engaged and learning and when I’m having very little effect. (Secondary teacher, Qld)

Where does NAPLAN sit as ‘evidence’ for teachers?

Teachers are not afraid to reflect on and gather evidence of their practice, but too often, calls for ‘evidence-based practice’ in education ignore the evidence that really counts. Narrow definitions of evidence where it is linked to external testing are highly problematic. While external testing is part of the puzzle, it can be harmful to use that evidence for purposes beyond what it can really tell us – as one of us has argued before. And the teachers in our study well understood this. For them, NAPLAN data, for instance, was bottom of the list when it comes to evidence of their practice, as seen in the chart below.

This doesn’t mean they discount the potentially, perhaps partially, informative value in such testing (after all, about 72% think it’s at least a ‘somewhat’ valid and reliable form of evidence), but it does mean that, in their view, the best evidence is that which is tied to the day to day work that goes on in their classrooms.

Evidence rated from not useful to extremely useful by teachers in our survey

Teachers value a range of sources of evidence of their practice, placing particular emphasis on that which has a front row seat to their work, their own reflections and observations, and those of the students they teach. Perhaps this is because they need this constant stream of information to enable them to make the thousands of decisions they make about their practice in the course of a day – or an hour, or a minute. The ‘complex work of teaching’ does not need a formalised, ‘objective’ tool to help it along. Instead, we need to properly recognise the complexity of teaching, and the inherent, interwoven necessity of teacher judgement that makes it what it is.

What do teachers want?

Teachers were very clear about what they didn’t want.

Teachers are time poor. We are tired. It sounds good to do all this extra stuff but unless we are given more time it will just be another layer of pressure. (Secondary teacher, NSW)

Teachers believe in and want to rely on useful data but they don’t have the time to do it well. (Primary teacher, NSW)

It must be practical, helpful and not EXTRA. (Primary teacher, Vic)

They don’t want “extra stuff” to do.

They want relevant, high quality and localised professional learning. They want to better understand and work with a range of forms of useful data and research. They find supported in-school teacher research particularly useful, along with access to curated readings with classroom value. Social media also features as a useful tool for teachers.

Our research is ongoing. Our next task is to work further with teachers to develop and refine resources to support them in these endeavours.

We believe teachers should be heard more clearly in the conversations about evidence; policy makers and other decision-makers need to listen to teachers. The type of evidence that teachers want and can use should be basic to any plan around ‘evidence-based’ or ‘evidence-informed’ teaching in Australian schools.

Dr Nicole Mockler is Associate Professor of Education at the Sydney School of Education and Social Work at the University of Sydney. She is a former teacher and school leader, and her research and writing primarily focus on education policy and politics and teacher professional identity and learning. Her recent scholarly books include Questioning the Language of Improvement and Reform in Education: Reclaiming Meaning (Routledge, 2018) and Engaging with Student Voice in Research, Education and Community: Beyond Legitimation and Guardianship (Springer, 2015), both co-authored with Susan Groundwater-Smith. Nicole is currently Editor in Chief of The Australian Educational Researcher. Nicole is on Twitter @nicolemockler

Dr Meghan Stacey is a lecturer in the sociology of education and education policy in the School of Education at the University of New South Wales. Taking a particular interest in teachers, her research considers how teachers’ work is framed by policy, as well as the effects of such policy for those who work with, within and against it. Meghan completed her PhD at the University of Sydney in 2018. Meghan is on Twitter @meghanrstacey

Q&A: ‘what works’ in ed with Bob Lingard, Jessica Gerrard, Adrian Piccoli, Rob Randall, Glenn Savage (chair)

See the full video here

Evidence, expertise and influence are increasingly contested in the making of Australian schooling policy.

More than ever, policy makers, researchers and practitioners are being asked to defend the evidence they use, justify why the voices of some experts are given preference over others, and be critically aware of the networks of influence that determine what counts as evidence and expertise.

The release of the ‘Gonski 2.0’ report raises a number of complex questions about the use of evidence in the development of schooling policies, and the forms of expertise and influence that are increasingly dominant in shaping conversations about the trajectory of schooling reform.

The report signals an ever-increasing presence of federal government influence in shaping schooling policy in Australia’s federal system. It also strongly reflects global shifts towards a “what works” reform narrative, which frames policy decisions as only justifiable in cases where there is evidence of demonstrable impact.

Proposals such as the creation of a ‘national research and evidence institute’ by the Labor party, and related proposals by the Australian Productivity Commission to create a national ‘education evidence base’, signal a potentially new era of policy making in Australia, in which decisions are guided by new national data infrastructures and hierarchies of evidence.

These developments raise serious questions about which kinds of evidence will count (and can be counted) in emerging evidence repositories, which experts (and forms of expertise) will be able to gain most traction, how developments might change the roles of federal, state and national agencies in contributing to evidence production, and the kinds of research knowledge that will (or will not) be able to gain traction in national debates.

On November 6th, I hosted a Q&A Forum at the University of Sydney, co-sponsored by the AARE ‘Politics and Policy in Education’ Special Interest Group and the School and Teacher Education Policy Research Network at the University of Sydney.

It featured Adrian Piccoli (Director of the UNSW Gonski Institute for Education), Jessica Gerrard (senior lecturer in education, equity and politics at the University of Melbourne), Bob Lingard (Emeritus Professor at the University of Queensland and Professorial Research Fellow at the Australian Catholic University) and Rob Randall (CEO of the Australian Curriculum, Assessment and Reporting Authority).

What follows is an edited version of the event, featuring some key questions I posed to the panelists and some of their highlight responses.


Glenn: I want to start by considering the changing role and meaning of ‘evidence’ and how different forms of evidence shape conditions of possibility for education. What do you see as either the limits or possibilities of “what works” and “evidence-based” approaches to schooling reform?

Bob: It seems to me the ‘what works’ idea works with a sort of engineering conception of the relationship between evidence, research, policy making and professional practice in schools, and I think it also over simplifies research and evidence … I would prefer a relationship between evidence (and evidences of multiple kinds) to policy and to practice which was more of an enlightenment relationship rather than an engineering one … I think policy making and professional practice are really complex practices, and I think we can only ever have evidence-informed policy and evidence-informed professional practice, I don’t think we can have evidence-based … I think ‘what works’ has an almost inert clinical construction of practice. And I think there’s an arrogant certainty.

Adrian: The problem with the ‘what works’ movement is that it lends itself, particularly at a political level, to there being a ‘silver bullet’ to education improvement and the thing you launch the silver bullet on is a press release. I’ve always said the press release is the greatest threat to good education policy because it sounds good, in the lead up to an election, to say things like ‘independent public schools work’ so fund them, or it might be a phonics check, so let’s fund this because it works, but I think it lends itself to that kind of one-dimensional approach to education policy. But education reform is an art. What makes the painting great? It’s not the blue or the yellow or the red, it’s actually the right combination of those things. Education, at a political level, people can try to boil it down to things that are too simple.

Rob: I actually think the term [what works] is a useful term. If I go back to when I first started teaching, it’s a good question, ‘what works?’ Can you give me some leads? It’s not a matter of saying ‘this is it entirely’, but we’ve got to be careful of how the language enables us and not continue to diss it.

Glenn: NSW has created its Centre for Education Statistics and Evaluation, which describes itself as Australia’s first ‘data hub’ in education that will tell us “what works” in schools and ensure decisions are evidence-informed. On the Centre’s website, it tells us that NSW works with the concept of ‘an evidence hierarchy’. On top of the hierarchy is ‘the gold standard’, which includes either ‘meta-analyses’ or ‘randomised controlled trials’. To me this raises a question: how might the role of researchers be shifting now ‘the best’ evidence is primarily based on large-scale and quantitative methods?

Jess: To me it’s a funny situation to be in when your bread and butter work is producing knowledge and evidence but you find yourself arguing against the framing and enthusiastic uptake of something like ‘evidence-based policy’. Particularly concerning is this hierarchical organisation of evidences where randomised controlled trials, statistical knowledge and other things like meta-analyses are thought to be more certain, more robust, more concrete than other forms of research knowledge, such as qualitative in-depth interviews with school teachers about their experiences. The kind of knowledge that is produced through a statistical or very particular causal project becomes very narrow because it has to bracket out so many other contextual factors in order to produce ‘a certainty’ about social phenomena. We can’t rely on a medical model, where RCTs come from, for something like classroom practice, and you can see this in John Hattie’s very influential book Visible Learning. You just have to look at the Preface, where he says that he bracketed out of his study any factor that was out of school. When you think about that it becomes unsurprising that the biggest finding is that teachers have the most impact, because you’ve bracketed out all these other things that clearly have an impact … With the relationship between politics and policy, I think it’s really interesting that, politically speaking, evidence-based policy becomes very popular around some reforms, yet not around other reforms. So school autonomy, great example, there’s no evidence to say that has a positive impact on student achievement but yet it gets rolled out, there’s no RCT on that, there’s no RCT on the funding of elite private schools, but yet we do these things. I think we can get into a trap of ‘policy-led evidence’ when political interests try to wrestle evidence for their own purposes.

Glenn: Let’s consider which ‘experts’ tend to exert the most influence in schooling. For example, a common claim is that some groups and individuals might get more of a say than others in steering debates about schooling. In other words, not everyone ‘gets a seat at the table’ when decisions are made – and if they do, voices are not always equally heard. A frequent criticism, for example, is that certain think tanks or lobby groups, or certain powerful and well-connected individuals, are often able to exert disproportionate power and influence. Would any of you like to comment on those dynamics and the claim that it might not be an even playing field of influence?

Bob: I think ‘think tank research’ is very different from the kind of research that’s done by academics in universities. The think tank usually has a political-ideological position, it usually takes the policy problem as given rather than thinking about the construction, I think it does research and writes reports which have specific audiences in mind, one the media and two the politicians. I remember once when I did a report for a government and the minister told me my problem was that I was ‘two-handed’. I’d say ‘on the one hand this might be the case, and on the other hand…’, but what he wanted was one-handed research advice, and I think in some ways the think tanks, that’s what they do.

Glenn: Another important dimension here is that even when one’s voice is heard, often what ‘the public’ hears is far from the full story. And I think this is where we need to consider the role of the media and the 24-hour news cycle we now inhabit. For example, so much of what we hear about ‘the evidence’ driving schooling reform is filtered through the media; but this is invariably a selective version of the evidence. Do any of you have any thoughts or reflections on this complex dynamic between the media, experts, evidence and policy?

Adrian: Good education policy is really boring, right? It’s boring for the Daily Telegraph, it’s boring for the Sydney Morning Herald, it’s boring for the ABC, Channel 7, it’s boring. You talk curriculum, you talk assessment, you talk pedagogy, I mean when was the last time you saw the ‘pedagogy’ word in a news article? … what’s exciting is ‘you know what, here’s the silver bullet’ … and the public and media and the political process doesn’t have the patience for sound evidence-based education reform.

Rob: I think we’re at risk of underestimating the capability of the profession in terms of interpreting and engaging with this. I think we’re at risk of under-estimating the broader community.

Glenn: To me, it seems there’s something peculiar in terms of how expertise about education is constructed. For example, in the medical profession, many would see the expertise as lying with the practitioners themselves, the doctors, surgeons, and so on, who “possess” the expertise and are, therefore, the experts. If education mirrored this, then surely the experts would be the teachers and school leaders – and expertise would lie in their hands? But this often seems to be far from the way expertise is talked about in schooling. Instead, it seems the experts are often the economists, statisticians and global policy entrepreneurs who have little to do with schools. Why is it that the profession itself seems to so often be obscured in debates about expertise and schooling reform?

Jess: What we see now is because education and schooling is such a politically invested enterprise, with huge money attached to it, it’s never really been wrestled from the hands of government in terms of a professional body. So, a body like AITSL, for instance, which is meant to stand in as a kind of professional body, isn’t really representative of the profession, it doesn’t have those kinds of links to teachers themselves as the medical equivalent does. So, we’re in a curious state of affairs, I think you’re right Glenn, where who counts as having expertise are often not those who are within the street level, within the profession … We don’t have enough of an opportunity to hear from teachers themselves, to have unions and teachers as part of the public discussion, and when they are a part of the discussion they’re often positioned as being argumentative or troublesome as opposed to contributing to a robust public debate about education.

Bob: As we’ve moved into the kind of economies we have, the emphasis on schooling as human capital and so on, it is those away from schooling, the economists and others, who I think have formulated the big macro policy, rather than the knowledge of the profession.

Glenn: Up to this point we’ve been mainly talking about influence in terms of specific individuals, or groups, but also I think certain policies and forms of data also exert significant influence. I need only mention the term NAPLAN in front of a group of educators to inspire a flood of conversations (and often polarised opinion) about how this particular policy and its associated data influence their work. Is it a stretch to say that these policy technologies and data infrastructures now serve as political actors in their own right? Is there a risk when we start seeing data itself as a “source of truth” beyond the politics of its creation?

Jess: I think it’s absolutely seen in that way and I think that’s the problem with the hierarchy of knowledge or evidence. There’s a presumption that these so-called higher or more stable forms of knowledge can stand above the messiness of everyday life in schools or the complexity of social and cultural phenomena … there’s no way a number can convey the complexity, but because they seem so tantalisingly certain, they then have a life of their own.

Adrian: NAPLAN is the King Kong of education policy because it started off relatively harmless on this little island and now it’s ripping down buildings and swatting away airplanes. I mean it’s just become this dominant thing in public discourse around education.

Rob: Let’s not get naïve about how people are using it [NAPLAN]. People use the data in a whole range of ways. It’s not that it’s good on one side and bad on the other … now if we want to, we could take the data away, or we could actually say, ‘let’s have a more complete discussion about it’ … give parents the respect they deserve, I do not accept that there’s a whole bunch of parents out there choosing schools on the basis of NAPLAN results.

Glenn: To finish tonight, I want to pose a final ‘big sky’ question. The question is: If you had the power to change one thing about how the politics of evidence, expertise or influence work in Australian schooling policy, what would that be?

Bob: I would want to give emphasis to valuing teacher professional judgment within the use of data and have that as a central element rather than having the data driving.

Adrian: I would make it a legal requirement that systems and governments have to put the interests of children ahead of the interests of adults in education policy.

Jess: I think I’m going to give a sociologist’s answer, which is to say that I think what I would want to see is greater political commitment to acknowledging the actual power that is held in the current production of data and the strategic use of that. The discussion also needs to address the ethical and political dimensions of education and schooling beyond what data can tell us.

Rob: I would like to pursue the argument about increasing the respect and nature, the acknowledgment of, and the expectation of, the profession … I think there is a whole bunch of teachers out there who do a fantastic job … given their fundamental importance to the community, to the wellbeing of this country going forward I’d be upping the ante for the respect for and expectation of teachers.

See the full video here

Glenn C. Savage is a senior lecturer in education policy and sociology of education at the University of Western Australia. His research focuses on education policy, politics and governance at national and global levels, with a specific interest in federalism and national schooling reform. He currently holds an Australian Research Council ‘Discovery Early Career Research Award’ (DECRA) for his project titled ‘National schooling reform and the reshaping of Australian federalism’ (2016-2019).