
Science and writing: Why AERO’s narrow views are a big mistake

Will narrow instructional models promoted by AERO crowd out quality teaching and learning?

A recent ‘practice guide’ from the Australian Education Research Organisation (AERO), ‘Writing in Science’, raises significant questions about the peak body’s narrow views on teaching and learning. Is AERO leading us in the wrong direction when it comes to supporting teachers to provide a rich and meaningful experience for Australian students?

The guide explains the nature of simple, compound and complex sentences in science. It presents samples of student writing, along with feedback teachers could provide to improve the writing. There are suggestions for teachers to generate and unpack exemplar sentences and lists of nouns and adjectives, supported by practice exercises.

Yet a close reading shows these analyses fall well short of best practice in analysing science writing. Further, this advice is missing any comprehensive linguistic account of grammar as a resource for meaning in text construction; any critical perspective on the function of different kinds of texts in making sense of science; and any attention to the commitment of teachers of science to developing science ideas.

We are world leaders

Yet Australian researchers in literacy are world leaders in thinking about the functions of text in generating meaning across different genres, and about writing to learn in science.

AERO has ignored such research. It  sacrifices what we know about engaging and meaningful teaching and learning practice on the altar of its ideological commitment to impoverished interpretations of explicit teaching. 

While the practice guide is useful for alerting teachers to the importance of explicit attention to writing in science, it could do better by drawing on our rich research base around meaningful pedagogies (which include explicit teaching elements) that engage students and enrich science teachers’ practice.

This story of ignoring a wealth of sophisticated Australian and international research to enforce a simplistic instructional model is repeated across multiple curriculum areas, including science and mathematics. AERO’s ‘evidence-based’ model of a ‘science of learning’ is based exclusively on studies involving one research methodology. It uses experimental and control conditions that inevitably restrict the range of teaching and learning strategies compared to those found in real classrooms.

The research findings of the Australian and international community of mathematics and science education researchers, who have worked with students and teachers over many decades to establish fresh theoretical perspectives and rich teaching and learning approaches, have been effectively silenced.

What underpins this narrowing?

What underpins this narrowing of conceptions of teaching and learning that seems to have taken the Australian education system by storm? AERO bases its instructional model almost entirely on the theoretical framing of Cognitive Load Theory (CLT), particularly the research of John Sweller, who over four decades has established an impressive body of work outlining the repercussions of limitations in working memory capacity.

Sweller argues that when students struggle to solve complex problems with minimal guidance, they can fail to develop the schema that characterise expert practice. His conclusion is that teachers need to provide ‘worked examples’ that students can follow and practise to achieve mastery, an approach aligned with the ‘I do’, ‘we do’, ‘you do’ advocacy of AERO and the basis of the mandated pedagogy models of both New South Wales and Victoria.

The argument that students can lose themselves in complexity if not appropriately guided is well taken. But this leap from a working memory problem to the explicit ‘worked example’ teaching model fails to acknowledge the numerous ways, described in the research literatures of multiple disciplines, that teachers can support students to navigate complexity. In mathematics and science these include the strategic setting up of problems, guided questioning and prompting, preparatory guidance, communal sharing of ideas, joint teacher-student text construction, and explicit summing up of the schema emerging from students’ solutions.

What really works

The US National Council of Teachers of Mathematics identifies seven, not one, effective mathematics teaching practices, some but not all of which involve direct instruction. An OECD analysis of PISA-related data identified three dominant mathematics teaching strategies, of which direct instruction was the most prevalent and the least related to mathematics performance, with active learning and, in particular, cognitive engagement strategies being more effective.

Sweller himself (1998) warned against overuse of the worked example as a pedagogy, citing student engagement as an important factor. Given these complexities, AERO’s silencing of the international community of mathematics and science educators seems stunningly misplaced. 

This global mathematics and science education research represents a rich range of learning theories, pedagogies, conceptual and affective outcomes, and purposes. The evidence in this literature overwhelmingly rejects the inquiry/direct instruction binary that underpins the AERO model. Further, the real challenge with learning concepts like force, image formation, probability or fractional operations has less to do with managing memory than with arranging the world to be seen in new ways. 

To be fair, the CLT literature has useful things to say about judging the complexity of problems, and the strong focus on teacher guidance is well taken, especially when the procedures and concepts to be learned are counter-intuitive. However, CLT research has mainly concerned problems that are algorithmic in nature, for which an explicit approach can more efficiently lead to the simple procedural knowledge outcomes involved. 

The short-term advantage disappears

Even here, studies have shown that over the long term, the short-term advantage of direct instruction disappears. The real issues involved in supporting learning of complex ideas and practices are deciding when to provide explicit support, and of what type. This is where the teacher’s judgment is required, and it will depend on the nature of the knowledge, and the preparedness of students. To reduce these complex strategies to a single approach is the real offence of the AERO agenda, and of the policy prescriptions in Victoria and NSW. 

It amounts to the de-professionalisation of teachers when such decisions are short-circuited. 

Another aspect of this debate is the claim that a reform of Australian teaching and learning is needed because of the poor performance of students on NAPLAN and on international assessments such as PISA and TIMSS. While it is certainly true that we could do much better in education across all subjects, particularly with respect to the inequities in performance based on socio-economic factors and Indigeneity, our relative performance on international rankings is more complex than claimed.

Flies in the face of evidence

To claim this slippage results from overuse of inquiry and problem-solving approaches in science and mathematics flies in the face of evidence. In both subjects, teacher-centred approaches currently dominate. An OECD report providing advice for mathematics teachers, based on the 2012 PISA mathematics assessment, revealed that Australian students ranked ninth globally on self-reported memorisation strategies, and third-last on elaboration strategies (that is, making links between tasks and finding different ways to solve a problem). The latter strategies indicate the capability to solve the more difficult problems.

While it may be true that some versions of inquiry in school science and mathematics lack necessary support structures, the proposed corrective, a blanket imposition of explicit teaching, is shown by the wider evidence to be a misguided overreaction.

How has it happened, that one branch of education research misleadingly characterised as ‘the’ science of learning, together with a narrow and hotly contested view of what constitutes ‘evidence’ in education, has become the one guiding star for our national education research organisation to the exclusion of Australian and international disciplinary education research communities? 

Schools are being framed as businesses

It has been argued AERO ‘encapsulates politics at its heart’ through its embedded links to corporate philanthropy and business relations and a brief to attract funding into education. Indeed, schools are increasingly being bombarded with commercial products. Schools are being framed as businesses. 

The teaching profession over the last decade has suffered concerted attacks from the media and from senior government figures. Are we seeing moves here to systematically de-professionalise teachers and restrict their practice through ‘evidence based’ resources focused on ‘efficient’ learning? Is this what we really want as our key purpose in education? In reality, experienced teachers will not feel restricted by these narrow versions of explicit teaching pedagogies and will engage their students in varied ways. How can they not? 

If the resources now being developed and promoted under the AERO rubric, as with ‘Writing in Science’, follow this barren prescription, we run the danger of a growing erosion of teacher agency and impoverishment of student learning.

We need a richer view of pedagogy

What we need, going forward, is a richer view of pedagogy based on the wider research literature, rather than the narrow base that privileges procedural practices. We need to engage with a more complex and informed discussion of the core purposes of education, one that is not circumscribed by a narrow insistence on NAPLAN and international assessments. We need to value our teaching profession and recognise the complex, relational nature of teaching and learning. Our focus should be on strengthening teachers’ contextual decision making, not on constraining them in ways that will reduce their professionalism, and ultimately their standing.

  

Russell Tytler is Deakin Distinguished Professor and Chair of Science Education at Deakin University. He researches student reasoning and learning through the multimodal languages of science, socio-scientific issues and reasoning, school-community partnerships, and STEM curriculum policy and practice. Professor Tytler is widely published and has led a range of research projects, including current STEM projects investigating a guided inquiry pedagogy for interdisciplinary mathematics and science. He is a member of the Science Expert Group for PISA 2015 and 2025.

Proactive and preventative: Why this new fix could save reading (and more)

When our research on supporting reading, writing, and mathematics for older, struggling students was published last week, most of the responses missed the heart of the matter.

In Australia, we have always used “categories of disadvantage” to identify students who may need additional support at school and to provide funding for that support. Yet students who do not fit neatly into those categories have slipped through the gaps, and for many, the assistance has come far too late, or achieved far too little. Despite an investment of $319 billion, little has changed, with inequity still baked into our schooling system.

Our systematic review, commissioned by the Australian Education Research Organisation, set out to identify the best contemporary model for identifying underachievement and providing additional support – a multi-tiered approach containing three levels, or “tiers”, that increase in intensity.

Figure: The multi-tiered model of support (de Bruin & Stocker, 2021).

We found that if schools get Tier 1 classroom teaching right – using the highest possible quality instruction that is accessible and explicit – the largest number of students make satisfactory academic progress. When that happens, resource-intensive support at Tier 2 or Tier 3 is reserved for those who really need it. We also found that if additional layers of targeted support are provided rapidly, schools can get approximately 95% of students meeting academic standards before gaps become entrenched.

The media discussion of our research focused on addressing disadvantages such as intergenerational poverty, unstable housing, and “levelling the playing field from day one” for students starting primary school through early childhood education. 

These are worthy and important initiatives to improve equality in our society, but they are not the most direct actions that need to be taken to address student underachievement. Yes, we need to address both, BUT the most direct and high-leverage approach for reducing underachievement in schools is by improving the quality of instruction and the timeliness of intervention in reading, writing, and mathematics.

Ensuring that Tier 1 instruction is explicit and accessible for all students is both proactive and preventative. It means that the largest number of students acquire foundational skills in reading, writing, and mathematics in the early years of primary school. This greatly reduces the proportion of students with achievement gaps from the outset. 

This is an area that needs urgent attention. The current rate of underachievement in these foundational skills is unacceptable, with approximately 90,000 students failing to meet national minimum standards. These students do not “catch up” on their own. Rather, achievement gaps widen as students progress through their education. Current data show that, on average, one in every five students starting secondary school is significantly behind their peers, with the skills expected of a student in Year 4.


For students in secondary school, aside from the immediate issues of weak skills in reading, writing and mathematics, underachievement can lead to early school leaving as well as school failure. Low achievement in reading, writing, and mathematics also means that individuals are more likely to experience negative long-term impacts post-school, including poorer employment and health outcomes, resulting in lifelong disadvantage. As achievement gaps disproportionately affect disadvantaged students, this perpetuates and reinforces disadvantage across generations. Our research found that it’s never too late to intervene and support these students. We also highlighted particular practices that are the most effective, such as explicit instruction and strategy instruction.

For too long, persistent underachievement has been disproportionately experienced by disadvantaged students, and efforts to achieve reform have failed. If we are to address this entrenched inequity, we need large-scale systemic improvement as well as improvement within individual schools. Tiered approaches, such as the Multi-Tiered System of Supports (MTSS), build on decades of research and policy reform in the US for just this purpose. These have documented success in helping schools and systems identify and provide targeted intervention to students requiring academic support. 

In general, MTSS is characterised by:

  • the use of evidence-based practices for teaching and assessment
  • a continuum of increasingly intensive instruction and intervention across multiple tiers
  • the collection of universal screening and progress monitoring data from students
  • the use of this data for making educational decisions
  • a clear whole-school and whole-of-system vision for equity

What is important and different about this approach is that support is available to any student who needs it. This contrasts with the traditional approach, where support is too often reserved for students identified as being in particular categories of disadvantage, for example, students with disabilities who receive targeted funding. When MTSS is correctly implemented, students who are identified as requiring support receive it as quickly as possible. 

What is also different is that the MTSS framework is based on the assumption that all students can succeed with the right amount of support. Students who need targeted Tier 2 support receive that in addition to Tier 1. This means that Tier 2 students work in smaller groups and receive more frequent instruction to acquire skills and become fluent until they meet benchmarks. The studies we reviewed showed that when Tiers 1 and 2 were implemented within the MTSS framework, only 5% of students required further individualised and sustained support at Tier 3. Not only did our review show that this was an effective use of resources, it also resulted in a 70% reduction in special education referrals. This makes MTSS ideal for achieving system-wide improvement in equity, achievement and inclusion.
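
To make the screening-and-support logic concrete, here is a minimal sketch of how universal screening and progress-monitoring data might drive tier placement in an MTSS-style framework. The scores, cut-points and function names are hypothetical illustrations only, not prescriptions from our review or from any particular MTSS implementation.

```python
# Hypothetical sketch of MTSS-style tier placement driven by screening data.
# All names, scores and cut-points are illustrative assumptions only.

def assign_tier(screening_score: float, benchmark: float, well_below: float) -> int:
    """Place a student in a support tier based on a universal screening score."""
    if screening_score >= benchmark:
        return 1  # Tier 1: quality classroom instruction is sufficient
    if screening_score >= well_below:
        return 2  # Tier 2: additional small-group, more frequent instruction
    return 3      # Tier 3: individualised, sustained intervention

def met_benchmark(progress_scores: list, benchmark: float) -> bool:
    """Progress monitoring: has the most recent check-in reached the benchmark?"""
    return bool(progress_scores) and progress_scores[-1] >= benchmark

# Example with made-up screening scores for five students:
scores = {"A": 82, "B": 64, "C": 45, "D": 91, "E": 58}
placements = {name: assign_tier(s, benchmark=60, well_below=50) for name, s in scores.items()}
print(placements)                                 # {'A': 1, 'B': 1, 'C': 3, 'D': 1, 'E': 2}
print(met_benchmark([52, 57, 63], benchmark=60))  # True: ready to step back down a tier
```

The point of the sketch is simply that support follows the data for any student who needs it, rather than a prior category of disadvantage, and that students step up or down tiers as progress-monitoring data come in.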

Our research could not be better timed. The National School Reform Agreement (NSRA) is currently being reviewed to make the system “better and fairer”. Clearly, what is needed is a coherent approach to improving equity and school performance that can be implemented across systems and schools, and across states and territories. To this end, MTSS offers a roadmap to achieve these targets, along with some lessons learned from two decades of “getting it right” in the US. One lesson is the importance of using implementation science to ensure MTSS is adopted and sustained at scale and with consistency across states. Another is the creation of national centres for excellence (e.g., for literacy: https://improvingliteracy.org) and technical assistance centres (e.g., for working with data: https://intensiveintervention.org) that can support school and system improvement.

While past national agreements in Australia have emphasised local variation across the states and territories, our research findings highlight that systemic equity-based reform through MTSS requires a consistent approach across states, districts, and schools. Implemented consistently and at scale, MTSS is not just another thing. It has the potential to be the thing that may just change the game for Australia’s most disadvantaged students at last.

Dr Kate de Bruin is a senior lecturer in inclusion and disability at Monash University. She has taught in secondary school and higher education for over two decades. Her research examines evidence-based practices for systems, schools and classrooms with a particular focus on students with disability. Her current projects focus on Multi-Tiered Systems of Support with particular focus on academic instruction and intervention. Dr Eugénie Kestel has taught in both school and higher education. She taught secondary school mathematics, science and English and currently teaches mathematics units in the MTeach program at Edith Cowan University. She conducts professional development sessions and offers MTSS mathematics coaching to specialist support staff in primary and secondary schools in WA. Dr Mariko Francis is a researcher and teaching associate at Monash University. She researches and instructs across tertiary, corporate, and community settings, specialising in systems approaches to collaborative family-school partnerships, best practices in program evaluation, and diversity and inclusive education. Professor Helen Forgasz is a Professor Emerita (Education) in the Faculty of Education, Monash University (Australia). Her research includes mathematics education, gender and other equity issues in mathematics and STEM education, attitudes and beliefs, learning settings, as well as numeracy, technology for mathematics learning, and the monitoring of achievement and participation rates in STEM subjects. Ms Rachelle Fries is a PhD candidate at Monash University. She is a registered psychologist and an Educational & Developmental registrar with an interest in working to support diverse adolescents and young people. Her PhD focuses on applied ethics in psychology.

AERO responds to James Ladwig’s critique

On Monday, EduResearch Matters published a post by Associate Professor James Ladwig critiquing the Australian Education Research Organisation’s report Writing development: what does a decade of NAPLAN data reveal?

AERO’s response is below, with additional comments from Associate Professor Ladwig. For more information about the statistical issues discussed, a more detailed Technical Note is available from AERO.

AERO: This article makes three key criticisms about the analysis presented in the AERO report, which are inaccurate.

Ladwig claims that the report lacks consideration of sampling error and measurement error in its analysis of the trends of the writing scores. In fact, those errors were accounted for in the complex statistical method applied. AERO’s analysis used both simple and complex statistical methods to examine the trends. While the simple method did not consider error, the more complex statistical method (referred to as the ‘Differential Item Analysis’) explicitly considered a range of errors (including measurement error, and cohort and prompt effects).

Associate Professor Ladwig: AERO did not include any of that in its report nor in any of the technical papers. There is no over-time DIF analysis of the full score – and I wouldn’t expect one. All of the DIF analyses rely on data that itself carries error (more below). There is no way for the educated reader to verify these claims without expanded and detailed reporting of the technical work underpinning this report. This is lacking in transparency, falls short of the standards we should expect from AERO and makes it impossible for AERO to be held accountable for its specific interpretation of its own results.

AERO: Criticism of the perceived lack of consideration of ‘ceiling effects’ in AERO’s analysis of the trends of high-performing students’ results omits the fact that AERO’s analysis focused on the criteria scores (not the scaled measurement scores). AERO used the proportion of students achieving the top 2 scores (not the top score), for each criterion, as the metric to examine the trends. Given only a small proportion of students achieved a top score for any criterion (as shown in the report statistics), there is no ‘ceiling effect’ that could have biased the interpretation of the trends.

Associate Professor Ladwig made his ‘ceiling effect’ comments while explaining how the NAPLAN writing scores are designed, not in relation to the AERO analysis.

AERO: The third major inaccuracy relates to the comments made about the ‘measurement error’ around the NAPLAN bands and the use of adaptive testing to reduce error. These are irrelevant to AERO’s analysis because the main analysis did not use scaled scores, it did not use bands, and adaptive testing is not applicable to the writing assessment.

Associate Professor Ladwig’s comment was about the scaling in relation to explaining the score development, not about the AERO analysis.

In relation to the AERO use of NAPLAN criterion score data in the writing analysis, however, please note that those scores are created either through scorer moderation processes or (increasingly, where possible) text-interpretative algorithms. Here again the reliability of these raw scores was not addressed, apart from one declared limitation, noted in AERO’s own terms:

Another key assumption underlying most of the interpretation of results in this report is that marker effects (that is, marking inconsistency across years) are small and therefore they do not impact on the comparability of raw scores over time. (p. 66)

This is where AERO has taken another shortcut, with an assumption that should not be made. ACARA has reported reliability estimates that could be included in the score analysis. It is readily possible to report those and use them in trend analyses.

AERO: A final point: the mixed-methods design of the research was not recognised in the article. AERO’s analysis examined the skills students were able to achieve at the criterion level against curriculum documents. Given the assessment is underpinned by a theory of language, we were able to complement quantitative with a qualitative analysis that specifically highlighted the features of language students were able to achieve. This was validated by analysis of student writing scripts.

Associate Professor Ladwig says this is irrelevant to his analysis. The logic of this is also a concern. Using multiple methods and methodologies does not correct for any that are technically lacking. In relation to the overall point of concern, we have a clear example of an agency reporting statistical results in a manner that evades external scrutiny, accompanied by an extreme media positioning. Any of the qualitative insights into the minutiae these numbers represent will probably be very useful for teachers of writing – but whether or not they are generalisable, big, or shifting depends on those statistical analyses themselves.

AERO’s writing report is causing panic. It’s wrong. Here’s why.

If ever there was a time to question public investment in developing reports using ‘data’ generated by the National Assessment Program, it is now, with the release of the Australian Education Research Organisation’s report ‘Writing development: What does a decade of NAPLAN data reveal?’

I am sure the report was meant to provide reliable diagnostic analysis for improving the function of schools. 

It doesn’t. Here’s why.

There are deeply concerning technical questions about both the testing regime which generated the data used in the current report, and the functioning of the newly created (and arguably redundant) office which produced this report.

There are two lines of technical concern which need to be noted. These concerns reveal reasons why this report should be disregarded – and why the media response is a beat-up.

The first technical concern for all reports of NAPLAN data (and any large scale survey or testing data) is how to represent the inherent fuzziness of estimates generated by this testing apparatus.  

Politicians and almost anyone outside of the very narrow fields reliant on educational measurement would like to talk about these numbers as if they are definitive and certain.

They are not. They are just estimates – and all of the summary statistics in these reports are just estimates.

The fact these are estimates is not apparent in the current report.  There is NO presentation of any of the estimates of error in the data used in this report. 

Sampling error is important, and, as ACARA itself has noted, (see, eg, the 2018 NAPLAN technical report) must be taken into account when comparing the different samples used for analyses of NAPLAN.  This form of error is the estimate used to generate confidence intervals and calculations of ‘statistical difference’.  

Readers who recall seeing survey results or polling estimates being represented with a ‘plus or minus’ range will recognise sampling error. 

Sampling error is a measure of the probability of getting a similar result if the same analyses were done again, with a new sample of the same size, with the same instruments, etc.  (I probably should point out that the very common way of expressing statistical confidence often gets this wrong – when we say we have X level of statistical confidence, that isn’t a percentage of how confident you can be with that number, but rather the likelihood of getting a similar result if you did it again.)  
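
To make that ‘plus or minus’ concrete, here is a minimal sketch of a margin of error for a mean. The mean, standard deviation and sample size are made-up illustrative numbers, not actual NAPLAN figures.

```python
# Minimal sketch: margin of error (half-width of an approximate 95% confidence
# interval) for a sample mean. All numbers are illustrative assumptions only.
import math

def margin_of_error(sd: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a mean: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

sample_mean = 550.0   # hypothetical mean scale score
sample_sd = 70.0      # hypothetical standard deviation
n = 1_000             # hypothetical number of students

moe = margin_of_error(sample_sd, n)
print(f"{sample_mean:.1f} plus or minus {moe:.1f}")  # about 550.0 plus or minus 4.3
```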

In this case, we know about 10% of the population do not sit the NAPLAN writing exam, so we already know there is sampling error.  

This is also the case when trying to infer something about an entire school from the results of a couple of year levels. The problem here is that we know the sampling error introduced by test absences is not random, and accounting for it can very much change trend analyses, especially for sub-populations. So, what does this persuasive writing report say about sampling error?

Nothing. Nada. Zilch. Zero. 

Anyone who knows basic statistics knows that when you have very large samples, the amount of error is far less than with smaller samples.  In fact, with samples as large as we get in NAPLAN reports, it would take only a very small difference to create enough ripples in the data to show up as being statistically significant.  That doesn’t mean, however, the error introduced is zero – and THAT error must be reported when representing mean differences between different groups (or different measures of the same group).
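
The flip side of that point can be shown with a short sketch: with very large groups, even a trivially small mean difference clears the bar for ‘statistical significance’. The standard deviation, difference and group sizes below are assumptions for illustration, not NAPLAN parameters.

```python
# Sketch: with very large samples, a tiny difference becomes 'statistically
# significant'. The sd, difference and group sizes are illustrative assumptions.
import math

def z_for_mean_difference(diff: float, sd: float, n1: int, n2: int) -> float:
    """z statistic for a difference between two independent group means."""
    se = sd * math.sqrt(1 / n1 + 1 / n2)   # standard error of the difference
    return diff / se

def two_sided_p(z: float) -> float:
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A 2-point difference on a scale with sd = 70, with 60,000 students per group:
z = z_for_mean_difference(diff=2.0, sd=70.0, n1=60_000, n2=60_000)
print(round(z, 2), two_sided_p(z))  # z is about 4.95; p is far below 0.001
```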

Given the size of the sampling here, you might think it OK to let that slide. However, that isn’t the only shortcut taken in the report. The second most obvious measure ignored in this report is measurement error. Measurement error exists any time we create an instrument to estimate a ‘latent’ variable – that is, something you can’t see directly. We can’t SEE achievement directly – it is an inference based on measuring several things we can theoretically argue are valid indicators of the thing we want to measure.

Measurement error is by no means a simple issue, but it directly impacts the validity of any one individual student’s NAPLAN score and of any aggregate based on those individual results. In ‘classical test theory’ a measured score is made up of what is called a ‘true score’ and error (+/-). In more modern measurement theories error can become much more complicated to estimate, but the general conception remains the same. Any parent who has looked at NAPLAN results for their child and queried whether or not the test is accurate is implicitly questioning measurement error.
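
In notation, the classical test theory idea, and the usual way the size of that error is summarised, look like this (the standard deviation and reliability values are purely illustrative, not NAPLAN estimates):

```latex
X = T + E
\qquad
\mathrm{SEM} = SD \sqrt{1 - r_{xx}}
\qquad
\text{e.g. } SD = 70,\; r_{xx} = 0.90 \;\Rightarrow\; \mathrm{SEM} = 70\sqrt{0.10} \approx 22
```

Here X is the observed score, T the unobservable ‘true score’, E the error, and r_xx the test’s reliability; the standard error of measurement (SEM) is the typical size of E on the reporting scale.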

Educational testing advocates have developed many mathematically complicated ways of dealing with measurement error – and have developed new testing techniques for improving their tests. The current push for adaptive testing is precisely one of those developments, in the local case rationalised on the grounds that adaptive testing (where the specific test items asked of the person being tested change depending on prior answers) does a better job of differentiating those at the top and bottom ends of the scoring range (see the 2019 NAPLAN technical report for this analysis).

This bottom/top of the range problem is referred to as a floor or ceiling effect. When a large proportion of students either don’t score anything or get everything correct, there is no way to differentiate those students from each other – adaptive testing is a way of dealing with floor and ceiling effects better than a predetermined set of test items. This adaptive testing has been included in the newer deliveries of the online form of the NAPLAN test.

Two important things to note. 

First, the current report claims the scores of high-‘performing’ students have shifted down – despite new adaptive testing regimes obtaining very different patterns of ceiling effect. Second, the test is not identical for all students (it never has been).

The process used for selecting test items is based on ‘credit models’ generated by testers. Test items are determined to have particular levels of ‘difficulty’ based on the probability of correct answers being given by different populations and samples, after assuming population-level equivalence in prior ‘ability’ AND creating difficulty scores for items while assuming individual student ‘ability’ measures are stable from one time period to the next. That’s how they can create these 800-point scales that are designed for comparing different year levels.
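
As a rough illustration of how ‘difficulty’ and ‘ability’ interact in models of this general family, here is a generic Rasch-style sketch. It illustrates the modelling idea only; it is not ACARA’s actual scaling model, and the numbers are arbitrary.

```python
# Generic Rasch-style sketch: the probability of a correct answer depends on the
# gap between a student's 'ability' and an item's 'difficulty', both placed on
# the same scale. Illustrative only; this is not ACARA's scaling model.
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: P(correct) = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

for ability in (-1.0, 0.0, 1.0):
    print(ability, round(p_correct(ability, difficulty=0.0), 2))
# Prints roughly 0.27, 0.5 and 0.73: item 'difficulties' are calibrated from
# where these probabilities sit in the tested population, given the assumptions
# about 'ability' described above.
```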

So what does this report say about any measurement error that may impact the comparisons they are making?  Nothing.

One of the ways ACARA and politicians have settled their worries about such technical concerns as accurately interpreting statistical reports is by introducing the reporting of test results in ‘bands’. Now these bands are crucial for qualitatively describing rough ranges of what the number might mean in curriculum terms – but they come with a big consequence. Using ‘band’ scores is known as ‘coarsening’ data – taking a more detailed scale and summarising it in a smaller set of ordered categories – and that process is known to increase any estimates of error. This latter problem has received much attention in the statistical literature, with new procedures being recommended for how to adjust estimates to account for that error when conducting group comparisons using that data.
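
A small simulation makes the coarsening cost visible: once a fine-grained score is replaced with the midpoint of its band, extra noise is baked into anything computed from the banded values. The score distribution and band width below are arbitrary assumptions, not the actual NAPLAN band definitions.

```python
# Sketch of 'coarsening': replacing fine-grained scores with band midpoints adds
# error. The score distribution and band width are arbitrary assumptions, not
# NAPLAN's actual band definitions.
import random
import statistics

random.seed(1)
BAND_WIDTH = 50  # hypothetical band width

def to_band_midpoint(score: float) -> float:
    """Coarsen a score to the midpoint of the band it falls in."""
    return (score // BAND_WIDTH) * BAND_WIDTH + BAND_WIDTH / 2

scores = [random.gauss(550, 70) for _ in range(10_000)]  # hypothetical scores
banded = [to_band_midpoint(s) for s in scores]

coarsening_error = [b - s for s, b in zip(scores, banded)]
print(round(statistics.pstdev(coarsening_error), 1))
# Roughly 14 scale points of added 'error' from banding alone
# (about band width / sqrt(12)).
```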

As before, the amount of reporting of that error issue? Nada.

 This measurement problem is not something you can ignore – and yet the current report is worse than careless on this question.

It takes advantage of readers not knowing about it. 

When the report attempts to diagnose which components of the persuasive writing tasks were of most concern, it does not bother reporting that each of the separate measures of those ten dimensions of writing carries far more error than the total writing score, simply because the number of marks for each is a fraction of the total. The smaller the number of indicators, the more error (and the less reliability).
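
The standard way to express this ‘fewer marks, less reliability’ point is the Spearman–Brown relation. The reliability figures below are illustrative only, not estimates for NAPLAN writing:

```latex
\rho_{k} = \frac{k \, \rho_{1}}{1 + (k - 1)\, \rho_{1}}
\qquad
\text{e.g. if the 48-mark total had } \rho_{48} = 0.90,
\text{ a 4-mark criterion would have } \rho_{4} \approx 0.43
```

Here the reliability of a score built from k marks rises with k; cut the number of marks to a handful, as each criterion does, and the reliability (and hence the precision of any trend built on it) drops sharply.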

Now all of these technical concerns simply raise the question of whether or not the overall findings of the report will hold up to robust tests and rigorous analysis – there is no way to assess that from this report, but there is an even bigger reason to question why it was given as much attention as it was. That is, for any statistician, there is always a challenge in translating the numeric conclusions into some form of ‘real life’ scenario.

To explain why AERO has significantly dropped the ball on this last point, consider its headline claim that Year 9 students’ persuasive writing scores have declined, and its representation of that as a major new concern.

First, note that the ONLY reporting of this using the actual scale values is a vaguely labelled line graph showing scores from 2011 until 2018 – skipping 2016, since the writing task that year wasn’t for persuasive writing (p. 26 of the report has this graph). Of those year-to-year shifts, the only two that may be statistically significant, and are readily visible, are from 2011 to 2012, and then again from 2017 to 2018. Why speak so vaguely? Because from the report we can’t tell you the numeric value of that drop: there is no reporting of the actual numbers represented in that line graph.

Here is where the final reality check comes in.  

If this data matches the data reported in the national reports from 2011 and 2018, the named mean values on the writing scale were 565.9 and 542.9 respectively. So that is a drop between those two time points of 23 points. That may sound like a concern, but recall those scores are based on 48 marks given for writing. In other words, that 23-point difference is no more than one mark difference (it could be far less, since each different mark carries a different weighting in formulating that 800-point scale).
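
A crude back-of-envelope check makes the conversion explicit. Assume, purely for illustration, that the 48 marks were spread evenly across the 800-point scale (the actual weighted scaling does not work this way, which is why the true figure could be smaller):

```latex
\frac{800 \text{ points}}{48 \text{ marks}} \approx 17 \text{ points per mark}
\qquad\Rightarrow\qquad
\frac{23 \text{ points}}{17 \text{ points per mark}} \approx 1.4 \text{ marks}
```

That is, under even this crude assumption, the headline decline amounts to a shift on the order of a single raw mark out of 48.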

Consequently, even if all the technical concerns were sufficiently addressed and the pattern still held, the realistic headline for the Year 9 claim would be ‘Year 9 students in the 2018 NAPLAN writing test scored one less mark than the Year 9 students of 2011.’

Now, assuming that 23-point difference has anything to do with the students at all, start thinking about all the plausible reasons why students in that last year of NAPLAN may not have been as attentive to details as they were when NAPLAN was first getting started. I can think of several, not least being the way my own kids did everything possible to ignore the Year 9 test – since the Year 9 test had zero consequences for them.

Personally, I find these reports troubling for many reasons, including the use of statistics to assert certainty without good justification, but also because saying student writing has declined belies the obvious fact that it hasn’t been all that great for decades. This is where I am totally sympathetic to the issues raised by the report – we do need better writing among the general population. But using national data to produce a report of this calibre, by an agency beholden to government, really does little more than provide click-bait and knee-jerk diagnosis from all sides of a debate we don’t really need to have.

James Ladwig is Associate Professor in the School of Education at the University of Newcastle.  He is internationally recognised for his expertise in educational research and school reform.  Find James’ latest work in Limits to Evidence-Based Learning of Educational Science, in Hall, Quinn and Gollnick (Eds) The Wiley Handbook of Teaching and Learning published by Wiley-Blackwell, New York. James is on Twitter @jgladwig
