August 29, 2022

Is the NAPLAN results delay about politics or precision?

By Greg Thompson

The decision announced yesterday by ACARA to delay the release of preliminary NAPLAN data is perplexing. The justification is that concerns around the impact of COVID-19 on children, combined with the significant flooding that occurred across parts of Australia in early 2022, led many parents to opt their children out of participating in NAPLAN. The official account explains:

“The NAPLAN 2022 results detailing the long-term national and jurisdictional trends will be released towards the end of the year as usual, but there will be no preliminary results release in August this year as closer analysis is required due to lower than usual student participation rates as a result of the pandemic, flu and floods.”

The media release goes on to say that this decision will not affect the release of results to schools and to parents, which have historically occurred at similar times of the year. The question this poses, of course, is why the preliminary reporting of results is affected while student and school reports are not. The answer is likely to do with the nature of the non-participation.

The most perplexing part of this decision is that NAPLAN has regularly had participation rates below 90% at various times among various cohorts. That has never prevented preliminary results being released before.

What are the preliminary results?

Since 2008, NAPLAN has been a controversial feature of the Australian school calendar for students in Years 3, 5, 7 and 9. The ‘pencil-and-paper’ version of NAPLAN was criticised for how statistical error impacts its precision at the student and school level (Wu, 2016), the impact that NAPLAN has had on teaching and learning (Hardy, 2014), and the time it takes for the results to come back (Thompson, 2013). Since 2018, NAPLAN has gradually shifted to an online, adaptive design, which ACARA claims is “better targeted to students’ achievement levels and response styles”, meaning that the tests “provide more efficient and precise estimates of students’ achievements than do fixed form paper based tests”. 2022 was the first year that the tests were fully online.
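
To give a sense of what ‘adaptive’ means here, the sketch below shows the branching idea in its most minimal form: students who perform well are routed to harder questions, and vice versa. This is an illustration only; the thresholds, function name and testlet labels are my own assumptions, not ACARA’s actual routing rules.

```python
# A minimal sketch of the branching idea behind an adaptive test.
# Illustrative only: the thresholds and testlet labels are assumptions,
# not ACARA's actual routing rules or item pool.

def next_testlet(correct: int, attempted: int) -> str:
    """Route a student to an easier or harder set of questions
    based on their performance so far."""
    accuracy = correct / attempted
    if accuracy >= 0.75:
        return "harder testlet"
    if accuracy <= 0.40:
        return "easier testlet"
    return "same-difficulty testlet"

print(next_testlet(correct=9, attempted=10))   # -> harder testlet
print(next_testlet(correct=3, attempted=10))   # -> easier testlet
```

Because harder questions carry more information about high achievers (and easier questions about low achievers), this targeting is the basis of ACARA’s claim about more efficient and precise estimates.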

NAPLAN essentially comprises four levels of reporting. These are student reports, school level reports, preliminary national reports and national reports. The preliminary reports are usually released around the same time as the student and school results. They report on broad national and sub-national trends, including average results for each year level in each domain across each state and territory and nationally. Closer to the end of the year, a National Report is released which contains deeper analysis on how characteristics such as gender, Indigenous status, language background other than English status, parental occupation, parental education, and geolocation impact achievement at each year level in each test domain.

Participation rates

The justification given in the media release concerns participation rates. To understand this better, we need to understand how participation impacts the reliability of test data and the validity of inferences that can be made as a result (Thompson, Adie & Klenowski, 2018). NAPLAN is a census test. This means that in a perfect world, all students in Years 3, 5, 7 & 9 would sit their respective tests. Of course, 100% participation is highly unlikely, so ACARA sets a benchmark of 90% for participation. Their argument is that if 90% of any given cohort sits a test, we can be confident that the results of those sitting the tests are representative of the patterns of achievement of the entire population, even sub-groups within that population. ACARA calculates the participation rate as “all students assessed, non-attempt and exempt students as a percentage of the total number of students in the year level”. Non-attempt students are those who were present but either refused to sit the test or did not provide sufficient information to estimate an achievement score. Exempt students are those exempt from one or more of the tests on the grounds of English language proficiency or disability.
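
To make that calculation concrete, here is a minimal sketch of the stated formula. The cohort numbers are hypothetical, chosen only to show how a rate can land just below the 90% benchmark.

```python
# A minimal sketch of ACARA's stated participation-rate calculation.
# The cohort numbers below are hypothetical, for illustration only.

def participation_rate(assessed: int, non_attempt: int, exempt: int,
                       total_enrolled: int) -> float:
    """Participation = (assessed + non-attempt + exempt) / total enrolled."""
    return 100 * (assessed + non_attempt + exempt) / total_enrolled

# Hypothetical Year 9 cohort of 60,000 students in one jurisdiction.
rate = participation_rate(assessed=51_000, non_attempt=1_500,
                          exempt=1_200, total_enrolled=60_000)
print(f"Participation: {rate:.1f}%")  # 89.5% -- just below the 90% benchmark
```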

The challenge, of course, is that non-participation introduces error into the calculation of student achievement. Error is a feature of standardised testing. It doesn’t mean mistakes in the test itself; rather, it is an estimate of the various ways that uncertainty emerges in predicting how proficient a student is in an entire domain based on the relatively small sample of questions that make up a test. The greater the error, the less precise (i.e. less reliable) the tests are. With regards to participation, the greater the non-participation, the more uncertainty is introduced into that prediction.
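
A rough illustration of this relationship, under the textbook assumption that absent students are missing at random (this is not ACARA’s methodology, and the cohort size and score spread below are invented):

```python
# How the standard error of an estimated cohort mean grows as fewer
# students sit the test. Assumes absence is random; the population
# size and score spread are invented for illustration.
import math

POPULATION = 60_000   # hypothetical Year 9 cohort size
SD = 70.0             # hypothetical spread of scale scores

for participation in (0.95, 0.90, 0.85, 0.70):
    n = int(POPULATION * participation)
    se = SD / math.sqrt(n)   # standard error of the mean
    print(f"{participation:.0%} participation -> SE of mean ≈ {se:.3f}")
```

Notice how little the standard error moves between 95% and 85% participation: with tens of thousands of students, random absence barely dents precision. This is worth keeping in mind for the discussion below, because it suggests the 2022 problem is unlikely to be the amount of non-participation on its own.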

As noted above, NAPLAN has regularly recorded participation rates below 90% at various times among various cohorts, and this participation data is publicly available from ACARA. For example, in 2021 the average participation rates for Year 9 students were slightly below the 90% threshold in every domain, yet this did not impact the release of the Preliminary Report.

Table 1: Year 9 Participation in NAPLAN 2021 (generated from ACARA data)

These 2021 results are not an anomaly; they are part of a trend that has emerged over time. For example, in pre-pandemic 2018, the jurisdictions of Queensland, South Australia, the ACT and the Northern Territory did not reach the 90% threshold in any of the Year 9 domains.

Table 2: Year 9 Participation in NAPLAN 2018 (generated from ACARA data)

Given these results, the question remains: why has participation affected the reporting of the 2022 results, when the Year 9 results in 2018 and 2021 were not similarly affected?

At the outset, I am going to say that there is a degree of speculation in answering this question. Primarily, this is because even if participation declines to 85%, this is still a very large sample with which to estimate the achievement of the population in a given domain, so something must have gone wrong when ACARA tried to model the data. I am going to suggest three possible reasons:

  1. The first is likely, given that it is hinted at in the ACARA press release. If we return to the relationship between participation, error and the validity of inferences, the most likely way that an 85% participation rate could be a problem is if non-participation is not randomly spread across the population. If non-participation was shown to be systematic, that is, heavily biased towards particular subgroups, then depending upon the size of that bias, the ability to make valid inferences about achievement in different jurisdictions or amongst different sub-groups could be severely impacted (the toy simulation after this list illustrates the effect). One effect of this is that it might become difficult to reliably equate 2022 results with previous years. This could explain why lower than 90% Year 9 participation in 2021 was not a problem – the non-participation was relatively randomly spread across the sub-groups.
  2. Second, and related to the above, is that the non-participation has something to do with the material and infrastructural requirements of an online test administered to all students across Australia. There have long been concerns about the infrastructure requirements of NAPLAN online, such as access to computers and reliable internet connections, particularly in regional and remote areas of Australia. If these were to influence results, such as through an increased number of students being unable to attempt the test, this could also affect the reliability of inferences about particular sub-groups.
  3. The final possibility is political. It has been obvious for some time that various Education Ministers have become frustrated with aspects of the NAPLAN program. The most prominent example of this was the concern expressed by the Victorian Education Minister in 2018 about the reliability of the equating of the online and paper tests (see The Guardian’s report, “Education chiefs have botched Naplan online test, says Victoria minister”). During 2018, ACARA was criticised for showing a lack of responsible leadership in releasing results that seemed to show a mode effect, that is, a difference between students who sat the online test and those who sat the pen-and-paper test that was not related to their capacity in literacy and numeracy. It may be that ACARA has grown cautious as a result of the 2018 ministerial backlash and feels that any potential problems with the data need to be thoroughly investigated before jurisdictions are named and shamed based on their average scores.
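
To see why the pattern of non-participation matters more than the amount, here is a toy simulation (the subgroup sizes, score distributions and absence rates are all invented, not drawn from ACARA data). Both scenarios have roughly 85% participation overall, but only the systematic one shifts the observed mean:

```python
# A toy simulation: if non-participation is concentrated in a
# lower-achieving subgroup, the observed cohort mean is biased upwards.
# All numbers below are invented for illustration.
import random

random.seed(0)

# Two hypothetical subgroups with different true score distributions.
group_a = [random.gauss(560, 60) for _ in range(40_000)]  # e.g. metropolitan
group_b = [random.gauss(520, 60) for _ in range(20_000)]  # e.g. remote

population = group_a + group_b
true_mean = sum(population) / len(population)

# Random absence: every student equally likely (15%) to miss the test.
random_sample = [s for s in population if random.random() > 0.15]

# Systematic absence: 30% of group_b misses, only 7.5% of group_a,
# which still works out to ~85% participation overall.
biased_sample = ([s for s in group_a if random.random() > 0.075] +
                 [s for s in group_b if random.random() > 0.30])

print(f"True mean:           {true_mean:.1f}")
print(f"Random absence:      {sum(random_sample) / len(random_sample):.1f}")
print(f"Systematic absence:  {sum(biased_sample) / len(biased_sample):.1f}")
```

The random-absence estimate sits almost exactly on the true mean, while the systematic one drifts upwards by a couple of points. A shift of that size may look small, but NAPLAN trend reporting turns on differences of this magnitude, and the distortion within the affected subgroup itself would be considerably worse.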

Ultimately, this leads us to perhaps the most frustrating thing of all: we may never know. Where problems emerge around NAPLAN, the tendency is for ACARA, and/or the Federal Education Minister to whom ACARA reports, to try to limit criticism by denying access to the data. In 2018, at the height of the controversy over the differences between the online and pencil-and-paper modes, I formed a team with two internationally eminent psychometricians to research whether there was a mode effect between the online and pencil-and-paper versions of NAPLAN. The request to ACARA to access the dataset was denied on the grounds that ACARA could not release item-level data for the 2018 online items, presumably because they were provided by commercial entities. In the end, we just have to trust ACARA that there was no mode effect. If we have learnt anything from recent political scandals, perfect opacity remains a problematic governance strategy.

Greg Thompson is a professor in the Faculty of Creative Industries, Education & Social Justice at the Queensland University of Technology. His research focuses on the philosophy of education and educational theory. He is also interested in education policy, and the philosophy/sociology of education assessment and measurement with a focus on large-scale testing and learning analytics/big data.

