That’s a question we can all ask ourselves as we interrogate the UN International Day of Education. This year’s theme is AI and education. What does the teaching profession gain and lose with Gen AI?
Two hundred million people use ChatGPT each month, with growth doubling in one year. A recent article from Harvard Business Review correctly identifies that generative AI (Gen AI) is a prediction machine that can summarise, synthesise, code, and draw based on its training with the corpus of knowledge from the internet and custom data sets. The article points out that: “The efficacy of predictions is contingent on the underlying data. The quality and quantity of data significantly impact the accuracy of AI predictions…. (T)he successful implementation of AI requires good judgment… It involves knowing which predictions to make, the costs associated with different types of mistakes, and what to do with the AI’s outcome… Judgment over what matters in a particular situation is fundamental to the successful use of generative AI.”
Time to ask those questions about AI again
Anecdote and research suggest that students in schools and universities increasingly use Gen AI tools in various ways to undertake learning and assessment. There has been a flurry of activity by governments, state departments and regulators in providing policy, guidelines and resources for educators and students on the technology. Discourse seems to have turned a corner: from treating Gen AI use as “cheating” to either adjusting assessment by having students apply or adapt AI text to the real world, or embracing AI outputs by critiquing or improving on them.
Two years after the widespread uptake of ChatGPT, the most popular Gen AI tool, I think it is time to pause and re-ask ourselves as educators: what exactly do our students need to know and be able to do to demonstrate competency in learning? This question is the true core of curriculum. And it goes directly to thinking creatively or innovatively in education.
There is a lot of talk about Gen AI augmenting human intelligence with its efficient summary outputs. As the Harvard Business Review article points out, such outputs require “good judgment” in order to assess their quality for a “particular situation”.
Still a place for old-fashioned exams
Educators can certainly set tasks where students generate such outputs and develop skills to assess output quality. Of course, this means explicitly teaching those quality-assessment skills (research, information literacy, critical thinking) and then having some way of knowing whether students are actually using and developing them, rather than relying on Gen AI to produce a fake trail of skill development. This might involve reviewing drafts of work with commentary on how the skills were used, coupled with real-time presentations and teacher and peer questions. There is still a place for old-fashioned exams too, as one way to assess knowledge acquisition and transfer, unpopular as this may be in some AI-evangelical circles.
If we want students to be more than adept prompt jockeys, then we have to really think about how we want them to demonstrate learning.
Software that purports to detect AI-generated text is pretty hopeless, and its vendors warn as much, so teachers shouldn’t rely on it. Instead, they should rely on carefully developed dialogue and iterative processes with students: in other words, carefully crafted learning and assessment activities, and knowing their students well. This is easier in schools than in universities, where large cohorts, online learning, intensified academic workloads and a highly casualised workforce act as barriers to developing genuine, long-term educator-student connections.
On standards
Now, let me unpack the issue from my context as a teacher educator. Australian teachers have a set of standards they need to meet at various career stages. The curriculum of teacher education needs to respond directly to these standards, and teacher education is commonly structured according to: (1) content (discipline) knowledge; (2) method, which covers curriculum, pedagogy and assessment; (3) understanding learning and the learning context of students (educational psychology and sociology); and (4) how 1–3 translate into practice through professional experience, known as practicums and internships. Summarising and synthesising from the corpus of the internet, Gen AI can easily produce outputs for assessment related to areas 1–3.
Teachers are great sharers. It is a human-centred, collaborative profession after all. For a long time teachers everywhere have shared curriculum scope and sequence documents, unit and lesson plans, assessments, teaching resources, and student work samples online. Student teachers, usually referred to as pre-service teachers, have a vast repository of exemplars and resources to draw on, modify and use for assessment at university and at practicum. Plagiarism checkers could and can still identify if a student has directly copied something from the internet and not cited the source or tried to pass it off as their own.
Questioning the quality of the AI output
Gen AI, drawing on all the teaching curriculum resources available online, can almost instantly produce scope and sequence, units of work, lesson plans and resources such as work sheets, by predicting what the user wants according to the prompt and synthesising or summarising what is available online. It is then up to the pre-service teacher or teacher to make judgements about the quality of the AI output in relation to the task or the appropriateness for the learning context.
Teachers who have gone through traditional method courses at university – learning to first read a syllabus for structure and meaning, and then translating this into a lesson plan, a unit of work and a scope and sequence through a carefully scaffolded developmental arrangement of courses across a degree – are mostly well equipped to make professional judgements about automated outputs from Gen AI. However, we are entering a new era where it may be possible to produce work for discipline, method and learning courses without having to think critically or authentically about what is submitted for assessment.
There will be a sizeable proportion of students graduating from universities who will have relied on Gen AI outputs in an expedient or shallow way to get through their degree, having been exposed to limited opportunities that “test” depth of understanding, application and transfer, and creative or innovative thinking. Universities won’t want to talk about this for a long time – just as they were slow to address the impact of essay mills. But it will be a phenomenon that shapes trust in higher education institutions and, ultimately, professions.
In teacher education this could mean a heavier burden for teachers supervising students on practicum. In the world of Gen AI these supervising teachers are well placed to evaluate whether a student has developed competency through their application of discipline, curriculum and pedagogy, and learner knowledge.
There are many, many teachers using Gen AI to generate curriculum material, school reports, newsletters and other artefacts considered ripe for an efficiency overhaul in their time-poor day. If Gen AI were to cease tomorrow, I would hazard a guess that the vast majority could still create these texts, as they have gone through sequenced training before and during service, and have experience to draw on, including the experiences of other teachers.
However, we may be entering an era where there will be the first cohorts of teachers who have come to rely on Gen AI to a point that they did not develop these skills or the necessary judgement vital in designing curriculum to suit context. Gen AI raises a lot of questions related to professional knowledge and standards.
Will pre-service and practising teachers develop AI dependency? Will this erode the unique combination of professional skills teachers have? Does this matter? Should we augment our competencies and intelligence and redefine the fundamentals of professional knowledge?
AI: it’s about what exists, not what’s possible
Finally, what will happen to innovation in curriculum design if pre-service and in-service teachers slowly stop drawing on their vast cognitive resources to create and share new unit plans or teaching resources, instead relying on the quick Gen AI fix? We need to remember that Gen AI is a summarising and synthesising tool, predicting a response from a prompt to communicate what already exists, not what is possible.
Let’s start having a more serious and sustained conversation in teacher education and the teaching profession about what we gain as educators in using Gen AI, what we potentially erode, lose or irrevocably change, and whether it will matter for our students.
To return to my original question but orienting it towards the training of pre-service teachers – what exactly do pre-service teachers need to know and be able to do to demonstrate competency with and without Gen AI? This question surely goes to the heart of teaching standards.

Erica Southgate is an associate professor in the School of Education, University of Newcastle. She makes computer games for literacy and is an education technology ethicist and an immersive learning researcher.