
ChatGPT: What do we know now? What must we learn next?

I was honoured to join a TEQSA/CRADLE panel last week, the third in a series on the implications of ChatGPT (or GenAI more broadly) for higher education. In the second panel in March, I flagged the absence (at that early stage) of any evidence about whether students have the capacity to engage critically with ChatGPT. So many people were proposing to do interesting, creative things with students — but we didn’t know how it would turn out.

But three months on, we now have:

* myriad demos of GPT’s capabilities given the right prompts

* a few systematic evaluations of that capability

* myriad proposals for how this can enable engaging student learning, and a small but growing stream of educators’ stories from the field, with peer-reviewed research about to hit the streets.

I also urge us to harness the diverse brilliance of our student community in navigating this system shock, sharing what we’re learning from our Student Partnership in AI. 

The following is an edited transcript of my presentation to the panel.

Educators are now beginning to be able to articulate what ChatGPT literacy looks like; they have a sense of the range of ability within their cohort, and they’re beginning to gain insights into how to better scaffold their students. So for example, I’ve been talking to some of my most exciting colleagues here at UTS, asking them: how are you using ChatGPT? And in particular, what’s the capacity of your students to engage critically with its output? That is something we hear about all the time. Three months ago, we really couldn’t articulate what that looked like. Now we can. So let me give you four glimpses of what some of my colleagues have been telling me.


Antonette Shibani teaches applied NLP to data science master’s students – they have to write a critical summary and a visual map of ethical issues using NLP. They’re encouraged to use ChatGPT for various purposes and to reflect on their use of it and how useful it was. The most able students, she tells me, could engage in deep conversations with the AI, using excellent prompts and follow-up replies to the agent, whereas the less able students were using simple prompts to access content and didn’t have deeper discussions with the AI.


Here’s Baki Kocaballi, teaching interaction design, where the students are using GPT to develop user personas and scenarios, ideating solutions and reflecting critically. The most able students were doing this: they were generating rich scenarios with ChatGPT. Yet he was not seeing any critical reflection on what made an AI output an appropriate or accurate response, and Baki reflects that this may be something to do with the subjective nature of design practice. The less able students could still get some good responses, but there was not much good reflection going on. He notes that he needs to provide more scaffolding and examples for the students. So we see this professional learning amongst the teachers as well.


Here’s Anna Lidfors Lindqvist, training student teams to work together to build robots, and again encouraging their use of ChatGPT and critical reflection on it. The most able students could use it in quite fluent and effective ways. But the less able students, she notes, are not really validating and checking GPT’s calculations; they’re struggling to apply information in the context of the project. Some actually just dropped GPT altogether: it was too much work to get it to do anything useful.


And a final example from Behzad Fatahi, whose Year 2-3 students are using ChatGPT alongside a simulation tool called Plexus to analyse soil-structure interaction problems. The most engaged students were working in the ways shown on the slide, while the least engaged students were struggling. So the point is not so much the details — the point is that our academics are starting to know: what does good look like? What can I expect from my students? There is clearly a diversity of ability to engage critically with a conversational, generative AI.

And when you step back from these particular examples, we can ask again: what are going to be the foundational concepts, and the growing evidence base, around what we could call generative AI literacy? Literacy for learning, that is — not for marketing, not for any of the other purposes it can usefully serve, but for learning.

Conversational agents are not new in the world of education. They’ve been around in the AI research literature for donkey’s years, but used well, they should move us towards more dialogical learning and feedback. So we’re all used to thinking about student peer feedback in our learning designs; students are now going to be interacting with agents as well. Those agents will be interacting with them, and potentially with other agents, playing different roles, and we will learn how to orchestrate these agents and define the roles they need to play.

And every turn in this conversation is a form of feedback. The question is what move does the student make next? How do they engage with that feedback from humans and machines?

Now, we have concepts and evidence from pre-generative-AI research around this. We have concepts such as student feedback literacy, and we have been taking inspiration from that to talk about student automated feedback literacy. There is the notion of teacher feedback literacy as well, and similarly, we’re working on teacher automated feedback literacy. These are powerful concepts, I think, for helping us think about how we can study and equip students to engage in powerful learning conversations.

The final point I want to make is we need to work with our students.

We’ve been working with our Students’ Association here at UTS. We had over 150 applicants for a workshop, from which we took a stratified sample of 20. They engaged in pre-workshop readings, where we presented them with a range of dilemmas involving ChatGPT and Turnitin, took part in online discussions, and attended a face-to-face workshop. There they got briefings from UTS experts: introducing generative AI, explaining how it’s being used creatively at UTS (such as the examples I just showed you), talking about the ethical issues around generative AI, and talking about Turnitin – what do we know about it, and should we turn it on? That is a decision we’re trying to make at UTS at the moment. There were breakout groups and a plenary discussion, and we have a report currently under review by the students, as to whether they’re happy with it as a summary of what they talked about.

But let me just share three examples of what they told us and you’ll see some echoes here with what we heard from Rowena Harper earlier. 

  • Firstly, they are asking, please equip us to use ChatGPT for learning. We are aware that it could actually undermine our learning if we don’t use it well, but what does that mean? You’re the educators — you should be able to tell us how to use it effectively for learning and not in a way that torpedoes our learning. 
  • Secondly, can we have more assessments integrating ChatGPT in sensible ways? They were very excited to see examples such as the ones I showed you, because not all of them have experienced that yet.
  • And finally, Turnitin. Well, yes, it may have a role to play as part of an overall approach to academic integrity. But please handle with care. If there are any questions about our academic integrity, we want to be invited for a respectful conversation, and not be accused of misconduct when, as we are already hearing, Turnitin is backing off from some of its original claims about how good its software is. It’s a very fast-moving arms race.

So just to wrap up, here are three questions about what we need to learn next.

  • What do we mean by generative AI literacy and how do we scaffold it? 
  • How well do generative AI learning designs translate across contexts? They may look very promising, but we have to actually deploy those and study them in context. 
  • And finally, how are we going to engage our students in codesigning this radical shift with us? We talk a lot about diversity of voices and the design of AI. We absolutely need them on board, trusting the way we’re using this technology, seeing that we’re using it responsibly and ethically, and bringing the perspectives that they have. They’re the ones on the receiving end of all this policy we’re talking about.

Simon Buckingham Shum is the director of the Connected Intelligence Centre at the University of Technology Sydney. He has a career-long fascination with the potential of software to make thinking visible. His work sits at the intersection of the multidisciplinary fields of Human-Computer Interaction, Educational Technology, Hypertext, Computer-Supported Collaboration and Educational Data Science (also known as Learning Analytics).

A new sheriff is coming to the wild ChatGPT west

You know something big is happening when the CEO of OpenAI, the creators of ChatGPT, starts advocating for “regulatory guardrails”. Sam Altman testified to the US Senate Judiciary Committee this week that the potential risks of misuse are significant, echoing other recent calls by AI pioneer and former Google researcher Geoffrey Hinton, the so-called “godfather of AI”.

In contrast, teachers continue to be bombarded with a dazzling array of possibilities, seemingly without limit – the great plains and prairies of the AI “wild west”! One recent estimate claimed that “around 2000 new AI tools were launched in March” alone!

Given teachers across the globe are heading into end of semester, or end of academic year, assessment and reporting, the sheer scale of new AI tools is a stark reminder that learning, teaching, assessment, and reporting are up for serious discussion in the AI hyper-charged world of 2023. Not even a pensive CEO’s reflection or an engineer’s growing concern has tempered expansion.

Until there is some regulation, the proliferation of AI tools – and voices spruiking their merits – will continue unabated. Selecting and integrating AI tools will remain contextual and evaluative work, regardless of regulation. Where does this leave schoolteachers and tertiary academics, and how do we do this with 2000 new tools in one month (is it even possible)?!

Some have jumped for joy and packed their bags for new horizons; some have recoiled in terror and impotence, bunkering down in their settled pedagogical “back east”. 

As if this was not enough to deal with, Columbia University undergraduate Owen Terry last week staked the claim that students are not using ChatGPT for “writing our essays for us”. Rather, they are breaking the task down into components and asking ChatGPT to analyse each one and suggest ideas. They then use the ideas suggested by ChatGPT and “modify the structure a bit where I deemed the computer’s reasoning flawed or lackluster”. He argues this makes detection of ChatGPT use “simply impossible”.

It seems students are far savvier about how they use AI in education than we might give them credit for, suggests Terry. They are not necessarily looking for the easy route but are engaging with the technology to enhance their understanding and express their ideas. They’re not looking to cheat, just to collate ideas and information more efficiently.

Terry challenges us as educators and researchers to consider that we might be underestimating students’ ethical desire to be more broadly educated, rather than to be automatons serving up predictive banality. His searing critique of how we are dealing with our “tools” is blunt – “very few people in power even understand that something is wrong…we’re not being forced to think anymore”. Perhaps contrary to how some might view the challenge, Terry suggests we might even:

need to move away from the take-home essay…and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.

The urgency of “what do I do with the 2000 new AI apps” seems even greater. These are only the ones released during March. Who knows how many will spring up this month, or next, or by the end of 2023? Who knows how long it will take partisan legislators to act, or what they will come up with in response? Until then, we have to make our own map.

Some have offered a range of educational maps based on alliterative Cs – 4Cs, 6Cs – so here’s a new 4Cs about how we might use AI effectively while we await legislators’ deliberations:

Curation – pick and choose apps which seem to serve the purpose of student learning; avoid popularity or novelty for its own sake. In considering what this looks like in practice, it is useful to consider the etymology of the word curation, which comes from the Latin cura, ‘to take care of’. Indeed, if our primary charge is to educate from a holistic perspective, then that care must extend to our choice of AI apps, so that they serve students’ learning needs and engagement.

Fostering innate curiosity means being unafraid to trial things for ourselves, and with and for our students. But this should not come at the expense of the intended learning outcomes; rather, it should ensure they are aligned more closely. When curating AI, be discerning about whether it adds to the richness of student learning.

Clarity – identify for students (and teachers) why any chosen app has educative value. It’s the elevator pitch of 2023 – if you can’t explain its relevance to students in 30 seconds, it’s a big stretch to ask them to be interested. With 2000 new offerings in March alone, the spectres of cognitive load theory and job demands-resources theory loom large.

Competence – don’t ask students to use a tool if you haven’t explored it sufficiently yourself. Maslow’s wisdom about “having a hammer and seeing every problem as a nail” resonates here. Having a hammer might mean I only see problems as nails, but at least it helps if I know how to use the hammer properly! After all, how many educators really optimise the power, breadth, and depth of Word or Excel…and they’ve been around for a few years now. The rapid proliferation makes developing competence in anything more than a few key tools quite unrealistic. Further, it is already clear that skills in prompt engineering need to develop more fully in order to maximise AI’s usefulness.

Character – Discussions around AI ethical concerns—including bias in datasets, discriminatory output, environmental costs, and academic integrity—can shape a student’s character and their approach to using AI technologies. Understanding the biases inherent in AI datasets helps students develop traits of fairness and justice, promoting actions that minimise harm. Comprehending the environmental impact of AI models fosters responsibility and stewardship, and may lead to both conscientious use and improvements in future models. Importantly for education, tackling academic integrity heightens students’ sense of honesty, accountability, and respect for others’ work. Students have already risen to the occasion, with local and international research capturing student concerns and their beliefs about the importance of learning to use these technologies ethically and responsibly. Holding challenging conversations about AI ethics prepares students for ethically complex situations, fostering the character necessary in the face of these technologies.

These 4Cs are offered in the spirit of the agile manifesto that has undergirded software development over the last twenty years – early and continuous delivery, and delivering working software frequently. The rapid advance from GPT-3 to GPT-3.5 to GPT-4 shows the manifesto remains a potent rallying call. New iterations of these 4Cs for AI should similarly invite critique, refinement, and improvement.

Dr Paul Kidson is Senior Lecturer in Educational Leadership at the Australian Catholic University, Dr Sarah Jefferson is Senior Lecturer in Education at Edith Cowan University, and Leon Furze is a PhD student at Deakin University researching the intersection of AI and education.