June 15, 2023

ChatGPT: What do we know now? What must we learn next?

By Simon Buckingham-Shum

I was honoured to join a TEQSA/CRADLE panel last week, the third in a series on the implications of ChatGPT (or GenAI more broadly) for higher education. In the second panel in March, I flagged the absence (at that early stage) of any evidence about whether students have the capacity to engage critically with ChatGPT. So many people were proposing to do interesting, creative things with students — but we didn’t know how it would turn out.

But three months on, we now have:

* myriad demos of GPT’s capabilities given the right prompts

* a few systematic evaluations of that capability

* myriad proposals for how this can enable engaging student learning

* a small but growing stream of educators’ stories from the field, with peer-reviewed research about to hit the streets.

I also urge us to harness the diverse brilliance of our student community in navigating this system shock, sharing what we’re learning from our Student Partnership in AI. 

The following is an edited transcript of my presentation to the panel.

Educators are now beginning to be able to articulate what ChatGPT literacy looks like; they have a sense of the range of ability within their cohort, and they’re beginning to gain insights into how to better scaffold their students. So, for example, I’ve been talking to some of my most exciting colleagues here at UTS, asking them: how are you using ChatGPT? And in particular, what is the capacity of your students to engage critically with its output? That question comes up all the time. Three months ago, we really couldn’t articulate what that looked like. Now we can. So let me give you four glimpses of what some of my colleagues have been telling me.


Antonette Shibani teaches applied NLP to data science master’s students, who have to write a critical summary and a visual map of ethical issues using NLP. They’re encouraged to use ChatGPT for various purposes and to reflect on their use of it and how useful it was. The most able students, she tells me, could engage in deep conversations with the AI, using excellent prompts and follow-up replies to the agent, whereas the less able students were using simple prompts to access content and didn’t have deeper discussions with the AI.


Here’s Baki Kocaballi, teaching interaction design. His students are using GPT to develop user personas and scenarios, ideate solutions and reflect critically. The most able students were doing this, generating rich scenarios with ChatGPT. Yet he was not seeing any critical reflection on what made an AI output an appropriate or accurate response, and Baki reflects that this may have something to do with the subjective nature of design practice. The less able students could still get some good responses, but without much good reflection going on. He notes that he needs to provide more scaffolding and examples for the students. So we see this professional learning amongst the teachers as well.


Here’s Anna Lidfors Lindqvist, training student teams to work together to build robots, again encouraging their use of ChatGPT and critical reflection on it. The most able students could use it in quite fluent and effective ways. But the less able students, she notes, are not really validating and checking GPT’s calculations, and they’re struggling to apply information in the context of the project. Some actually just dropped GPT altogether: it was simply too much work to get it to do anything useful.


And a final example from Behzad Fatahi, teaching Year 2–3 students. They’re using ChatGPT, but they’re also using a simulation tool called Plaxis to analyse soil–structure interaction problems. The most engaged students were working with these tools effectively, while the least engaged students were struggling. The point is not so much the details; the point is that our academics are starting to know: what does good look like? What can I expect from my students? There is clearly a diversity of ability to engage critically with a conversational, generative AI.

And when you step back from these particular examples, you can ask again: what are going to be the foundational concepts, and the growing evidence base, around what we could call generative AI literacy for learning? Not for marketing, not for any other purpose, however useful those might be, but for learning.

Conversational agents are not new in the world of education; they’ve been around in the AI and education research literature for donkey’s years. But used well, they should move us towards more dialogical learning and feedback. We’re all used to thinking about student peer feedback in our learning designs; students are now also going to be interacting with agents. Those agents will be interacting with them, and potentially with other agents as well, playing different roles, and we will learn how to orchestrate these agents and define the roles they need to play.

And every turn in this conversation is a form of feedback. The question is what move does the student make next? How do they engage with that feedback from humans and machines?

Now, we have concepts and evidence from pre-generative-AI research around this. We have concepts such as student feedback literacy, and we have been taking inspiration from that, talking now about student automated feedback literacy. There is the notion of teacher feedback literacy as well, and similarly we’re working on teacher automated feedback literacy. These are powerful concepts, I think, for helping us think about how we can study and equip students to engage in powerful learning conversations.

The final point I want to make is we need to work with our students.

We’ve been working with our Students’ Association here at UTS. We had over 150 applicants for a workshop, from whom we took a stratified sample of 20. They engaged in pre-workshop readings in which we presented them with a range of dilemmas involving ChatGPT and Turnitin, took part in online discussions, and attended a face-to-face workshop. They got briefings from UTS experts introducing generative AI, explaining how it’s being used creatively at UTS (such as the examples I just showed you), talking about the ethical issues around generative AI, and talking about Turnitin: what do we know about it, and should we turn it on? That is a decision we’re trying to make at UTS at the moment. There were breakout groups and a plenary discussion, and we have a report currently under review by the students as to whether they’re happy with it as a summary of what they talked about.

But let me just share three examples of what they told us, and you’ll hear some echoes of what we heard from Rowena Harper earlier.

  • Firstly, they are asking: please equip us to use ChatGPT for learning. We are aware that it could actually undermine our learning if we don’t use it well, but what does that mean? You’re the educators; you should be able to tell us how to use it effectively for learning, and not in a way that torpedoes our learning.
  • Secondly, can we have more assessments integrating ChatGPT in sensible ways? They were very excited to see examples such as the ones I showed you, because not all of them have experienced that yet.
  • And finally, Turnitin. Yes, it may have a role to play as part of an overall approach to academic integrity, but please handle it with care. If there are any questions about our academic integrity, we want to be invited to a respectful conversation, not accused of misconduct when, as we are already hearing, Turnitin is backing off from some of its original claims about how good its software is. It’s a very fast-moving arms race.

So just to wrap up, here are three questions about what we need to learn next.

  • What do we mean by generative AI literacy and how do we scaffold it? 
  • How well do generative AI learning designs translate across contexts? They may look very promising, but we have to actually deploy those and study them in context. 
  • And finally, how are we going to engage our students in codesigning this radical shift with us? We talk a lot about diversity of voices in the design of AI. We absolutely need them on board: trusting the way we’re using this technology, seeing that we’re using it responsibly and ethically, and bringing the perspectives that they have. They’re the ones on the receiving end of all the policy we’re talking about.

Simon Buckingham-Shum is the director of the Connected Intelligence Centre at the University of Technology Sydney. He has a career-long fascination with the potential of software to make thinking visible. His work sits at the intersection of the multidisciplinary fields of Human-Computer Interaction, Educational Technology, Hypertext, Computer-Supported Collaboration and Educational Data Science (also known as Learning Analytics).

