August 19, 2019

Artificial Intelligence in Schools: An Ethical Storm is Brewing

By Erica Southgate

‘Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.’ (Maini and Sabri, 2017, p.3)

Last week the Australian Government Department of Education released a world-first research report into artificial intelligence and emerging technologies in schools. It is authored by an interdisciplinary team from the University of Newcastle, Australia.

As the project lead, and someone interested in carefully incubating emerging technologies in educational settings to develop an authentic evidence base, I relished the opportunity to explore the often-overlooked ethical aspects of introducing new tech into schools. To this end, I developed a customised ethical framework designed to encourage critical dialogue and increased policy attention on introducing artificial intelligence into schools.

We used to think artificial intelligence would wheel itself into classrooms in the sci-fi guise of a trusty robo-instructor (a vision that is unlikely to come true for some time, if ever). What we didn’t envisage was how artificial intelligence would become invisibly infused into the computing applications we use in everyday life, such as internet search engines, smartphone assistants, social media tagging and navigation technology, and integrated communication suites.

In this blog post I want to tell you about artificial intelligence in schools, give you an idea of the ethical dilemmas that our educators are facing, and introduce you to the framework I developed.

What is AI (artificial intelligence)?

Artificial intelligence is an umbrella term that refers to a machine or computer program that can undertake tasks or activities that require features of human intelligence such as planning, problem solving, recognition of patterns, and logical action.

While the term was first coined in the 1950s, the new millennium marked rapid advancement in AI, driven by the expansion of the Internet, the availability of ‘Big Data’ and cloud storage, and more powerful computing and algorithms. Applications of AI have benefited from improvements in computer vision, graphics processing, and speech recognition.

Interestingly, adults and children often overestimate the intelligence and capability of machines, so it is important to understand that right now we are in a period of ‘narrow AI’, which is able to do a single or focused task, sometimes in ways that can outperform humans. The diagram below from our report (adapted from an article in The Conversation by Arend Hintze, Michigan State University’s Assistant Professor of Integrative Biology & Computer Science and Engineering) provides an overview of the types of AI and the current state of play.

AI in education

In education, AI is in some intelligent tutoring systems and powers some pedagogical agents (helpers) in educational software. It can be integrated into the communication suites marketed by Big Tech (for example, in email) and will increasingly be part of learning management systems that present predictive, data-driven performance dashboards to teachers and school leaders. There is also some (very concerning) talk of integrating facial recognition technology into classrooms to monitor the ‘mood’ and ‘engagement’ of students, despite research suggesting that inferring affective states from facial expression is fraught with difficulties.

Engaging with AI in education also involves an understanding of machine learning (ML), whereby algorithms can help a machine learn to identify patterns in data and make predictions without having pre-programmed models or rules.
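
To make this concrete, here is a minimal sketch in Python (my own illustration, not something drawn from the report, and assuming the scikit-learn library is available) of what it means for a machine to learn patterns rather than follow pre-programmed rules: no classification rules are written by hand, yet the model still predicts labels for examples it has never seen.

```python
# A minimal machine learning sketch: the classifier is given no hand-written
# rules, only labelled examples, and it infers the patterns itself.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                 # flower measurements with species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)    # no rules are supplied up front
model.fit(X_train, y_train)                       # the model 'learns' patterns from the data
print(model.score(X_test, y_test))                # accuracy on examples it has never seen
```

The point is not the particular algorithm; it is that the rules come out of the data, which is exactly why the quality and representativeness of that data matter so much for the ethical issues discussed below.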

Worldwide concern about the ethics of AI and ML

The actual and conceivable ethical implications of AI and ML have been canvassed for several decades. Since 2016, the US, UK and European Union have conducted large-scale public inquiries which have grappled with the question of what a good and just AI society would look like.

As Umeå University’s Professor of Computing Science, Virginia Dignum, puts it:

‘What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated?’

Most pressing ethical issues for education

Some of the most pressing ethical issues related to AI and ML in general, and for education especially, include:

AI bias

AI bias is where sexist, racist and other forms of discriminatory assumptions are built into the data sets used to train machine-learning algorithms and then become baked into AI systems. Part of the problem is the lack of diversity in the computing profession, where those who develop AI systems fail to identify the potential for bias or do not adequately test in different populations across the lifecycle of development.
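
To show how this happens mechanically, here is a toy, entirely synthetic illustration (an assumption of mine, not an example from the report): a hiring model is trained on historical decisions that favoured one group, and it then scores two equally qualified applicants differently simply because of group membership.

```python
# A toy illustration of bias being 'baked in': the training labels reflect a
# historical preference for one group, and the trained model reproduces it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                         # 0 or 1: a protected attribute
skill = rng.normal(0, 1, n)                           # the genuinely relevant feature
# Historical hiring decisions depended partly on group membership (the bias)
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two applicants with identical skill but different group membership
applicants = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])          # the model scores them differently
```

Nothing in the algorithm itself is ‘prejudiced’; the discrimination lives in the data, which is why testing across different populations across the whole development lifecycle matters.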

Black box nature of AI systems

The opaque, ‘black box’ nature of AI systems is a complicated issue. AI is ‘opaque’ because it is often invisibly infused into computing systems in ways that can influence our interactions, decisions, moods and sense of self without us being aware of it.

The ‘black box’ of AI is twofold. First, the proprietary nature of AI products creates a situation where industry does not open up the workings of the product and its algorithms for public or third-party scrutiny. Second, in cases of deep machine learning there is an autonomous learning and decision-making process which occurs with minimal human intervention, and this technical process is so complicated that even the computer scientists who created the program cannot fully explain why the machine came to the decision it did.
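
On the second point, a small sketch (again my own illustration, not the report’s) shows why even the people who build a model may struggle to explain an individual decision: a neural network produces confident predictions, but its ‘reasoning’ exists only as matrices of learned weights.

```python
# A small 'black box' sketch: the network predicts handwritten digits well,
# but its internals are weight matrices, not human-readable reasons.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X[:-10], y[:-10])

print(model.predict(X[-10:]))     # predictions for ten unseen digits
print(model.coefs_[0].shape)      # (64, 64): the 'explanation' is a weight matrix
```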

Digital human rights issues

Digital human rights issues relate to the harvesting of the ‘Big Data’ used in ML where humans have not given informed consent or where data is used in ways that were not consented to. Issues of consent and privacy extend to the surreptitious collection, storage and sharing of biometric (of the body) data. Biometric data collection represents a threat to the human right to bodily integrity and is legally considered sensitive data that requires a very careful and fully justified position before implementation, especially with vulnerable populations such as children.

Deep fakes

We are in a world of ‘deep fakes’ and AI-produced media that ordinary people (and even technologists) cannot discern as real or machine-generated. This represents both a serious challenge and interesting opportunities for teaching and practising digital literacy. There are even AI programs that produce more than passable written work on any topic.

The potential for a lack of independent advice for educational leaders making decisions on the use of AI and ML

Regulatory capture is where those in policy and governance positions (including principals) become dependent on potentially conflicted commercial interests for advice on AI-powered products. While universities may have in-house expertise or the resources to buy in independent expertise to assess AI products, principals making procurement decisions will probably not be able to do this. Furthermore, it is incumbent on educational bureaucracies to seek independent expert advice and be transparent in their policies and decision-making regarding such procurement, so that school communities can trust that the technology will not do harm through biased systems or by violating teachers’ and students’ sovereignty over their data and privacy.

Our report offers practical advice

Along with our report, the project included infographics on Artificial Intelligence and virtual and augmented reality for students, and ‘short read’ literature reviews for teachers.

In the report we carefully unpack the multi-faceted ethical dimensions of AI and ML for education systems and offer the customised Education, Ethics and AI (EEAI) framework (below) for teachers, school leaders and policy-makers so that they can make informed decisions regarding the design, implementation and governance of AI-powered systems. We also offer a practical ‘worked example’ of how to apply it.

While it is not possible to unpack it all in a blog post, we hope Australian educators can use the report to lead the way in using AI-powered systems for good and for what they are good for.

We want to avoid teachers and students using AI systems that ‘feel more and more like magic’, where educators are unable to explain why a machine made the decision it did in relation to student learning. The very basis of education is being able to make ‘fair calls’, to transparently explain educational action and, importantly, to be accountable for these decisions.

When we lose sight of this, at a school or school-systems level, we find ourselves in questionable ethical and educational territory. Let’s not disregard our core strength as educators in a rush to appear to be innovative.

We are confident that our report is a good first step in prompting an ongoing democratic process to grapple with ethical issues of AI and ML so that school communities can weather the approaching storm.

Erica Southgate is an Associate Professor of Education at the University of Newcastle, Australia. She believes everyone has the potential to succeed, and that digital technology can be used to level the playing field of privilege. Currently she is using immersive virtual reality (VR) to create solutions to enduring educational and social problems. She believes playing and experimenting with technology can build a strong skill set and mindset, and that every student, regardless of their economic situation, should have access to amazing technology for learning. Erica is lead researcher on the VR School Research Project, a collaboration with school communities. Erica has produced ethical guidelines and health and safety resources for teachers so that immersive VR can be used for good in schools. She can be found on Twitter @EricaSouthgate

For those interested in the full report: Artificial Intelligence and Emerging Technologies in Schools

Republish this article for free, online or in print, under Creative Commons licence.

4 thoughts on “Artificial Intelligence in Schools: An Ethical Storm is Brewing”

  1. The positive aspect of AI in education is that it will force us to be more transparent in what we do, and challenge our own assumptions and biases. As well as being able to explain how an AI system makes a decision, we need to be able to explain our own decisions.

  2. Erica Southgate says:

    You are correct Tom, transparency is key here. I am looking forward to working with teachers, school leaders and policy-makers on how to develop the types of regulations, processes and systems that can build trust in the technology.

  3. Brian Cambourne says:

    Thanks for the link to the report. I look forward to reading it. The implications for how we try to teach critical literacy are profound, especially with respect to the effect that “Deep Fakes” can have on democratic processes.

  4. Erica Southgate says:

    Absolutely Brian. It’s very interesting that Adobe, for example, has software that can easily produce a deep fake, but it also has AI to detect this. Will we need to rely on AI to tell us when AI has produced a deepfake? It’s a reality and a conundrum.
