
Cyberabuse: It’s too late – the post has gone viral already

The Albanese government’s proposed legislation to outlaw doxing is a landmark move in Australia’s fight against online harassment and cyber abuse. This new bill, introduced this month, makes it a criminal offence to maliciously share personal data with the intent to cause harm, with penalties of up to seven years in jail. 

Doxing, which involves publicly revealing someone’s personal details without consent, is a growing concern in an era where personal information can be weaponised through digital platforms. Under this legislation, doxing based on attributes such as race, religion, gender identity, or sexuality will carry even harsher penalties, signalling the government’s commitment to protecting Australians from online harm.

Crucial time

This legislation comes at a crucial time. Schools and teachers increasingly face new forms of cyber abuse, fuelled in particular by advances in artificial intelligence (AI). Deepfake technology, which allows users to create fake images and videos of real people, has led to disturbing incidents in educational settings. In Victoria, for example, several schools have been rocked by cases where students used AI to create fake pornographic images of their teachers. These images, manipulated from photos taken from social media, were circulated among students, devastating the lives and careers of the teachers involved. Such incidents have forced teachers to seek mental health support and raised urgent questions about the adequacy of current school policies on cyber abuse.

These incidents are not isolated to Australia. In the United States, a teacher in Baltimore was recently arrested for creating a deepfake audio recording of his principal making racist comments. The hoax, which went viral, resulted in death threats against the principal and serious disruption to the school community. This case, while unique in its specifics, highlights the global reach and implications of AI-driven content creation tools. Teachers are increasingly vulnerable to this kind of targeted abuse, with their professional and personal reputations on the line in a digital world that outpaces existing policies and protections.

How teachers experience cyber abuse

My recent paper, It’s Too Late – The Post Has Gone Viral Already, explores how K-12 teachers are experiencing adult cyber abuse, particularly when content about them goes viral. The paper proposes a novel methodological stance that incorporates trauma-informed qualitative research and aligns with the principles outlined in Australia’s Online Safety Act 2021. This act, designed to empower the eSafety Commissioner, provides an essential framework for addressing online harm by requiring greater transparency from platforms and placing legal responsibility on social media companies for the content they amplify.

Through my research, which aligns with the findings of the eSafety Commissioner, I found that the abuse teachers face isn’t just about direct attacks. It’s about how social media platforms enable and perpetuate that abuse through algorithms designed to boost engagement at any cost. When content targeting teachers goes viral, it’s often because these algorithms push harmful memes, videos, or posts to broader audiences, exponentially increasing the damage done. The viral nature of this content, whether a manipulated deepfake or a malicious rumour, means that even teachers not directly involved in an incident can experience secondary trauma as they witness their colleagues being publicly humiliated.

A tsunami of challenges

This paper is just the beginning. The introduction of legislation to address doxing and the growing awareness of deepfakes mark the start of a tsunami of challenges that educators will face in the coming years. Artificial intelligence, while offering immense potential in educational tools, also presents unprecedented risks to teachers’ rights, privacy, and mental health. The rise of AI-generated content, from fake images to deepfake videos, poses new threats that extend beyond traditional forms of bullying or harassment. Teachers now find themselves at the mercy of technologies that can create highly convincing false representations of them, which can spread across the internet in a matter of hours.

The proposed legislation and the growing awareness of AI-driven abuse are important first steps, but they are not nearly enough. Teachers are on the front lines, facing not only the pressure of educating young minds but also the terrifying reality of viral online abuse that can destroy their personal and professional lives in an instant. At the core of this issue is an urgent need to completely rethink teacher rights in the age of AI—and ensure these rights are clearly communicated and fiercely protected within the broader education system.

Safeguards for teachers

As technology races forward, so must the safeguards that protect those who dedicate their lives to teaching. Teachers, already in highly visible roles, are especially vulnerable to the kinds of threats that AI, doxing, and deepfakes bring. With just a few clicks, a phone can turn a teacher’s photo into a damaging meme or manipulated image, spreading across social media before the school day even ends. The psychological and emotional toll of this is devastating. These psychosocial hazards must be mitigated, because teachers’ workplaces now include the ease with which a moment in the classroom can become a viral attack. This represents a seismic shift in the professional landscape for educators.

We need a much larger conversation

While the new doxing legislation is a significant step forward, it is only the beginning of a much larger conversation about teacher rights at work, digital safety, and AI governance. My research highlights the urgent need for trauma-informed methodologies in addressing these issues – not just for students, but also for teachers – as well as the critical role that legislation, such as the Online Safety Act 2021, must play in shaping future protections. As AI continues to reshape our world, the rights and safety of teachers must be prioritised, ensuring that they can carry out their essential work without fear of becoming the next viral victim. This is a challenge we must face head-on, with comprehensive research, policy, and action.

Janine Arantes is a researcher and educator at Victoria University, with a focus on the intersection of artificial intelligence (AI), digital safety and teacher wellbeing. Janine is the co-lead of the Teachers’ Rights and AI Network, an initiative that brings together educators, researchers, and policymakers to explore the implications of AI on teacher safety and to develop strategies that protect educators from emerging risks in the digital environment. With a background in educational technology, trauma-informed research, and policy advocacy, Janine’s work addresses the psychosocial risks associated with AI and its potential to disrupt traditional teaching environments. Find her on LinkedIn.