
Category Archives: australia

Australia is tightening the rules on children’s privacy – here’s how it will work

Beth Macdonald/Unsplash
Tama Leaver, Curtin University

Australia’s privacy laws have been woefully out of date for a long time – not fit to address the realities of the digital world.

As part of the long overdue update, the Privacy and Other Legislation Amendment Act in 2024 directed the Office of the Australian Information Commissioner (OAIC) to develop a code to better protect the privacy of young Australians in the digital world.

This is urgently needed. By the time a child turns 13, around 72 million pieces of data will have been collected about them.

This week, the OAIC published a draft of the Children’s Online Privacy Code, which is now open for public comment.

What’s in the code?

The code’s scope is much wider than just social media. It encompasses most online services, spaces and platforms that children use. Importantly, it also includes services that may contain children’s personal data but are used by adults.

Everything from educational platforms to infant tracking apps will be subject to the code. The best interests of the child are embedded in it, and services will be expected to interpret and implement it.

Data minimisation

This specifies that children’s personal data can only be collected by online services where there’s a clear and direct purpose for that collection, and that the data should only be kept for as long as it’s necessary to fulfil that purpose.

Any further data collection requires explicit consent requested in a way that’s age-appropriate for the child.

This ensures platforms only request what’s actually mission-critical. The onus is on services to delete personal data as soon as it’s no longer needed, helping to prevent children’s data being caught up in data breaches.

The right to delete

Where platforms and services hold children’s personal data, children will now have a clear and explicit right to request that data is deleted.

The “right to be forgotten” has been on privacy advocates’ wishlist for decades. It recognises individuals own their own data and should maintain control over it where possible.

Geolocation transparency

When children consent to having their geographic location tracked by digital devices and services – or, for those under 15, their parents consent on their behalf – children of all ages will be notified whenever tracking services share that information.

Geolocation data can be particularly tricky, even within families. While some might find location tracking helpful, others view it as intrusive surveillance.

Making tracking at least transparent to children will help ensure they’re active and aware participants in these services.

Age-appropriate explanations

Claiming to have read an app’s terms of service or privacy policy is one of the most common white lies told.

That’s mostly because these are long, impenetrable, almost unreadable documents. When children are asked to consent to share their data, the code specifies that the explanation for this request must be understandable and age-appropriate. If the request is aimed at ten-year-olds, the explanation needs to be clear to the average ten-year-old.

This is vital. Not only does it allow children to make better choices, it also increases their digital literacy as they make meaningful choices about their own data.

As part of this, deceptive design elements that might trick children into sharing personal data are explicitly not allowed.

We can expect pushback from big tech

There will undoubtedly be considerable pushback from big technology platforms about the scope of the code. It seeks to disrupt business as usual, and requires that children’s data is only collected for specific purposes, with explicit consent, and retained for as little time as possible.

That’s the opposite of the “grab and keep as much data for as long as possible” logic that drives most tech companies and platforms today. Big data is still imagined as the oil of the digital world, and private, personal data is among its most valued forms. Artificial intelligence companies are even thirstier for that personal data to train their systems.

We’ll need more digital literacy

For children under 15, the code relies on parental consent. That consent is visible to children, which is important in keeping them informed. However, there’s work to do to equip every parent with the tech literacy they need to make informed choices with their children.

In some cases, children don’t have a parent or carer they can easily turn to. For children in the most at-risk and challenging situations, there may be difficulties in ensuring the consent process really can work in their best interest.

In our Manifesto for a Better Children’s Internet, colleagues and I from the ARC Centre of Excellence for the Digital Child offer a roadmap for an internet better aligned with children’s needs and experiences.

Crucially, we argue there should be more focus on protecting children within the digital environment, rather than from it.

Maximising children’s opportunities in the digital world means trying to make as many digital spaces as possible available to them, while ensuring those spaces are designed to be as safe and age-appropriate as possible.

The Children’s Online Privacy Code is set to make an important contribution in achieving that aim. It recognises children’s right to participation as much as their right to protection.

What happens next?

The OAIC has launched a Privacy for Kids website, which offers age-appropriate explanations of the code for children and adults.

It provides a variety of tools and age-appropriate resources to allow children and adults to offer their thoughts on the draft code. That consultation is open until June 5 this year.

After responding to the public consultation, the final version of the code must go live by December 10 2026.

Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australia’s social media ban won’t stop cyberbullying


[Roxy Aln / Unsplash]

Tama Leaver, Curtin University

The Australian Federal government’s Online Safety Amendment (Social Media Minimum Age) Act, commonly referred to as the “social media ban”, is now in effect.

In the months leading up to the ban, there have been a lot of stories about what will actually happen once the legislation is active, and many people believe the ban will prevent cyberbullying. It won’t – because bullying is a social problem, which can’t be solved with a quick technical fix.

What is happening?

The ban requires that social media platforms take reasonable steps to prevent Australians under the age of 16 from having an account on those platforms.

The platforms definitely included in the ban are Facebook, Instagram, Threads, Kick, Reddit, Snapchat, TikTok, Twitch, X (formerly Twitter) and YouTube.

This list is dynamic and will likely change and grow over time.

Some platforms are, initially at least, definitely not subject to the ban, including Discord, GitHub, Google Classroom, LEGO Play, Messenger, Pinterest, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids.

What isn’t happening?

There are a lot of myths and misunderstandings circulating about the ban.

Some people have the impression the ban is a broad piece of legislation to prevent any online harms children and young people might encounter. It isn’t.

Rather, this legislation narrowly targets social media platforms, and can only prevent teens and young people from having an account on those platforms.

Despite recent concerns raised about the gaming platform Roblox, for example, it is not subject to the ban as its primary purpose is gaming, not social media.

Similarly, while teens may not be able to have accounts on these platforms, they may still be able to access content on many of them.

On YouTube, for example, under-16s can still watch public YouTube videos. They just can’t subscribe to channels, like videos or leave comments.

Cyberbullying

Cyberbullying – or bullying that extends into online spaces and platforms – is a significant issue for young Australians.

A 2021 report found that more than one-third of Australian young people had experienced bullying online within a six-month period.

Many teens, parents and trusted adults hope the ban will prevent cyberbullying.

Some of the most recognisable faces and loudest voices promoting the ban are bereaved parents who believe their children were cyberbullied to the point of suicide.

That is incredibly tragic, and any parent in that situation would understandably be pushing for change so no one else has that awful experience.

Unfortunately, the social media ban will not stop cyberbullying.

In fact, it may not reduce cyberbullying significantly at all.

While under-16s won’t have Snapchat and Instagram accounts, they will still have access to messaging platforms such as WhatsApp, Messenger, Discord and others.

It would be naive to believe that bullying activity will not simply shift from one platform to another.

The shift might make cyberbullying worse in some ways, as bullying on more closed messaging platforms may be less visible to others.

Bullying is never (just) a technology problem

It can be reassuring to think of bullying as somehow just a social media or online problem.

While cyberbullying extends the abuse of bullying into homes and bedrooms, platforms don’t actually bully. People do. And those people are most often peers, colleagues and classmates – much less often strangers.

In some ways the term cyberbullying itself is unhelpful. It puts focus on the “cyber” component, when the bullying is actually the problem.

Bullying is widespread in Australian schools and well beyond.

Dealing with cyberbullying

If you or a young person you know is facing cyberbullying, there is plenty of guidance available.

Youth mental health service ReachOut offers very clear advice for young Australians on how to deal with cyberbullying.

Strategies include:

  • slowing down before responding to bullying content

  • taking the space to calm down before doing anything

  • keeping screenshots and evidence

  • trying not to check for new messages or content too often

  • blocking or reporting those doing the bullying.

For parents and trusted adults supporting young people dealing with bullying, the eSafety Commissioner’s website also provides clear, actionable advice.

Indeed, having the support of at least one trusted adult is a key part in helping young people navigate and cope with experiences of cyberbullying.

The social media ban is a fairly blunt tool, and does not have the complexity needed to directly address or necessarily even reduce cyberbullying.

However, if the ban allows Australian families to continue, or even begin, conversations about young people’s experiences online, then that’s of real value to young Australians.

For parents and trusted adults, keeping that conversation going is vital. An open door to a trusted adult is key to supporting young people, no matter what they experience online.

Under-16s should keep in mind that they have not broken the law if they get around the ban. The onus is entirely on platforms to prevent under-16s from having accounts.

No magic button

Under-16s, their parents, and their trusted adults, should feel perfectly able and safe to have full and frank conversations about any online experiences, including on social media platforms.

There is no quick fix, no magic button that will stop cyberbullying. The social media ban certainly won’t do it – and it shouldn’t give young people or adults a false sense of security.

For young Australians, having access to trusted adults is vital to reducing online bullying, building resilience, and shifting the culture.

In situations where trusted adults are not available, young people should remember organisations like ReachOut, Headspace and the Kids Helpline (1800 55 1800) are there to provide support, too.

Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australia is about to ban under-16s from social media. Here’s what kids can do right now to prepare


Dolgachov / Getty Images

Daniel Angus, Queensland University of Technology and Tama Leaver, Curtin University

If you’re a young person in Australia, you probably know new social media rules are coming in December. If you and your friends are under 16, you might be locked out of the social media spaces you use every day.

Some people call these rules a social media ban for under 16s. Others say it’s not a “ban” – just a delay.

Right now we know the rules will definitely include TikTok, Snapchat, Instagram, Facebook, Threads, Reddit, X, YouTube, Kick and Twitch. But that list could grow.

We don’t know exactly how the platforms will respond to the new rules, but there are things you can do right now to prepare, protect your digital memories, and stay connected.

Here’s a guide for the changes that are coming.

Download your data

TikTok, Instagram, Snapchat and most other platforms offer a “download your data” option. It’s usually buried in the app settings, but it’s powerful.

A data download (sometimes called a “data checkout” or “export”) includes things like:

  • photos and videos you’ve uploaded

  • messages and comments

  • friend lists and interactions

  • the platform’s inferences about you (what it thinks you like, who you interact with most, and the sort of content it suggests for you).

Even if you can’t access your account later, these files let you keep a record of your online life: jokes, friendships, cringey early videos, glow-ups, fandom moments, all of it.

You can save it privately as a time capsule. Researchers are also building tools to help you view and make sense of it.

Downloading your archive is a smart move while your accounts are still live. Just make sure you store it somewhere secure. These files can contain incredibly detailed snapshots of your daily life, so you might want to keep them private.
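If you want to make sense of an archive once you’ve downloaded it, a short script can help. As a rough sketch only: export formats differ between platforms and change often, so the file structure and key names below are hypothetical, not any platform’s real schema.

```python
import json

# Hypothetical, simplified export structure. Real platform archives use
# their own (and frequently changing) folder layouts and key names.
sample_export = {
    "media": [
        {"type": "photo", "uploaded": "2021-03-14"},
        {"type": "video", "uploaded": "2022-07-01"},
    ],
    "messages": [
        {"to": "friend_a", "text": "hey"},
        {"to": "friend_b", "text": "see you tomorrow"},
    ],
    "inferences": ["likes skateboarding", "interested in K-pop"],
}

def summarise_export(export: dict) -> dict:
    """Count the main record types in a (simplified) data export."""
    return {
        "media_items": len(export.get("media", [])),
        "messages": len(export.get("messages", [])),
        "inferred_interests": len(export.get("inferences", [])),
    }

print(json.dumps(summarise_export(sample_export), indent=2))
```

Even a simple summary like this can make clear how much a platform holds about you – including the inferences it has drawn, not just the content you posted.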

Don’t assume platforms will save anything for you

Some platforms may introduce official ways to export your content when bans begin. Others may move faster and simply block under-age accounts with little warning.

As one example, Meta – the parent company of Facebook, Instagram and Threads – has begun to flag accounts it thinks belong to under-16s. The company has also given early indications that it will permit data downloads after the new rules come into effect.

For others the situation is less clear.

Acting now, while you can still log in normally, is the safest way to keep your stuff.

Four ways to stay connected

Losing access to the platform you use every day to talk with friends can feel like losing part of your social world. That’s real, and it’s okay to feel annoyed, worried, or angry about it.

Here are four ways to prepare.

1. Swap phone numbers or handles on non-banned platforms now.

Don’t wait for the “you are not allowed to use this service” message.

2. Set up group chats somewhere stable.

Use iMessage, WhatsApp, Signal, Discord, or whatever works for your group and doesn’t rely on age-restricted sign-ups.

3. Keep community ties alive.

Many clubs, fandom spaces, gaming groups and local communities are on multiple sites or platforms (Discord servers, forums, group chats). Get plugged into those spaces.

4. Don’t presume you’ll be able to get around the ban.

Teens who get around the ban are not breaking the law. There is no penalty for teens, or for parents who help them, if they do get around the ban and access social media while under 16.

It’s up to platforms to make these new laws work. Not teens. Not parents.

Do prepare, though. Don’t assume you will be able to get around the ban.

Just using a VPN to pretend your computer is in another country, or wearing a rubber mask to look older in an age-estimating selfie, probably won’t be enough.

A note for adults: take big feelings seriously

Most people recognise the social connections, networks and community enabled by social media are valuable – especially to young people.

For some teens, social media may be their primary community and support group. It’s where their people are.

It will be difficult for some teens when that community disappears; for others, it may be even worse.

The ideal role of trusted adults is to listen, validate and support teens during this time. No matter how older people feel, for young people this may be like losing a large part of their world. For many that will be really hard to cope with.

Services like Headspace and Kids Helpline (1800 55 1800) are there to support young people, too.

How to keep your agency in a frustrating situation

A lot of people will find it frustrating that we’re excluding teens, rather than forcing platforms to be built safer and better for everyone. If you feel that way, too, you’re not alone.

But you aren’t powerless.

Saving your data, preparing alternative communication channels, and speaking out if you want to are all ways to:

  • own your digital history

  • stay connected on your own terms

  • make sure youth voices inform how Australia thinks about online life going forward.

You’re allowed to feel annoyed. You’re also allowed to take steps that protect your future self.

If you lose access, you’re not gone – just changing channels

Social media bans for teens will create disruption. But they won’t be the end of your friendships, creativity, identity exploration, or culture.

It just means the map is shifting. You get to make deliberate choices about where you go next.

And whatever happens, the online world isn’t going to stop changing. You’re part of the generation that actually understands that, and that’s a strength, not a weakness.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology and Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Australiana’ images made by AI are racist and full of tired cliches, new study shows

Tama Leaver, Curtin University and Suzanne Srdarov, Curtin University


‘An Aboriginal Australian’s house’ generated by Meta AI in May 2024.
Meta AI

Big tech company hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable, and about to radically reshape the future in many ways.

Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found when generative AIs produce images of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures more at home in the country’s imagined monocultural past.

Basic prompts, tired tropes

In May 2024, we asked: what do Australians and Australia look like according to generative AI?

To answer this question, we entered 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney.

The prompts were as short as possible to see what the underlying ideas of Australia looked like, and what words might produce significant shifts in representation.

We didn’t alter the default settings on these tools, and collected the first image or images returned. Some prompts were refused, producing no results. (Requests with the words “child” or “children” were more likely to be refused, clearly marking children as a risk category for some AI tool providers.)

Overall, we ended up with a set of about 700 images.
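The methodology above – short, near-identical prompts fed to several tools – can be illustrated programmatically. This is a hedged sketch only: the article does not reproduce the full 55-prompt list, so the subjects and qualifiers below are a small illustrative subset drawn from examples in the text.

```python
from itertools import product

# A small subset of subjects mentioned in the article; the study's full
# 55-prompt battery is not reproduced here.
subjects = ["mother", "father", "parent", "family", "house"]
qualifiers = ["Australian", "Aboriginal Australian"]

def build_prompts(qualifiers, subjects):
    """Cross each qualifier with each subject, keeping prompts minimal
    so the same wording can be given to every image generator."""
    return [
        f"An {q}'s house" if s == "house" else f"An {q} {s}"
        for q, s in product(qualifiers, subjects)
    ]

prompts = build_prompts(qualifiers, subjects)
print(len(prompts))  # 10 prompts from this illustrative subset
```

Generating the battery this way guarantees every tool receives identical wording, which is what makes the resulting images directly comparable across generators.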

The images suggested travelling back through time to an imagined Australian past, relying on tired tropes like red dirt, Uluru, the outback, untamed wildlife, and bronzed Aussies on beaches.

‘A typical Australian family’ generated by Dall-E 3 in May 2024.

We paid particular attention to images of Australian families and childhoods as signifiers of a broader narrative about “desirable” Australians and cultural norms.

According to generative AI, the idealised Australian family was overwhelmingly white by default, suburban, heteronormative and very much anchored in a settler colonial past.

‘An Australian father’ with an iguana

The images generated from prompts about families and relationships gave a clear window into the biases baked into these generative AI tools.

“An Australian mother” typically resulted in white, blonde women wearing neutral colours and peacefully holding babies in benign domestic settings.

A white woman with eerily large lips stands in a pleasant living room holding a baby boy and wearing a beige cardigan.
‘An Australian Mother’ generated by Dall-E 3 in May 2024.
Dall-E 3

The only exception was Firefly, which produced images exclusively of Asian women, outside domestic settings and sometimes with no obvious visual links to motherhood at all.

Notably, none of the images generated of Australian women depicted First Nations Australian mothers, unless explicitly prompted. For AI, whiteness is the default for mothering in an Australian context.

An Asian woman in a floral garden holding a misshapen present with a red bow.
‘An Australian parent’ generated by Firefly in May 2024.
Firefly

Similarly, “Australian fathers” were all white. Instead of domestic settings, they were more commonly found outdoors, engaged in physical activity with children, or sometimes strangely pictured holding wildlife instead of children.

One such father was even toting an iguana – an animal not native to Australia – so we can only guess at the data responsible for this and other glaring glitches found in our image sets.

An image generated by Meta AI from the prompt ‘An Australian Father’ in May 2024.

Alarming levels of racist stereotypes

Prompts asking for images of Aboriginal Australians surfaced some concerning results, often with regressive visuals of “wild”, “uncivilised” and sometimes even “hostile native” tropes.

This was alarmingly apparent in images of “typical Aboriginal Australian families” which we have chosen not to publish. Not only do they perpetuate problematic racial biases, but they also may be based on data and imagery of deceased individuals that rightfully belongs to First Nations people.

But the racial stereotyping was also acutely present in prompts about housing.

Across all AI tools, there was a marked difference between an “Australian’s house” – presumably from a white, suburban setting and inhabited by the mothers, fathers and their families depicted above – and an “Aboriginal Australian’s house”.

For example, when prompted for an “Australian’s house”, Meta AI generated a suburban brick house with a well-kept garden, swimming pool and lush green lawn.

When we then asked for an “Aboriginal Australian’s house”, the generator came up with a grass-roofed hut in red dirt, adorned with “Aboriginal-style” art motifs on the exterior walls and with a fire pit out the front.

Left, ‘An Australian’s house’; right, ‘An Aboriginal Australian’s house’, both generated by Meta AI in May 2024.
Meta AI

The differences between the two images are striking. They came up repeatedly across all the image generators we tested.

These representations clearly do not respect the idea of Indigenous Data Sovereignty for Aboriginal and Torres Strait Islander peoples, under which they would own their own data and control access to it.

Has anything improved?

Many of the AI tools we used have updated their underlying models since our research was first conducted.

On August 7, OpenAI released their most recent flagship model, GPT-5.

To check whether the latest generation of AI is better at avoiding bias, we asked ChatGPT (running GPT-5) to “draw” two images: “an Australian’s house” and “an Aboriginal Australian’s house”.

Red tiled, red brick, suburban Australian house, generated by AI.
Image generated by ChatGPT5 on August 10 2025 in response to the prompt ‘draw an Australian’s house’.
ChatGPT5.
Cartoonish image of a hut with a fire, set in rural Australia, with Aboriginal art styled dot paintings in the sky.
Image generated by ChatGPT5 on August 10 2025 in response to the prompt ‘draw an Aboriginal Australian’s house’.
ChatGPT5.

The first showed a photorealistic image of a fairly typical red-brick suburban family home. In contrast, the second was more cartoonish: a hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky.

These results, generated just a couple of days ago, speak volumes.

Why this matters

Generative AI tools are everywhere. They are part of social media platforms, baked into mobile phones and educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software.

In short, they are unavoidable.

Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians.

Given how widely they are used, it’s concerning that AI is producing caricatures of Australia and visualising Australians in reductive, sexist and racist ways.

Given the ways these AI tools are trained on tagged data, reducing cultures to clichés may well be a feature rather than a bug for generative AI systems.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Make no mistake, this was Australia’s Brexit.

Aboriginal Australian Flag but with a broken heart at the centre

<heartbroken rant>

Seeing the referendum to give a Voice to Aboriginal and Torres Strait Islander peoples profoundly defeated across Australia today is heart-breaking and confusing.

My heart goes out to all Australians feeling let down, but especially, of course, to the Indigenous people of this country for whom this would have been, at least, one small step in the right direction.

As someone who researches online communication, digital platforms and how we communicate and tell stories to each other, I fear the impact of this referendum will be even wider still.

The rampant and unabashed misinformation and disinformation that washed over social media, and was then amplified and normalised as it was reported in mainstream media, is more than worrying.

Make no mistake, this was Australia’s Brexit. It was the pilot, the test, to see how far disinformation can succeed in campaigning in this country. And succeed it did.

In the UK, the pretty devastating economic impact of Brexit has revealed the lies that drove campaigning for it (as have former campaigners who admitted the truth was no barrier for them).

I fear most non-Indigenous Australians will not have as clear and unambiguous a sign that they’ve been lied to, at least this time.

In Australia, the mechanisms of disinformation have now been tested, polished, refined and sharpened. They will be a force to be reckoned with in all coming elections. And our electoral laws lack the teeth to do almost anything about that right now.

I do not believe that today’s result is just down to disinformation, but I do believe it played a significant role. I’m not sure if it changed the outcome, but I’m not sure it didn’t, either.

Research looking at early campaigning around the Voice warned about unprecedented levels of misinformation. There will be more that looks back after this result.

But before another election comes along, we need more than just research. We need more than just improved digital literacies, although that’s profoundly necessary.

We need critical thinking like never before, we need to equip people to make informed choices by being able to spot bullshit in its myriad forms.

I am under no illusion that means people will agree, but they deserve to have tools to make an actually informed choice. Not a coerced one. Social media isn’t just entertainment; it’s our political sphere. Messages don’t just live on social media, even if they start there.

Messages might start digital, but they travel across all media, old and new.

I know this is a rant after a profoundly disappointing referendum, and probably not the best expressed one. But there is so much work to do if this country isn’t even more assailed by weaponised disinformation at every turn.

</heartbroken rant>

Banning ChatGPT in Schools Hurts Our Kids

As new technologies emerge, educators have an opportunity to help students think about the best practical and ethical uses of these tools, or to hide their heads in the sand and hope it’ll be someone else’s problem.

It’s incredibly disappointing to see the Western Australian Department of Education forcing every state teacher to join the heads in the sand camp, banning ChatGPT in state schools.

Generative AI is here to stay. By the time our kids graduate, these will be important creative and productive tools in workplaces and creative spaces.

Education should be arming our kids with the critical skills to use, evaluate and extend the uses and outputs of generative AI in an ethical way. Not be forced to try them out behind closed doors at home because our education system is paranoid that every student will somehow want to use these to cheat.

For many students, using these tools to cheat probably never occurred to them until they saw headlines about it in the wake of WA joining a number of other states in this reactionary ban.

Young people deserve to be part of the conversation about generative AI tools, and to help think about and design the best practical and ethical uses for the future.

Schools should be places where those conversations can flourish. Having access to the early versions of tomorrow’s tools today is vital to helping those conversations start.

Sure, getting around a school firewall takes one kid with a smartphone using it as a hotspot, or simply using a VPN. But they shouldn’t need to resort to that. Nor should students from more affluent backgrounds be more able to circumvent these bans than others.

Digital and technological literacies are part of the literacy every young person will need to flourish tomorrow. Education should be the bastion equipping young people for the world they’re going to be living in. Not trying to prevent them thinking about it at all.

[Image: “Learning with technology” generated by Lexica, 1 February 2023]

Update: Here’s an audio file of an AI speech synthesis tool by Eleven Labs reading this blog post:
