The latest groundswell of legislative activity aimed at the tech sector concerns the question of who has the right to use a person’s likeness. Whereas existing licensing arrangements might limit how a famous person’s image is used, generative AI has amplified the problem with powerful deepfake tools, and the risk has spread beyond the Hollywood elite. The problem is perhaps most bluntly illustrated by the existence of nudify apps, which generate explicit deepfakes without a subject’s consent.

The harms can be significant, particularly for young people (and particularly young women). Research from UK regulator Ofcom shows the vast majority of sexually explicit deepfakes are of women, many of whom suffer from PTSD or anxiety as a result of being targeted. Deepfakes, Ofcom says, are “already doing serious harm to ordinary individuals – whether that is by being featured in nonconsensual sexual deepfake videos or falling victim to deepfake romance scams and fraudulent adverts.” 404 Media recently reported on the proliferation of videos, generated using OpenAI’s Sora 2, that show teenage girls being strangled.

Nations are beginning to wake up to the extent of the deepfake threat, and some are responding with regulations. Denmark is expected to pass a bill amending its copyright law to ban the sharing of deepfakes, in an attempt to protect citizens’ likenesses and voices from nefarious use. In effect, it would give those personal traits protections similar to those afforded to biometric data.

Now, Australia is pursuing a similar law to limit misuse of people’s likenesses.

‘You should own your face’: Pocock pushes for likeness law

ABC News reports that independent senator David Pocock has introduced a new proposal before federal parliament, which he says “would put into legislation that your face, voice and likeness is yours.”

“To me, this seems like a bit of a no-brainer. You should own your face.”

Pocock is concerned that the government is moving too slowly to make sure laws reflect the state of reality when it comes to deepfakes. At present, he says, “unless a deepfake is sexually explicit, there’s very little that you can do as an Australian” to prosecute whoever created it.

But scams, commercial exploitation, disinformation and other misappropriations are becoming more common. Pocock calls the deepfake threat “a huge freight train that is coming at us.”

His bill proposes adding a dedicated complaints framework to the Online Safety Act, which would grant the eSafety Commissioner powers to demand deepfake removals and issue immediate fines. Individuals who share nonconsensual deepfakes would face an upfront penalty of AU$165,000 (about US$106,000). Companies that fail to comply with a removal notice could pay as much as AU$825,000 (about US$533,000).

Moreover, proposed changes to the Privacy Act would allow Australian citizens to bring civil lawsuits and sue perpetrators directly for financial compensation if they can prove that they suffered “emotional harm.”

Pocock says “we have to draw a line in the sand and say, this is not on – you cannot deepfake someone without their consent.”

Policymakers worry government is dragging its feet on AI laws

His colleague, independent MP Kate Chaney, is introducing a bill to criminalize the use of AI tools purpose-built to create child sexual abuse material. She says the government is “missing in action” on AI regulation.

“The US, the UK, Canada, Japan, Singapore all have an equivalent,” she says. “Australia has supported the idea of them but has not yet actually taken action on putting one forward. We can’t afford to sit around twiddling our thumbs wondering what to do about AI while it is changing so rapidly.”

The government this year reoriented its AI messaging to focus on the economic promise of the technology, and says it is not rushing into regulation.

