Deepfake apps to be banned under latest technology crackdown


Landmark laws targeting the disturbing rise of deepfake porn will be introduced to federal parliament.

The government said it will crack down on sexually-explicit AI-generated content by banning deepfake apps which create nude images.

Deepfakes are images or video manipulated using a person’s face or body to make it appear they are doing something that never happened, usually a sexual act.


The fresh legislation will restrict access to, or ban outright, online tools which “nudify” images of people.

Online stalking apps will also be captured in the legislation introduced to parliament.

“There is a place for AI and legitimate tracking technology in Australia but there is no place for apps and technologies that are used solely to abuse, humiliate and harm people, especially our children,” said Minister for Communications Anika Wells.

“That’s why the Albanese government will use every lever at our disposal to restrict access to nudification and undetectable online stalking apps and keep Australians safer from the serious harms they cause.

“This is too important for us not to act.

“Abusive technologies are widely and easily accessible and are causing real and irreparable damage now.”

Wells said the onus will be on big tech companies to prevent the availability of these apps once the ban is enshrined.

The legislation will be a significant step in protecting children online after social media apps were banned for Australians aged under 16 earlier this year, Wells said.


Anika Wells during a press conference at Parliament House in Canberra on Monday, 25 November 2024. (Photo: Alex Ellinghausen)

It is estimated 98 per cent of deepfakes circulating online are pornographic and most victims depicted are women.

eSafety research found three per cent of children reported having a non-consensual fake nude image made of themselves.

Political leaders and child advocates also met today in Canberra to discuss urgent action and AI reform to ensure child protection is front of mind as the technology rapidly evolves.

The roundtable, hosted by the International Centre for Missing and Exploited Children (ICMEC) Australia, called for baseline AI training for police investigations, a 120-day review of facial recognition capability and a nationwide prevention and awareness campaign on AI-enabled harms.


“We recognise AI as both a sword and a shield in the child protection landscape,” said ICMEC Australia chief executive Colm Gannon.

“AI can help safeguard children faster and reduce workplace harm for investigations by annotating or blurring harmful imagery.

“But Australia needs a clear, funded roadmap to harness the good and shut down the harm.”

The NSW government last month introduced laws into parliament which made sexually-explicit deepfakes illegal.

