Commentary: It’s disturbing that deepfake porn and online harms are seen as unavoidable in our digital lives


SINGAPORE: In November, shocking news broke that the police were investigating teenage students from the Singapore Sports School for generating and circulating deepfake nude photos of their female schoolmates.

Later that month, five ministers in Singapore and over 100 public servants across 31 government agencies received extortionary emails, demanding cryptocurrency payment in return for not publishing doctored images of them in compromising positions.

These are Singapore’s latest cases of artificial intelligence (AI)-created deepfake sexual content – they will certainly not be the last, not here, not globally.

In 2017, a Reddit thread offering fake videos of “Taylor Swift” having sex amassed 90,000 subscribers before being taken down eight weeks later. Last year in a small Spanish town, more than 20 young girls found AI-generated nude photos of themselves circulating, created by teenage boys using innocent photos taken from social media.

AI may be trumpeted as the next big revolution, but it can be put to deeply nefarious use.

SINGAPORE TAKES ACTION

Even before the Sports School incident, authorities in Singapore were girding against this new wave of online assault, with legislation passed or proposed along three prongs.

The first is to regulate the platforms where online content is accessed. The Broadcasting Act was amended in 2023, allowing the Infocomm Media Development Authority (IMDA) to direct social media services – the gatekeepers of our cyber world – to block or remove egregious content within specified timelines and to adhere to an online Code of Practice.

Second, crimes committed in the analogue world but with a digital element can now be more effectively targeted, prevented and prosecuted. The Online Criminal Harms Act, passed last year, empowers authorities to issue directions to online service providers to restrict Singapore users' ...
