High · Verified · Investigation Opened

University of Hong Kong AI Deepfake Pornography Scandal

A University of Hong Kong law student used free AI software to generate 700 pornographic deepfake images of approximately 20-30 women, including university classmates, primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, exposing a major gap in Hong Kong law, which criminalizes only the distribution, not the creation, of AI deepfakes.

AI System

AI deepfake generation tools (free online software)

Developer

Unknown (free online AI tools)

Occurred

February 1, 2025

Reported

July 12, 2025

Jurisdiction

HK

Platform

Other

What Happened

In February 2025, a male law student at the University of Hong Kong (HKU) admitted to using free online AI software to generate pornographic deepfake images after friends discovered the images on his computer. He had sourced photographs from approximately 20-30 women's social media profiles — including university classmates, primary school classmates, and secondary school teachers — and used AI to create 700 sexually explicit images featuring their faces.

Although the images were reportedly not shared online, their existence caused immense distress. The incident became public on July 12, 2025, when victims posted accusations to Instagram.

HKU initially responded with only a warning letter and a demand for an apology, provoking public outrage over the perceived leniency. Hong Kong's Privacy Commissioner opened a criminal investigation on July 15, 2025. However, victims faced a legal gap: Hong Kong law criminalizes the distribution of non-consensual intimate images (including AI-generated ones) but not their creation.

Victims were forced to continue sharing classroom spaces with the accused on at least four occasions, causing what they described as "unnecessary psychological distress."

The case exposed East Asia's broader legal blind spot regarding AI-generated non-consensual imagery and became a catalyst for reform debates across Hong Kong, Japan, and the region.

AI Behaviors Exhibited

Free AI deepfake tools generated realistic pornographic images from ordinary social media photos without any age verification, consent checking, identity protection, or content safety mechanisms.

How Harm Occurred

  • Non-consensual creation of sexually explicit AI imagery from publicly available photos
  • Proximity of perpetrator to victims (same university, shared classrooms) amplified psychological impact
  • Legal gap preventing prosecution created sense of impunity and further distressed victims
  • University's lenient initial response compounded harm

Outcome

Ongoing
  • February 2025: Student admitted to generating images after friends discovered them on his computer
  • July 12, 2025: Accusations posted to Instagram by victims, going public
  • University of Hong Kong initially issued only a warning letter and demanded apology — sparking widespread public outrage over perceived leniency
  • July 15, 2025: Hong Kong Office of the Privacy Commissioner for Personal Data opened criminal investigation
  • As of August 2025, victims have not pressed criminal charges, partly due to lack of clear legal basis — Hong Kong law only criminalizes distribution of intimate AI images, not their creation
  • Victims were compelled to continue sharing classroom spaces with the accused, causing ongoing psychological distress
  • Case became catalyst for legal reform debate in Hong Kong and across East Asia regarding AI-generated non-consensual imagery

Harm Categories

Third Party Harm Facilitation · Psychological Manipulation

Contributing Factors

freely accessible AI tools · social media sourcing · perpetrator proximity · legal gap · institutional leniency · continued proximity

Victim

Approximately 20-30 women including the perpetrator's university classmates, primary school classmates, and secondary school teachers. Images sourced from victims' social media profiles.

Cite This Incident

APA

NOPE. (2025). University of Hong Kong AI Deepfake Pornography Scandal. AI Harm Tracker. https://nope.net/incidents/2025-hku-deepfake-scandal

BibTeX

@misc{2025_hku_deepfake_scandal,
  title = {University of Hong Kong AI Deepfake Pornography Scandal},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-hku-deepfake-scandal}
}