NüStories Magazine

Hong Kong Law Students Pursue Justice Against AI Deepfakes

BY KELLY YU

When women law students at the University of Hong Kong (HKU) learned a male classmate had created explicit pornographic images by manipulating their social media photos, they expected swift justice. Instead, they have found themselves battling a legal grey area.

“It’s like someone sneaking into my home, stripping me naked, and taking photos without my consent,” said Audrey*, a victim-turned-campaigner, who told NüVoices she chose her pseudonym for its meaning, “noble strength.” 

The case involved a male law student, X, who used AI to create pornographic images of at least 20 friends, classmates, and even his former high-school teachers without their consent. His girlfriend discovered the images and told classmates in February.

He had amassed more than 700 files, both original photos screenshotted from his victims’ social media feeds and altered versions, all meticulously organised in folders named after each victim.

Bewilderment and mistrust 

Emily*, another victim, told NüVoices she had considered X a friend who seemed “conventional and ethical.”

“But betrayal wasn’t the strongest feeling. What I felt most was bewilderment—why would a friend do this to me? It led to a lot of self-reflection and a greater sense of distrust,” she said.

Alerted by X’s girlfriend, a group of HKU victims reported the AI-generated pornography to the university’s disciplinary committee. However, university officials decided X’s actions did not constitute an offence the committee could address.

In a statement on July 17, HKU said it had conducted an internal investigation and issued X with a warning letter, after which he had “voluntarily withdrawn from a year-long overseas academic exchange programme in the upcoming school year.” He also wrote an apology. 

“The university’s response was insufficient to hold him accountable,” said Emily, who objected to being forced to share classrooms with the perpetrator for weeks. Only in the final class of the semester did HKU arrange a separate study session for his victims. 

Legal limbo

In July, the victims filed a complaint with Hong Kong’s Equal Opportunities Commission (EOC) on the grounds of sexual harassment. The EOC investigates complaints and helps with conciliation or legal action. In early September, they were told their case did not meet the legal definition of sexual harassment, so it could not proceed. 

NüVoices asked the EOC for comment and was told only that “The EOC observes the principle of confidentiality in handling enquiries and complaints and will not disclose the status and details of a particular case to the public.”

According to the EOC’s website, sexual harassment is defined as unwelcome sexual conduct that offends, humiliates, intimidates, or creates a hostile environment, including unwanted dating attempts, sexual comments, physical touching, displaying sexually explicit materials, and sexual assault.

Facing yet another institutional dead end, the victims requested an explanation for the EOC’s decision. “(We) can’t say we’re satisfied, but we actually quite expected this outcome,” they told NüVoices in a written reply.

Deepfake intimate images are a growing problem in Hong Kong, exposing shortcomings in the law. RainLily, a sexual violence crisis center in Hong Kong, said it received 11 requests for assistance involving sexualized synthetic or deepfake images between 2024 and 2025, up from eight between 2023 and 2024 and seven between 2022 and 2023.

Doris Chong, the center’s executive director, warned that recent advances in AI mean anyone can now easily create deepfake photos on a smartphone.

“The images being created now are much harder to identify as fake compared to those from a few years ago,” Chong said. “Many of these digital sex violence cases target women, eroding both their freedom in online spaces and their sense of safety when using the internet.”

Victims suffer fear and anxiety, made worse by uncertainty about where their images might appear and who might see them, she added.

As Chong explained, most never imagined that an ordinary photo shared on social media could be manipulated into fake AI-generated images, and some have deactivated their accounts entirely.

Emily now takes precautions on Instagram, adding “a word or two to cover my face. So if someone wants to screenshot it, there would be something to cover my face.”

Outdated laws

According to Chong, there is a “pressing need” to establish legal frameworks that regulate AI-generated pornography.

“Hong Kong’s laws on sexual violence use outdated ideas that put responsibility on victims — they must say ‘No’. Meanwhile, many foreign laws focus on whether the person doing the act asked for consent first,” Chong said. “For laws about AI-generated images, we shouldn’t just focus on whether using the technology is wrong, but on whether someone got consent when using it.”

Education is equally important because legislative change can be slow in Hong Kong, Chong added. The HKU case highlights significant gaps in the city’s legal framework on digital sexual violence: publishing or distributing sexual images without consent is illegal, but creating them is not explicitly prohibited.

“Broadly speaking, creation of sexually explicit imagery is not a crime. Artistic creation is not the same as voyeurism or the distribution of ‘real’ intimate images,” said Stuart Hargreaves, an associate professor of law at the Chinese University of Hong Kong (CUHK).

The same shortcomings exist in the obscenity laws.

The Control of Obscene and Indecent Articles Ordinance outlaws publishing obscene articles, but possessing or creating them is not illegal, according to Eric Chan, a solicitor familiar with the case.

Similarly, the Crimes Ordinance prohibits the publication or threats of publication of intimate images without consent when there is intent to cause distress. The mere creation or possession of such images is not criminalized.

“These offences only target publication. In any event, I think it’s unlikely that uploading the image to an AI platform to create a synthetic image can count as ‘publication’,” Chan told NüVoices. 

Paths to reform

Both experts argue that Hong Kong needs comprehensive legislative reform. For instance, the government could introduce new offences making it illegal to use someone’s photos without permission to create fake sexual images with AI, Chan suggested.

“It is a substantially pressing problem that requires careful consideration and detailed legislative reform, not quick amendments to existing provisions,” said Hargreaves. 

Hong Kong’s Chief Executive John Lee has said his administration will look at laws elsewhere to consider the best way to handle such cases. “The government will closely monitor the situation regarding the fast development and application of AI, examine global regulatory trends, and conduct in-depth research into international best practices to see what we should do,” Lee told reporters ahead of a weekly Executive Council meeting.

The privacy watchdog, the Office of the Privacy Commissioner for Personal Data, has launched a criminal investigation into the incident, but sees “no immediate need” to amend Hong Kong’s privacy laws.

South Korea and Taiwan, however, have recently passed new laws that Hong Kong’s administration could draw on. In South Korea’s notorious Nth Room case, criminals blackmailed underage girls into producing degrading sexual content that was distributed via Telegram between 2018 and 2020.

South Korea uncovered another major deepfake pornography crime in 2024. Victims included teachers, underage students and female military personnel, with as many as 500 schools affected. One Telegram channel shared the images with nearly 220,000 members. The country changed the law that year to make those who possess or view deepfake pornography liable for up to three years in prison or a fine of up to 30 million won (US$21,740).

Taiwanese lawmakers passed new legislation in 2023 that criminalizes deepfake-generated sexual imagery and strengthens protections for victims of AI-driven privacy violations. They did so after Taiwanese YouTube influencer Xiao Yu and his assistant used AI deepfake technology to superimpose the faces of at least 100 people onto pornographic content. Their victims included politicians, celebrities and influencers, whose images were sold for profit.

The British government also announced plans to criminalise the creation of sexually explicit deepfakes, making it illegal to both create and share intimate images without consent. Perpetrators could face up to two years in prison. 

Hoping for justice

While hoping for legislative changes, RainLily’s Chong has called on AI service providers to strengthen monitoring on their platforms to address nonconsensual image use. She also wants Hong Kong’s authorities to improve digital literacy education to raise awareness about the potential harm of deepfake technology.

Digital violence and sexual harassment remain pressing problems in both Hong Kong and mainland China. In July, news broke of a massive Telegram group with nearly 900,000 members sharing non-consensual explicit photos and videos of women in mainland China, highlighting the growing challenge of such abuse across borders.

READ: Overseas feminists call on Beijing to protect women from digital violence

Despite the institutional roadblocks, victims say they will not stop pursuing justice. “I feel exhausted having to think about it and look at it every day. But I can’t ignore it, because if I don’t care about it myself, basically no one else in the world will care anymore,” Audrey said.

Emily echoed her determination: “We really want people to keep talking about this issue to make social progress. As a victim, I hope my experience contributes to systemic changes.”

*Names have been changed to ensure the safety of sources.

About the author

Kelly is a Hong Kong-based multimedia journalist exploring the intersection of gender, culture, and technology. Her work focuses on amplifying women’s and underrepresented voices across China and Hong Kong. Her bylines appear in Earth.Org, Radio Television Hong Kong and the South China Morning Post.

About the editor

Mary Hennock reported for the BBC from London and later served as Beijing correspondent for Newsweek. Her work has also appeared in The Guardian. She now works primarily as an editor on in-depth reports with a China focus.

Photo credit: Kelly Yu