An investigation by Wired suggests the growing problem of AI-generated fake nude images in schools is far more widespread than many people realize. Reporter Matt Burgess, working with the digital-deception research group Indicator, examined how artificial-intelligence “nudify” tools are being used by students—primarily boys—to create fabricated sexual images of classmates in just a few clicks. These images often appear realistic enough to embarrass or humiliate victims, and once created they can spread rapidly online or through messaging apps.
The investigation reviewed publicly documented incidents and found cases connected to roughly 90 schools in at least 28 countries since 2023. Those reported cases alone involved more than 600 identified victims. Researchers emphasize that these figures likely capture only a fraction of the problem's true scale, since many incidents are never formally reported.
Lloyd Richardson of the Canadian Centre for Child Protection told Wired that the problem has become so common that it would be difficult to find a school untouched by it. Unlike earlier forms of harassment that often required technical skills or access to specialized online communities, today’s AI tools are widely available and simple to use. Students can generate manipulated images using ordinary apps that anyone can download.
The technology has advanced faster than schools and law enforcement agencies can adapt, and disciplinary and legal responses vary widely as a result. In some situations, students responsible for creating or sharing the images have faced felony charges related to child sexual abuse material. In other cases, schools have issued suspensions or taken little formal action.
Students targeted by the fake images report significant emotional harm, including embarrassment, anxiety, and concern that the fabricated pictures could continue circulating indefinitely. In response, some schools have begun taking precautionary steps, such as allowing students to opt out of having their photos included in yearbooks to reduce the chances that their images could be used to create deepfakes.