
AI Deepfake Cyberbullying Crisis Exposes School Accountability Gaps

A RAND survey finds one in five secondary school principals has dealt with deepfake bullying, but most staff have received no training on how to respond.


By Negotiate the Future

3/17/26

Thirteen percent of K–12 principals reported at least one incident of bullying involving AI-generated deepfakes during the 2023–2024 and 2024–2025 school years, according to a RAND Corporation survey published in late 2025. The rate was higher in secondary schools: 22 percent of high school principals and 20 percent of middle school principals reported cases, compared with 8 percent at the elementary level.

The scale of the underlying problem dwarfs those numbers. The National Center for Missing and Exploited Children recorded 440,000 reports of AI-generated child sexual abuse material in the first half of 2025 alone, up from 4,700 in all of 2023. Yet more than two-thirds of school staff surveyed by RAND said they had received no training on deepfakes or rated what they received as poor or mediocre. The training gap leaves administrators improvising responses to incidents that carry potential criminal liability, Title IX implications, and lasting psychological harm for targets.

A case in Louisiana illustrates the pattern.

Several middle school boys used AI tools to generate pornographic images of eight female classmates and circulated them among peers. When one of the girls confronted a boy she accused of creating the images and punched him on a school bus, she was expelled. Two boys were eventually charged, but the incident drew scrutiny because a victim was punished for a separate infraction before the perpetrators were held accountable.

Louisiana has since moved to close its legal gaps. Sen. Regina Barrow introduced SB 346, which would prohibit the use of deepfake material against K–12 students, and SB 347, which would add unlawful deepfakes to the definition of power-based violence under the state’s Campus Accountability and Safety Act.

As of January 2026, 46 states had enacted laws addressing AI-generated explicit imagery, but enforcement mechanisms and school-level protocols remain uneven. The RAND authors recommend integrating AI-driven detection tools, expanding digital literacy curricula, and updating cyberbullying policies to address synthetic media specifically. Whether districts act on those recommendations before the next incident may determine how much further the accountability gap widens.
