Several Los Angeles-area school districts are investigating reports of "inappropriate" artificial intelligence-generated images of students that are being spread online and in text messages.
In February, middle school students told Beverly Hills school administrators that inappropriate images were circulating at Beverly Vista Middle School.
"We want to make it unequivocally clear that this behavior is unacceptable and does not reflect the values of our school community," the district said in a statement to Fox News at the time.
"Although we are aware of similar situations occurring all over the nation, we must act now. This behavior rises to a level that requires the entire community to work in partnership to ensure it stops immediately," it read.
Dana Hills High School Principal Jason Allemann told the outlet that a letter had been sent to parents notifying them of AI-generated nude images circulating online.
Los Angeles Unified School District (LAUSD) also said in a statement to Fox News that it is looking into the "allegations of inappropriate photos being created and disseminated within the Fairfax High School community."
"These allegations are taken seriously, do not reflect the values of the Los Angeles unified community, and will result in appropriate disciplinary action if warranted," the district said.
Titania Jordan, chief parent officer at social media safety company Bark Technologies, told Fox News that the recent incident "is indicative of a larger problem affecting society: the use of AI for malicious purposes."
"Deepfakes-and specifically shared, fabricated, non-consensual intimate images and videos-aren't just like fun TikTok or Snapchat filters. These deceptively realistic media can have devastating real-life consequences for the victims who did not consent for their likeness to be used," she said.
"It's not just the potential harm from fake nudes, either; deepfake technology can also be used in scams, heists, and even to influence political behavior," Jordan continued.
The school districts noted that the misuse of AI in such incidents is not technically considered a crime, as the law has yet to catch up with the technology.