AI-generated child sexual abuse imagery could overwhelm the tip line

A new wave of AI-created child pornography threatens to overwhelm authorities already held back by antiquated technology and laws, according to a new report released Monday by Stanford University's Internet Observatory.

Over the past year, new artificial intelligence technologies have made it easier for criminals to create explicit images of children. Now, Stanford researchers warn that the National Center for Missing and Exploited Children, a nonprofit that serves as the central coordinating agency and receives the majority of its funding from the federal government, lacks the resources to combat the growing threat.

The organization's CyberTipline, created in 1998, is the federal clearinghouse for all online reports of child sexual abuse material, or CSAM, and is used by law enforcement to investigate crimes. But many of the tips it receives are incomplete or riddled with inaccuracies, and its small staff has struggled to keep up with the volume.

“Almost certainly in the years to come, the CyberTipline will be flooded with highly realistic-looking AI content, making it even more difficult for law enforcement to identify real children who need saving,” said Shelby Grossman, one of the authors of the report.

The National Center for Missing and Exploited Children is at the forefront of a new battle against AI-generated sexual exploitation images, an emerging area of crime still being defined by lawmakers and law enforcement. Already, amid an epidemic of AI-generated deepfake nudes circulating in schools, some lawmakers are taking steps to ensure such content is deemed illegal.

AI-generated images of child sexual abuse are illegal if they depict real children or if images of real children were used to train the model, researchers say. But synthetically made images that contain no real children could be protected as free speech, according to one of the report's authors.

Public outrage over the proliferation of online images of child sexual abuse erupted at a recent congressional hearing where the chief executives of Meta, Snap, TikTok and Discord were questioned over the safety of kids online.

The Center for Missing and Exploited Children, which takes tips from individuals and from companies like Facebook and Google, has backed legislation to increase its funding and give it access to more technology. Stanford researchers said the organization granted them access to interviews with employees and to its systems for the report in order to highlight the vulnerabilities that need updating.

“Over the years, the complexity of reporting and the severity of crimes against children continue to evolve,” the organization said in a statement. “Therefore, leveraging emerging technological solutions throughout the CyberTipline process leads to protecting more children and holding offenders accountable.”

Stanford researchers found that the organization needs to change the way its tip line works so that law enforcement can determine which reports involve AI-generated content, and to ensure that companies reporting potentially illicit material on their platforms fill out the forms in full.

Fewer than half of all reports submitted to the CyberTipline were “actionable” in 2022, either because the companies reporting the abuse did not provide sufficient information or because the image in a report had spread rapidly online and been reported too many times. The tip line has an option to flag content as a potential meme, but many companies don't use it.

On a single day earlier this year, a record one million reports of child sexual abuse material flooded the federal clearinghouse. For weeks, investigators worked to respond to the unusual spike. Many of the reports turned out to involve an image from a meme that people were sharing across platforms to express outrage, not malicious intent. But the surge still consumed significant investigative resources.

This trend will get worse as AI-generated content accelerates, said Alex Stamos, one of the authors of the Stanford report.

“A million identical images is hard enough, a million separate images created by AI would break them,” Stamos said.

The Center for Missing and Exploited Children and its contractors cannot use cloud computing providers and are required to store images locally on computers. According to the researchers, that requirement makes it difficult to build and operate the specialized hardware used to create and train AI models for their investigations.

The organization typically does not have the technology to broadly use facial recognition software to identify victims and offenders. Much of the processing of reports is still manual.
