
Fake Images, Real Consequences: The Devastating Impact of AI-Generated Child Sexual Abuse Material


This week, the New York Times published a story about the proliferation of AI-generated CSAM (child sexual abuse material) and the difficulties that law enforcement and non-profits have in properly identifying and policing the images.

The issues? First, AI-generated CSAM may be protected under the First Amendment because real children are not used to create the images. Second, there is the overwhelming task of monitoring and tagging these images, a monstrous job that falls squarely on the shoulders of the National Center for Missing and Exploited Children (NCMEC).

“Whoever perceives that robots and artificial intelligence are merely here to serve humanity, think again.” – Alex Morritt

NCMEC cannot use cloud computing to help with the task, even though the cloud's computing capacity and speed power much of AI, because storing CSAM images in the cloud is illegal and leaves them ripe for hacking and exploitation. As a result, the people whose job it is to stop these images must work with their technological arms tied behind their backs.

How do we combat the problem? 

Understand the damage of CSAM, even if the images are AI-generated

Some First Amendment scholars have concluded that because real children are not depicted in AI-generated CSAM, it is a victimless crime. But let's not forget: not too long ago, people said the same thing about CSAM depicting real children. Child sexual abuse material is not benign. It depicts the sexual abuse, torture, and assault of innocent children.

How dangerous is CSAM? A 2022 study found that 42% of people who viewed online CSAM sought sexual contact with children. 58% of respondents in the same study said that viewing the materials made them more likely to commit the abuse in person. Another study found that, of those viewing CSAM, 70% were first exposed to it when they were under the age of 18.

It did not matter whether the images were AI-generated or not; the result was the same. Viewing the images made the viewer more likely to seek out children to abuse in person. These images are easily available, and young people are being exposed to the harm they can cause.

Demand robust and effective legislation to outlaw AI-generated CSAM

Now that we know the damage and risk that AI-generated images can pose, we need to reach out to legislators and demand new laws that will stop these AI images from being created. In response to those who say that this is a First Amendment issue, we say: The First Amendment does not protect your right to shout “FIRE!” in a crowded theater, because of the risk of injury it would cause. The risk of injury and damage from AI-generated CSAM is far greater.

Ask for increased funding for nonprofits and agencies that work to stop CSAM

Do your research into the nonprofits that help combat CSAM online. Organizations such as NCMEC and the Zero Abuse Project use cutting-edge technology to stop these crimes, in coordination with ICAC, the Internet Crimes Against Children Task Force. NCMEC and Zero Abuse need donations to continue this work, and ICAC needs funding to assist in the fight against CSAM. These organizations cannot do the work on their own.

The problem is huge, but not insurmountable. We all have a role to play in making sure that CSAM, whether it is generated by AI or depicts actual crimes committed against children, is stopped.