Bill aims to crack down on AI-generated child pornography - Palo Alto Online

Assembly Bill 1831 would eliminate distinction between real and fake when prosecuting creators of explicit videos with children

Source: Palo Alto Online, Gennady Sheyner

As artificial intelligence increasingly blurs the distinction between real and fake, state Assembly member Marc Berman, D-Menlo Park, plans to introduce a law that would make it illegal to use AI technology to create child pornography.

Known as Assembly Bill 1831, the new bill would criminalize the creation, distribution and possession of child pornography that is generated by artificial intelligence. It expands the definition of "obscene" in state code to include "representations of real or fictitious persons generated through use of artificially intelligent software or computer-generated means, who are, or who a reasonable person would regard as being real persons under 18 years of age, engaging in or simulating sexual content," according to the bill.

Berman announced his legislation at around the same time that Sen. Josh Becker, D-Menlo Park, proposed a bill that would require tech companies that create AI-generated images, videos and audio to embed watermarks in their content: digital patterns that identify the content as machine-made.

Both bills aim to create safeguards to protect consumers from the shady sides of generative AI. But while Becker's legislation aims to make it easier to separate the real from the fake, Berman's would dissolve that distinction when dealing with child pornography.

Berman said in an interview that he was encouraged to take up the cause by law enforcement officials, including the California District Attorneys Association. He cited a 2020 case in Ventura County in which three people were arrested and charged with distributing made-to-order sexually explicit images of children. They allegedly created these images using artificial intelligence but were released because there were no laws against the practice.

"Despite the fact that these were obscene images that depict young children, law enforcement wasn't able to prosecute them because it's artificial intelligence," Berman said.

The problem has only grown since then. An October 2023 report from the Internet Watch Foundation, a British research group, analyzed 20,254 AI-generated images posted on a dark web forum and found 11,108 of them likely to be criminal. Ultimately, the nonprofit group assessed 2,562 images as "criminal pseudophotographs" and 416 as "criminal prohibited images" under U.K. child protection laws.

The report noted that one of the defining features of AI is its "potential for rapid growth." The IWF report also concluded that there is now reasonable evidence that AI-generated child sexually explicit material has "increased the potential for the re-victimisation of known child sexual abuse victims, as well as for the victimisation of famous children and children known to perpetrators."

"The IWF has found many examples of AI-generated images featuring known victims and famous children," the report stated.

Berman also argued that even if the distributed images are fake, they could have a damaging effect in the real world by encouraging consumers to commit physical crimes against children.

"It would be bad enough if it's just happening in terms of imagery online. Research shows this can lead to physical sexual offenses against children," he said.

While Berman does not expect the bill to reach the Assembly floor until March, the proposal has already assembled a diverse cast of supporters that includes law enforcement officials, internet watchdogs and members of the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), a labor union.

Ventura County District Attorney Erik Nasarenko, co-sponsor of the bill, said the legislation "sends a clear message that our society will not tolerate the malicious use of artificial intelligence to produce harmful sexual content involving minors." Jodi Long, president of SAG-AFTRA Los Angeles Local, said in a statement that her organization is "deeply concerned by the threat of computer-generated and artificial intelligence generated child sexual abuse material, and this legislation is an important step to preventing these dangerous practices."

"Safeguarding our children from potential abusive or exploitative practices is imperative, and as new technologies present new challenges, we must do everything we can to ensure their safety in an ever-changing world," Long said.

James Steyer, founder and CEO of Common Sense Media, a nonprofit that reviews media content and provides guidelines on suitability for children, said the bill builds on AB 1394, which was signed into law last year to target online child sex trafficking. That bill, sponsored by state Assemblymember Buffy Wicks, D-Oakland, requires social media companies to provide a way for users to report child pornography and gives them between 30 and 60 days to verify the content and block it from reappearing.

"This new bill employs a similarly proactive approach, this time protecting kids and teens against online exploitation that is exacerbated by the rise of AI," Steyer said of Berman's AB 1381 in a statement. "California should take the lead when it comes to protecting kids and families from the negative impacts of this powerful new technology."

That said, Berman acknowledged he still has some work to do to convince his colleagues in the Legislature that the bill should become law. One colleague recently speculated that the bill's provisions may conflict with free speech rights. Berman strongly rejected that argument ("Even free speech has limitations," he said) and disputed the notion that because these images aren't real, they should not be considered criminal.

"The sexual exploitation of children must be illegal, full stop," Berman said in a statement. "It should not matter that the images were generated by AI, which is being used to create child sexual abuse material that is virtually indistinguishable from a real child."

The new legislation follows two prior Berman bills that target the dark side of AI. In 2019, he authored AB 730, which made it illegal to distribute "deepfake" videos, photos and audio featuring politicians within 60 days of an election. This followed a widely circulated fake video of a seemingly inebriated Rep. Nancy Pelosi.

Another Berman bill, AB 602, made it illegal to distribute deepfake pornography without the consent of the individual being depicted in these videos. That bill, like AB 1831, was strongly supported by SAG-AFTRA.


Jan 18, 2024