California launches new broadside against tech over harmful AI content

The bill goes after the creators and distributors of AI-generated depictions of child sexual abuse.

Source: Politico

SACRAMENTO, Calif. — A state lawmaker from Silicon Valley wants to crack down on AI-generated depictions of child sexual abuse as tech companies face growing scrutiny nationally over their moderation of illicit content.

A new bill from Democratic Assemblymember Marc Berman, first reported in California Playbook, would update the state’s penal code to criminalize the production, distribution or possession of such material, even if it’s fictitious. Among the backers is Common Sense Media, the nonprofit founded by Jim Steyer that for years has advocated for cyber protections for children and their privacy.

The legislation has the potential to open up a new avenue of complaints against social media companies, which are already battling criticisms that they don’t do enough to eradicate harmful material from their websites. It’s one of at least a dozen proposals California lawmakers will consider this year to set limits on artificial intelligence.

Berman’s bill builds on a bipartisan law signed by Gov. Gavin Newsom last year that requires social media platforms to do more to combat child sexual abuse material — and allows victims to sue the companies for deploying features that led to commercial sexual exploitation.

That bill passed despite opposition from the California Chamber of Commerce and a coalition of tech groups including TechNet and NetChoice, which represent companies like Google, Pinterest, TikTok and Meta, the parent company of Instagram and Facebook.

Those tech groups argued the law could inadvertently harm kids by creating a chilling effect in online spaces.

Berman’s bill goes after the creators and distributors of the AI images and doesn’t explicitly target the platforms, but the growing use of AI to generate such material could create more headaches for the tech industry. In just one quarter last year, Meta sent 7.6 million reports of child sexual abuse material to the National Center for Missing and Exploited Children.

AI-generated content depicting minors still relies on scraping information and images from real sexual abuse material and can lead to real-life abuse of children, Berman said. Some law enforcement agencies in California have already encountered the material, he added, but have been unable to prosecute people because it is digitally manufactured.

“You could argue that every AI-generated image actually victimizes thousands of real children,” Berman said. “Because they are a part of the formula that goes into creating that AI-generated image.”

Federal and state lawmakers have already raised alarms about the alleged failure of social media companies to remove sexually explicit content of minors from their websites. The Senate Judiciary Committee recently subpoenaed the CEOs of X (formerly known as Twitter), Snap and Discord to testify at an upcoming hearing on the sexual exploitation of children online.

And New Mexico Attorney General Raúl Torrez recently sued Meta over claims Instagram and Facebook proactively served sexually explicit images to kids and allowed human trafficking of minors.

California, often a leader in tech regulation, is joining states like Pennsylvania and Oklahoma that are considering similar bills related to AI-generated sexual exploitation.

Berman said there could be more action from the Legislature in the future aimed at reducing online exploitation of children.

“The first step is we have to make sure that the images are illegal,” he said, adding that California needs to do “much more” to hold every actor accountable, including tech companies and platforms.

By LARA KORTE