Source: Sacramento Bee
AI and political deepfakes are increasingly affecting elections around the world. In 2022, Russian hackers circulated an AI-manipulated video showing Ukrainian President Volodymyr Zelenskyy ordering his forces to surrender. In Taiwan in 2023, fabricated audio of a presidential candidate endorsing an opponent was created and disseminated. In Northern Ireland in 2022, an AI-generated sexually explicit video of Cara Hunter, a candidate for the Northern Ireland Assembly, went viral; Hunter prevailed in her race but later called it “the most horrific and stressful time of my entire life.”
AI-powered disinformation is not new. And we can expect more — and much worse — as the 2024 general election heats up.
That’s why I, Asm. Marc Berman, co-authored Assembly Bill 2655, the Defending Democracy from Deepfake Deception Act of 2024. The bill would require the largest online platforms, during a tight window around Election Day, to block the posting of content they know to be materially deceptive election deepfakes, including content that targets candidates, election officials and poll workers. Outside of that window, online platforms would have to label AI election disinformation as what it is: fake.
These protections would apply to all elections occurring in California. The bill has now passed both the Senate and the Assembly and awaits Gov. Gavin Newsom’s signature. AI deepfakes have infiltrated our political discourse and social media feeds, making it nearly impossible for many voters to know which images, audio or video they can trust. Powerful, easy-to-access tools are newly available to candidates, conspiracy theorists, foreign states and online trolls who want to deceive voters and undermine trust in our elections. Any conspiracy theorist can now create fake evidence of an allegedly rigged election with just a few clicks and a few dollars.
Any agent of chaos now has powerful new tools at their disposal that require little to no expertise. While deepfakes targeting candidates are a serious danger, just as problematic are deepfakes that attack trust in, and the integrity of, our election systems. Imagine false audio of your county Registrar of Voters “caught on tape” saying that their voting machines have been hacked. Or a fake news website that looks like a local newspaper (the Russians have already set up a network of these) carrying an AI-written story about poll workers arrested for accepting bribes, accompanied by incriminating security video. All of this is simple to create with current AI tools. And there’s a particular threat for voters already on the edges of our democracy: For centuries, people have tried to disenfranchise marginalized voters and make it harder for them to exercise their right to vote.
AI now offers new, crafty ways to target communities of color, immigrant communities, young voters and older voters with disinformation. As just one of many examples, see the numerous online deepfakes of former President Trump posing with enthusiastic but entirely fake Black supporters, designed to create a false narrative in the Black community. The California public is demanding solutions: A November 2023 poll by Berkeley IGS found that 84% of California voters are concerned about digital threats to elections, and 73% think state government has a “responsibility” to take action. The Defending Democracy from Deepfake Deception Act of 2024 puts the responsibility on the online platforms. If they are going to host and profit from our democratic discourse, they should also bear minimal, basic responsibilities to maintain it.
The responsibility to help protect our democracy requires only reasonable additional effort from the online platforms, since they are already required to review postings to protect us from child sexual abuse material. Because AB 2655 is narrowly tailored to serve a compelling government interest — protecting our elections from disinformation designed to defraud our electorate and undermine faith in our democracy — it does not violate the First Amendment. Our presidential election may be so close that a single political deepfake on the eve of Election Day swings a few thousand votes, enough to change the outcome. Political deepfakes may also target our local elections, where one or two votes can be the difference. AB 2655 is an assertive, front-end solution that keeps AI disinformation out of our democratic discourse for a short, critical period around our elections.
Asm. Marc Berman represents the 23rd Assembly District, encompassing parts of the San Francisco Peninsula and Silicon Valley. Jonathan Mehta Stein is the executive director of California Common Cause and the co-founder and board chair of the California Initiative for Technology and Democracy.