Supporters of the blocked legislation argued the measures were critical to protecting democracy. In a fact sheet promoting AB 2839, lawmakers warned California was heading into its first “generative AI election,” where disinformation could spread faster and more convincingly than ever before.
They cautioned that in just a few clicks, malicious actors could create fabricated videos of candidates taking bribes, fake recordings of election officials questioning voting machine security, or even robocalls in a governor’s voice misdirecting voters about polling sites. The document noted that conspiracy theorists, foreign governments, and even political candidates themselves were already distributing manipulated content across the globe.
The legislation sought to address these risks by prohibiting the distribution of AI-generated political content that misrepresented election procedures, candidates, or officials during the 120-day period leading up to an election, and for 60 days afterward in cases involving voting systems. It also required candidates who used AI to depict themselves saying or doing things they had not actually said or done to label such content as manipulated.
Lawmakers framed the proposals as temporary, narrowly tailored steps intended to preserve election integrity while remaining consistent with the First Amendment. They also included provisions allowing for quick legal action to halt violations.
With Judge Mendez’s ruling, California’s effort to limit deepfakes and AI-driven disinformation in campaigns has been effectively blocked, setting the stage for broader national debates over how, or whether, AI should be regulated in the political arena.