Judge Blocks California Laws Restricting AI in Political Campaigns
A federal judge has struck down two California laws aimed at curbing the use of artificial intelligence in political campaigns, ruling they violate free speech protections and conflict with federal law. Senior U.S. District Judge John Mendez ruled Friday against Assembly Bills 2839 and 2655, halting California’s attempt to become the first state to regulate AI-driven political content ahead of the 2026 elections.
AB 2839, which banned AI-generated “disinformation and deepfakes” in political ads during the 120 days before an election, was challenged by online creators and the satirical site The Babylon Bee. Judge Mendez wrote that while manipulated media poses real risks, “California cannot preemptively sterilize political content.” He emphasized that even harmful speech cannot be broadly censored by the state without violating constitutional protections.
AB 2655 required online platforms to remove such AI-generated content, but Mendez previously ruled it conflicted with Section 230 of the Communications Decency Act, which shields platforms like X and Rumble from liability for third-party content. Both platforms joined the legal challenge, arguing the law placed an unconstitutional burden on them.
Governor Gavin Newsom signed both bills in 2024, warning that generative AI could severely undermine public trust in democratic institutions. Supporters of the laws cited examples of fake robocalls, falsified videos of election officials, and manipulated images designed to deceive voters as reasons for urgent regulation.
Despite these concerns, the court deemed the laws too broad and legally flawed. The ruling represents a major roadblock for states seeking to regulate AI in elections and underscores the tension between emerging technology, free speech rights, and efforts to preserve electoral integrity.