Overview:
California Governor Gavin Newsom recently signed three new laws targeting the use of AI-generated deepfakes in political campaigns. These laws were fast-tracked in response to a viral parody ad of Vice President Kamala Harris, sparking debate about free speech and the regulation of digital content. Critics argue the laws prioritize political protection over fairness and free expression.
Why It Matters: These laws impact the balance between free speech and digital content regulation, posing potential threats to open dialogue in America.
Who It Impacts: This will impact social media users, meme creators, political commentators, and anyone engaging in online political discourse.
In late July, a parody campaign ad mocking Vice President Kamala Harris went viral on X (formerly Twitter). The satirical video, posted by an anonymous user, garnered over 60 million views. Despite a clear “PARODY” label, its realistic AI-generated imitation of Harris's voice and its sharp criticism struck a chord with many viewers, especially on the right. Not everyone was amused, least of all California Governor Gavin Newsom, who vowed to take swift action to outlaw such AI manipulations in political content.
Just two days after the video was posted, Newsom promised to sign legislation making AI-driven manipulation in political ads illegal. On Tuesday, he followed through, enacting three separate laws aimed at limiting deepfake technology in elections. His actions drew sharp criticism from free speech advocates and digital content creators, who accused the governor of pushing politically motivated censorship. As one critic pointed out, California is notoriously slow to address pressing issues, yet it moved swiftly to regulate memes unflattering to Democratic figures.
The laws signed by Newsom are a direct response to what many Democrats saw as a damaging depiction of Harris. The first law focuses on banning “deceptive” deepfakes in political campaigns, mandating that any AI-altered content must include clear labeling. The second law goes further, requiring platforms to remove such content within 72 hours if flagged, or face legal consequences. The third law allows users to sue for damages if they feel harmed by deepfake content.
Supporters of the new laws argue they are necessary to maintain the integrity of elections, as deepfake technology becomes increasingly sophisticated and more accessible. However, critics warn that the laws’ vague definitions of what constitutes “deceptive” content open the door to potential abuse. According to the legislation, “materially deceptive” content is any AI-altered audio or visual media that would appear to a reasonable person as authentic. Many fear this subjective standard could be used to silence legitimate political speech or satire.
The irony, as many have noted, is that the Kamala Harris campaign itself has been accused of sharing misleading content online, with minimal consequences. In contrast, regular citizens and meme creators now face the threat of legal action if their content crosses the ill-defined line of being “deceptive.” One prominent example is the case of Douglas Mackey, who was sentenced to seven months in prison over memes aimed at Hillary Clinton voters during the 2016 election, which falsely suggested they could cast their ballots by text message.
This latest move by Newsom has sparked renewed debate over the role of AI in politics and free speech. Many warn that the overregulation of AI could stifle innovation and harm the U.S. economy, while countries like China continue to advance in this field. Newsom himself previously voiced concerns over heavy-handed AI regulation but seemed willing to make an exception when the political stakes involved protecting prominent Democrats.
The larger issue at hand is whether political leaders should be entrusted with determining what constitutes deceptive content, especially in a world where political truth is often subjective. By rushing these bills through, California lawmakers may have sacrificed a critical opportunity for an open, productive conversation about how to balance free speech with the risks posed by emerging technologies like AI.
These laws have broader implications for American democracy and free speech, especially in an era where social media plays a pivotal role in shaping political discourse. Rather than fostering open dialogue, such laws threaten to create an atmosphere of fear, where satire, criticism, and even legitimate political commentary could be stifled under the guise of protecting the public from deception.