SACRAMENTO – Today Governor Newsom signed Assembly Bill 730, authored by Assemblymember Marc Berman (D-Palo Alto), which will help shield voters from misinformation about candidates before the 2020 election.
“Voters have a right to know when video, audio, and images that they are being shown, to try to influence their vote in an upcoming election, have been manipulated and do not represent reality,” said Assemblymember Berman, Chair of the Assembly Elections and Redistricting Committee. “In the context of elections, the ability to attribute speech or conduct to a candidate that is false – that never happened – makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters. In the face of total inaction at the federal level, California must step up to protect our more than 20 million voters. AB 730 will help deter nefarious deepfakes by holding accountable the bad actors who intentionally attempt to injure a candidate’s reputation or deceive voters into believing a candidate said or did something they never said or did.”
AB 730 prohibits a person, committee, or entity, within 60 days of an election, from distributing, with actual malice, materially deceptive audio or visual media of a candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating that it has been manipulated. The bill authorizes a candidate to seek injunctive or other equitable relief and general or special damages, if the media does not include the required disclosure.
Earlier this year, a manipulated video of House Speaker Nancy Pelosi, in which her speech was altered to make her appear inebriated, was very low quality and clearly fake, yet it was still viewed over three million times online. Deepfakes, which are hyper-realistic and therefore more challenging to distinguish from legitimate media, have the potential to be even more disruptive. By blurring truth and fiction, deepfakes make it easier to pass off fake events as real and to dismiss real events as fake – a phenomenon dubbed “the liar’s dividend.”
The Carnegie Endowment for International Peace notes that deepfakes have the potential to incite violence, alter election outcomes, and undermine diplomacy. As reported by The Washington Post, “a video of Gabon’s long-unseen president Ali Bongo, who was believed in poor health or already dead, was decried as a deepfake by his political opponents and cited as the trigger, a week later, for an unsuccessful coup by the Gabonese military.”
In January, then-US Director of National Intelligence Dan Coats warned that deepfakes will probably be among the tactics used to disrupt the 2020 election. Earlier this year, the US House of Representatives Permanent Select Committee on Intelligence, led by Congressman Adam Schiff, convened a hearing on the National Security Challenge of Artificial Intelligence, Manipulated Media, and “Deepfakes,” noting that “deepfakes raise profound questions about national security and democratic governances, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens.” Senator Marco Rubio has also said that he expects deepfakes to be used in “the next wave of attacks against American and Western Democracies.”
In anticipation of the expanded use of this technology, UC Berkeley professor and image-forensics expert Hany Farid is developing a system to detect deepfakes of Donald Trump and the 2020 Democratic presidential candidates. Additionally, last month, a new law in Texas went into effect, making it a misdemeanor, punishable by up to a year in jail and/or a fine of up to $4,000, to create and distribute a deepfake video within 30 days of an election with the intent to injure a candidate or to influence the result of an election.
Contact: Kaitlin Curry, (916) 319-2024