Berman Introduces Bill to Prevent Spread of Malicious Deepfakes

For immediate release:

SACRAMENTO - Assemblymember Marc Berman (D-Palo Alto) has introduced legislation to criminalize the malicious creation and distribution of deepfakes: hyper-realistic video or audio recordings of someone appearing to say or do something that the individual did not say or do.

“Deepfakes distort the truth, making it harder to distinguish real events and actions from fiction and fantasy,” said Assemblymember Berman. “While California is often the leader in advancing new technology, we cannot allow deceptive video and audio recordings to be weaponized against individual people, communities, or the public at large. Deepfakes are already being used to harass women, and they have the ability to cause further harm by inciting violence, manipulating elections, and undermining national security. Assembly Bill 602 will prevent the spread of deepfakes that are created with malicious intent.”

If enacted, Assembly Bill 602 would make it a misdemeanor to create or distribute a deepfake that is “likely to deceive any person who views the recording” or that is “likely to defame, slander, or embarrass the subject of the recording.” Under the bill, which is modeled after California’s revenge porn law, doing so would be punishable by imprisonment in a county jail for up to one year, a fine not exceeding $2,000, or both imprisonment and a fine. Deepfakes that are “satire or parody, or that otherwise, because of content, context, or a clear disclosure, would not cause someone to believe that it is real,” would be protected under the bill.

Deepfakes are created using artificial intelligence technology that seamlessly superimposes the movements and words of one person onto another. As a result, deepfakes make it harder to combat the spread of disinformation and potentially easier to dismiss real events as fake. This free, easily accessible technology is also disproportionately being used to humiliate women by scraping photos of women’s faces from the internet and manipulating them into sexually explicit material.

In response to the growing threat of deepfakes, the Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), has tasked researchers across the country with developing ways to identify deepfakes. Researchers at the University of Colorado Denver are currently trying to create convincing deepfakes for use by companies, such as SRI International in Menlo Park, California, to develop technology that can in turn detect deepfakes.

A similar bill, the Malicious Deep Fake Prohibition Act of 2018, introduced by Senator Ben Sasse (R-NE), expired shortly after being introduced in Congress last December. Under that bill, the creation and distribution of a deepfake that “would facilitate criminal or tortious conduct” would have been punishable by a fine or imprisonment of up to two years, or up to ten years if the deepfake “could be reasonably expected to affect the conduct of any administrative, legislative, or judicial proceeding” or to incite violence.


Contact: Kaitlin Curry, (916) 319-2024