SPONSOR:

Rep. Romer & Rep. Longhurst & Sen. Townsend & Sen. Poore

Reps. Neal, Ramone, Baumbach; Sens. Hansen, Huxtable, Pettyjohn

HOUSE OF REPRESENTATIVES

152nd GENERAL ASSEMBLY

HOUSE SUBSTITUTE NO. 1

FOR

HOUSE BILL NO. 316

AN ACT TO AMEND TITLE 15 OF THE DELAWARE CODE RELATING TO DEEP FAKES IN ELECTIONS.

BE IT ENACTED BY THE GENERAL ASSEMBLY OF THE STATE OF DELAWARE:

Section 1. Amend Chapter 51, Title 15 of the Delaware Code by making deletions as shown by strike through and insertions as shown by underline as follows:

§ 5145. Use of deep fake technology to influence an election; penalty; injunctive relief.

(a) As used in this section:

(1) “Candidate” means as defined in § 8002 of this title.

(2) “Deep fake” means synthetic media that depicts a candidate or political party with the intent to injure the reputation of the candidate or party or otherwise deceive a voter, and where one or both of the following apply:

a. The synthetic media appears to a reasonable person to depict a real individual saying or doing something that did not actually occur.

b. The synthetic media gives a reasonable person a fundamentally different understanding or impression of the appearance, action, or speech than the person would have from an unaltered, original version of the image, audio recording, or video recording.

(3) “Depicted individual” means an individual in a deep fake who appears to be engaging in speech or conduct in which the individual did not engage.

(4) “Synthetic media” means an image, an audio recording, or a video recording of an individual’s appearance, speech, or conduct that has been created or intentionally manipulated with the use of generative adversarial network techniques or other digital technology in a manner to create a realistic but false image, audio, or video.

(b) Except as provided in subsections (c) and (d) of this section, it is unlawful for a person to distribute a deep fake or enter into a contract or other agreement to distribute a deep fake if the person knows or reasonably should know that the item being distributed is a deep fake and the following elements are present:

(1) The distribution takes place within 90 days before an election.

(2) The distribution is made without the consent of the depicted individual.

(c) (1) It is not a violation of subsection (b) of this section if the synthetic media includes a disclosure stating: “This (image/video/audio) has been altered or artificially generated.”

(2) For visual media, the text of the disclosure must appear in a size easily readable by the average viewer and no smaller than the largest font size of other text appearing in the visual media. If the visual media is a video, the disclosure must appear for the duration of the video.

(3) For audio-only media, if no visual disclosure is feasible, the disclosure must be read in a clearly spoken manner and at a pitch that can be easily understood by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than 2 minutes in length, interspersed within the audio at intervals of not more than 2 minutes each.

(d) The prohibition in subsection (b) of this section does not apply to any of the following:

(1) A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, mobile application, Internet website, or streaming platform that broadcasts a deep fake prohibited by this section as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of bona fide news events, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that there are questions about the authenticity of the deep fake, or in cases where federal law requires broadcasters to air advertisements from legally qualified candidates.

(2) A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, mobile application, Internet website, or streaming platform when it is paid to broadcast a deep fake and has made a good faith effort to establish that the depiction is not a deep fake, or in cases where federal law requires broadcasters to air advertisements from legally qualified candidates.

(3) An Internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an Internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes a deep fake prohibited by this section, if the publication clearly states that the deep fake does not accurately represent the speech or conduct of the depicted individual.

(4) A deep fake that constitutes satire or parody.

(e) (1) This section does not restrict the ability of a person to detect, prevent, respond to, or protect against security incidents, identity theft, fraud, harassment, malicious or deceptive activity, or illegal activity, or to preserve the integrity or security of systems, or to investigate, report, or prosecute those responsible for any such action.

(2) This section must be construed to be consistent with the Communications Decency Act of 1996, 47 U.S.C. § 230. Nothing in this section may be construed to impose liability on an interactive computer service, as defined in the Communications Decency Act of 1996, 47 U.S.C. § 230(f)(2), for content provided by another person.

(f) (1) A violation of subsection (b) of this section is a class B misdemeanor, except as set forth in paragraphs (f)(2) and (3) of this section.

(2) A violation of subsection (b) of this section is a class A misdemeanor if the person commits the violation with the intent to cause violence or bodily harm.

(3) A violation of subsection (b) of this section is a class E felony if a person commits the violation within 5 years of 1 or more prior convictions under this section.

(g) A candidate depicted in a deep fake in violation of subsection (b) of this section may bring an expedited action for injunctive relief and damages in the Court of Chancery. The Court may also award a prevailing party reasonable attorney’s fees and costs.

Section 2. If a provision of this Act or the application of this Act to a person or circumstance is held invalid, the provisions of this Act are severable if the invalidity does not affect the other provisions of this Act that can be given effect without the invalid provision or the application of this Act that can be given effect without the invalid application.

SYNOPSIS

This Act creates a new elections crime: “use of deep fake technology to influence an election.” Under this statute it would be a crime to distribute, within 90 days of an election, a “deep fake,” that is, an audio or visual depiction that has been manipulated or created with generative adversarial network techniques or other digital technology, with the intent of harming a party or candidate or otherwise deceiving voters. It is not a crime, nor is there a penalty, if the altered media contains a disclaimer stating “This (image/video/audio) has been altered or artificially generated.” There is also provision for civil injunctive relief for a candidate depicted in a deep fake. There are various exceptions to protect speech, expression, and media rights. A violation of this statute is a class B misdemeanor unless the deep fake is intended to cause violence or bodily harm, in which case it is a class A misdemeanor, or unless it is a repeat offense within 5 years, in which case it is a class E felony. Pursuant to § 5101 of Title 15, all offenses under this section are heard in Superior Court.

This Substitute differs from the original House Bill No. 316 in that it adds the caveat “if no visual disclosure is feasible” in relation to disclosures for audio-only media. It adds mobile applications, Internet websites, and streaming platforms to the exceptions under (d)(1) and (d)(2). It explicitly adds a carveout related to the liability shield of Section 230 of the Communications Decency Act. It also states that the Act is not intended to restrict the ability of a person to detect, prevent, respond to, or protect against security incidents, identity theft, fraud, harassment, malicious or deceptive activity, or illegal activity, or to preserve the integrity or security of systems, or to investigate, report, or prosecute those responsible for any such action.