The US Senate has proposed the DEFIANCE Act in response to the increasing spread of AI-created, nonconsensual explicit imagery, exemplified by recent deepfake incidents involving Taylor Swift. The bill aims to give victims legal recourse by creating a federal civil cause of action against those who produce or distribute such content.
The United States Senate is currently considering the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, commonly known as the DEFIANCE Act. This bipartisan bill was introduced in response to the growing concern over nonconsensual, sexually explicit “deepfake” images and videos, particularly those created using artificial intelligence (AI). The introduction of this legislation was significantly propelled by recent incidents involving AI-generated explicit images of the singer Taylor Swift, which spread rapidly across social media platforms.
The DEFIANCE Act aims to provide a federal civil remedy for victims who can be identified in these “digital forgeries.” The legislation defines this term as visual depictions created using software, machine learning, AI, or other computer-generated means to falsely appear authentic. The Act would allow victims to sue those who create, possess with intent to distribute, or distribute such nonconsensual AI-generated explicit content. It would also set a ten-year statute of limitations, running from when the person depicted in the nonconsensual deepfake content discovers the images or turns 18.
The need for such a law is underscored by a 2019 study which found that 96% of deepfake videos were nonconsensual pornography, often used to exploit and harass women, particularly public figures, politicians, and celebrities. The widespread distribution of these deepfakes can have severe consequences for victims, including job loss, depression, and anxiety.
Currently, there is no federal law in the United States specifically addressing digitally forged pornography modeled on real people, although some states, such as Texas and California, have their own legislation. Texas criminalizes the creation of such content, with potential jail time for offenders, while California allows victims to sue for damages.
The bill’s introduction comes at a time when the issue of online sexual exploitation, especially involving minors, is receiving significant attention. The Senate Judiciary Committee, in a hearing entitled “Big Tech and the Online Child Sexual Exploitation Crisis,” is examining the role of social media platforms in the spread of such content and the need for legislative action.
This legislative initiative highlights the growing concern over the misuse of AI technology to create deepfake content and the need for legal frameworks to protect individuals from such exploitation and harassment.