Last week, the internet was ablaze with explicit, seemingly AI-generated deepfake pornography featuring the likeness of Taylor Swift. The nonconsensual images spread rapidly, sparking widespread outrage and prompting urgent discussion of the need for federal protections against AI abuse. The incident has thrust the potential dangers of artificial intelligence into the spotlight, demanding attention from the public and lawmakers alike.
The fabricated images of Swift were widely circulated across social media platforms, triggering immediate condemnation from her fanbase, expressions of “alarm” from the White House, and renewed calls for legislative action from figures like Rep. Joe Morelle. Morelle, among others, is pushing for federal legislation to criminalize the nonconsensual sharing of digitally altered explicit images, proposing penalties including jail time and substantial fines.
One widely shared post on X (formerly Twitter) showcasing screenshots of the fabricated images reportedly garnered over 47 million views before the account was suspended. X took further action by temporarily blocking all searches for Taylor Swift on the platform until Tuesday in an attempt to curb the spread.
Searches for Swift’s name on X have since been reinstated. Joe Benarroch, X’s head of business operations, stated that while searches are back online, the platform remains vigilant and committed to removing any further attempts to disseminate the harmful content.
Despite X’s efforts, the images have continued to surface on other social media sites and online platforms, highlighting the pervasive challenge of controlling the spread of deepfake content once it’s released. White House Press Secretary Karine Jean-Pierre described the situation as “alarming,” echoing the broader concern about the rapid proliferation of these deceptive images.
In response to growing concerns about AI abuse, a bipartisan group of U.S. House lawmakers introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act on January 10th. This act is intended to establish a federal framework to safeguard individuals’ rights to their likeness and voice against AI-generated forgeries.
Rep. María Elvira Salazar, a lead sponsor of the bill, emphasized the direct link between the legislation and incidents like the Taylor Swift deepfakes. She stated, “What happened to Taylor Swift is a clear example of AI abuse. My bill, the No AI FRAUD Act, will punish bad actors using generative AI to hurt others — celebrity or not. Everyone should be entitled to their own image and voice and my bill seeks to protect that right.”
The rise of user-friendly AI tools has democratized the creation of deepfakes. Once a complex technical undertaking, generating fabricated images, videos, and audio is now readily accessible through consumer apps and websites, part of the same wave of generative AI that made tools like ChatGPT mainstream and that has fueled a burgeoning online industry. This ease of access, while offering creative potential, also carries the risk of misuse, as seen in the deepfake pornography targeting figures like Taylor Swift, often categorized as image-based sexual abuse.
The No AI FRAUD Act: Key Provisions
The No AI FRAUD Act proposes to establish federal jurisdiction to:
- Protect Likeness and Voice: Reaffirm that an individual’s likeness and voice are protected, granting people control over how they are used.
- Empower Individuals: Enable individuals to take action against those who create and distribute AI deepfakes without consent.
- Balance Rights: Ensure these protections are balanced against First Amendment rights related to free speech and technological innovation.
Rep. Madeleine Dean expressed her solidarity with Taylor Swift and other victims of AI deepfakes, emphasizing the vulnerability of all individuals in the face of this technology. “My thoughts are with Taylor Swift during this immensely distressing time. And my thoughts are with every other person who has been victimized by harmful AI deepfakes,” Dean stated. “If this deeply disturbing privacy violation could happen to Taylor Swift — TIME’s 2023 person of the year — it is unimaginable to think how helpless other vulnerable women and children must also feel.”
Dean further emphasized the urgency of legislative action, stating, “at a time of rapidly evolving AI, it is critical that Congress creates protections against harmful AI. My and Rep. Maria Salazar’s No AI FRAUD Act is intended to target the MOST harmful kinds of AI deepfakes by giving victims like Taylor Swift a chance to fight back in civil court.”
Rep. Morelle hopes the incident involving Taylor Swift will galvanize support for the No AI FRAUD Act. His spokesperson noted the bill’s potential to address Swift’s situation with both criminal and civil penalties, suggesting the high-profile nature of this case could be a catalyst for legislative progress.
State-Level Responses and the Need for Federal Action
While 17 states have enacted 29 bills related to AI regulation since 2019, the patchwork nature of these laws leaves gaps in protection, particularly concerning deepfake pornography. State laws vary significantly, and some may not adequately address the nuances of AI-generated image abuse.
Taylor Swift’s own residences illustrate these inconsistencies. New York offers criminal and civil recourse for deepfake victims, including a ban on distributing AI-generated pornographic images without consent, while Tennessee has no explicit law against deepfake pornography. California passed a law in 2020 allowing victims of nonconsensual deepfake pornography to sue perpetrators, but the legal landscape remains complex and varied across states.
Gov. Bill Lee of Tennessee recently proposed the ELVIS Act to strengthen AI protection in the state, highlighting the ongoing evolution of state-level responses. Rep. Morelle emphasized the necessity of federal legislation to provide consistent and comprehensive protection against AI abuse, stating, “Now it is apparent we must take immediate action to stop the abuse of AI technology by providing a federal law to empower individuals being victimized, and end AI FRAUD.”
Taylor Swift has not yet made a public statement regarding the AI deepfake images. The incident, however, serves as a stark reminder of the potential harms of AI misuse and the urgent need for robust legal and ethical frameworks to govern this rapidly advancing technology.