Taylor Swift’s Deepfake Pornography Sparks Urgent Calls for US Legislation

New York – Fake explicit images of pop singer Taylor Swift have been circulating online, sparking renewed calls to criminalize the use of artificial intelligence (AI) to create convincing but fake explicit imagery. The images of Swift have been widely shared on social media platforms, including X and Telegram, and have been viewed by millions of people. One image hosted on X was seen 47 million times before it was taken down.

In response to the online spread of deepfake pornographic images, US politicians, including Democratic congresswoman Yvette D Clarke and Republican congressman Tom Kean Jr, have called for legislation to combat this disturbing trend. Clarke emphasized that deepfake technology has been used to target women without their consent for years, and that the proliferation of AI has made creating such content easier and cheaper.

While some individual US states have legislated against deepfakes, there is a growing push for federal action. In May 2023, Democratic congressman Joseph Morelle introduced the Preventing Deepfakes of Intimate Images Act, which would make it illegal to share deepfake pornography without consent. Morelle argued that such images and videos can cause significant harm, particularly to women. The proposed legislation is still pending.

Congressman Kean Jr echoed the need for safeguards against deepfakes, stressing that the rapid advancement of AI technology has outpaced the establishment of necessary regulations. To address the issue, Kean Jr has co-sponsored Morelle's bill and introduced his own legislation, the AI Labeling Act, which would require all AI-generated content, even relatively benign applications such as customer-service chatbots, to be labeled as such.

Swift has not made a public statement regarding the explicit images, and her US publicist has not responded to requests for comment.

The spread of deepfakes is not limited to Swift or other high-profile women. A 2019 study cited in the proposed US legislation found that 96% of deepfake video content was non-consensual pornographic material. The technology poses a significant threat because AI tools can now generate entirely new and highly convincing explicit images from text prompts alone.

The UK has already acted on this front: sharing non-consensual deepfake pornography was made illegal under the Online Safety Act, which became law in 2023. The provision, first announced by the government in late 2022, is intended to protect individuals from manipulated intimate photos and explicit imagery shared without consent.

The proliferation of deepfakes demands immediate action to protect individuals, particularly women, from the emotional, financial, and reputational harm they can cause. As the technology continues to advance rapidly, establishing regulations to prevent further exploitation and abuse is essential.