AI Regulation (Taylor’s Version)

Anshumi Jhaveri 

Edited by Zac Krause and Vedanth Ramabhadran


Content warning: mentions of sexual assault and rape.

AI, or Artificial Intelligence, has taken the world by storm. From writing essays on any topic to creating art, there seem to be few things AI can’t do. AI refers to technology that enables computers and machines to simulate human intelligence and problem-solving. It rose to fame with the launch of ChatGPT in late 2022, a chatbot that can accomplish a wide range of tasks through conversational dialogue and commands [1]. AI is a rapidly evolving technology that is improving our world in many ways, but its rise has also brought an array of ethical concerns, from the threat of misinformation to the elimination of jobs. It took an incident involving the world’s most famous popstar to make much of the public aware of the threats AI poses.

In January 2024, sexually explicit images of world-renowned popstar Taylor Swift were created with artificial intelligence and spread all over the Internet. They originated on 4chan, an anonymous image-based bulletin board where anyone can share images and comments, and one notorious for hate speech, conspiracy theories, and increasingly racist and profoundly offensive content [2]. Recently, much of that content has been created with artificial intelligence tools. The images of Swift were created and shared as part of a viral challenge on the platform in which users competed to produce violent and vulgar images of famous women, attempting to bypass the safeguards and restrictions built into image generators [2]. Given Swift’s current level of fame, from her record-shattering Eras Tour to her mission to reclaim her masters, it is no surprise that the photos spilled onto other platforms such as X (formerly known as Twitter), where they have since been viewed millions of times. The photos were incredibly sexually explicit, with many going as far as depicting graphic acts of sexual assault and rape. They sparked outrage worldwide, showing that women who face very real threats of rape and sexual assault on a daily basis are not free from those harms on the internet either. Lax enforcement by both social media companies and law enforcement allows this to happen, disproportionately affecting women and girls [3].

The photos were so lewd and widespread that X had to block all searches related to the singer for around a week, as the images dominated the results. OpenAI stated that the images of Swift were not generated using ChatGPT or any of its applications, and that it has protections in place to refuse such requests [4]. Software-generated fake pornography has been around for almost a decade, affecting everyone from streamers to government officials to celebrities [2]. The lack of regulation surrounding AI leaves most victims with no legal pathway to justice, and few of them have a massive, devoted fan base like the Swifties, who worked overtime to drown out the images and launch a “Protect Taylor Swift” campaign [2]. Swifties and others worldwide called for stronger protections against deepfakes and AI-generated images [2]. The outcry eventually reached the White House, with Press Secretary Karine Jean-Pierre calling the situation “alarming” and urging additional legislation on the matter [3].

As a result, a group of bipartisan U.S. lawmakers introduced the “No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act,” or the “No AI FRAUD Act,” aiming to create federal protections against AI abuse while upholding each American’s First Amendment rights online [5]. Several lawmakers have said they hope it will give individuals a voice against AI-generated fakes and forgeries, protections that currently either do not exist or are not robust enough. Deepfake pornography, such as the content generated of Swift, is often described as image-based sexual abuse. As AI continues to boom, it is imperative that people can fight back against AI-generated impersonations of them in civil court and recover damages. The hope is that if this kind of privacy violation can happen to someone as powerful as a billionaire like Swift, then other women, children, and people who are far more vulnerable will gain the means to fight back as well.

It’s easier said than done, though. A federal crackdown may not solve the problem, because laws that criminalize sexual deepfakes or other abusive uses of AI do not address a major issue: who should be charged with the crime [6]? It is quite unlikely that the creators of these images will simply come forward and admit to their actions. Additionally, forensics cannot always prove which software created specific content, making it even harder to determine who or what is at fault [6]. Even if law enforcement can identify an image’s origin, they will likely run into the hurdle of Section 230, the provision that shields websites from liability for what their users post [7]. All of this raises First Amendment concerns as well: regulations that are too broad could be said to infringe not only on creators’ First Amendment rights but also on those of the journalists and media outlets reporting on the deepfakes [7].

As of right now, there is no clear solution to the growing problem of AI-generated content. Policies and legislation should be adopted to promote social responsibility in the use of AI, but it is difficult to define the boundaries of responsibility, fault, and rights in a technological landscape that is growing and changing so rapidly. The Biden Administration has proposed the idea of digital watermarks, which can flag AI-generated content as synthetic [7]. This would not necessarily eliminate deepfakes, but it might make it easier to slow the spread of harmful content and perhaps remove it. The true challenge is finding the right intersection of legal and technical precedent to address this issue. Whether it’s Taylor Swift or a not-famous Swiftie like you and me, we all deserve access to a safe internet.
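For the technically curious, here is a minimal sketch in Python of what an invisible watermark could look like, using the Pillow imaging library. It hides an “AI-GENERATED” tag in the least-significant bits of an image’s red channel and reads it back. This is a toy illustration of the general idea only, not the scheme the Administration has proposed, and the file names are hypothetical.

```python
# Toy invisible watermark: hide a short tag in the least-significant
# bits of an image's red channel. A sketch of the concept, not a
# production provenance scheme; file names below are hypothetical.
from PIL import Image

TAG = "AI-GENERATED"

def embed_tag(in_path: str, out_path: str, tag: str = TAG) -> None:
    img = Image.open(in_path).convert("RGB")
    pixels = img.load()
    # Encode the tag as a bit string, one bit per pixel.
    bits = "".join(f"{byte:08b}" for byte in tag.encode("ascii"))
    if len(bits) > img.width * img.height:
        raise ValueError("image too small to hold the tag")
    for i, bit in enumerate(bits):
        x, y = i % img.width, i // img.width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite red LSB
    img.save(out_path, "PNG")  # lossless format preserves the bits

def read_tag(path: str, length: int = len(TAG)) -> str:
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    bits = [str(pixels[i % img.width, i // img.width][0] & 1)
            for i in range(length * 8)]
    data = bytes(int("".join(bits[i:i + 8]), 2)
                 for i in range(0, length * 8, 8))
    return data.decode("ascii")

# embed_tag("generated.png", "tagged.png")
# print(read_tag("tagged.png"))  # -> "AI-GENERATED"
```

A watermark this simple is trivially destroyed by cropping or lossy re-compression, which is exactly why more tamper-resistant approaches, such as cryptographically signed provenance metadata, are being pursued alongside legislation.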

 

[2] Tiffany Hsu, Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says, The New York Times (Feb. 5, 2024), https://www.nytimes.com/2024/02/05/business/media/taylor-swift-ai-fake-images.html

[3] Press Briefing by Press Secretary Karine Jean-Pierre, NSC Coordinator for Strategic Communications John Kirby, and National Climate Advisor Ali Zaidi, The White House (Jan. 26, 2024), https://www.whitehouse.gov/briefing-room/press-briefings/2024/01/26/press-briefing-by-press-secretary-karine-jean-pierre-nsc-coordinator-for-strategic-communications-john-kirby-and-national-climate-advisor-ali-zaidi/

[4] Kate Gibson, Fake and graphic images of Taylor Swift started with AI challenge, CBS News (Feb. 5, 2024).

[5] Leah Sarnoff, Taylor Swift and No AI Fraud Act: How Congress plans to fight back against AI deepfakes, ABC News (Jan. 30, 2024).

[6] Brian Contreras, Tougher AI Policies Could Protect Taylor Swift—And Everyone Else—From Deepfakes, Scientific American (Feb. 8, 2024).

[7] Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996, U.S. Department of Justice.

