The search term “Taylor Swift AI” began trending on X (formerly Twitter) earlier this week after explicit AI-generated images of Taylor Swift began to circulate. One post, shared by a verified user on the platform, stayed up for roughly 17 hours and garnered more than 45 million views before it was removed and the account suspended.
Although Swift’s fans quickly mobilized to report retweets and shares of the images and flooded search results with authentic photos and videos of the singer, copies of the fake images continued to circulate online. Yahoo Entertainment has chosen not to display or link to any of them.
The problem with the AI images is not a lack of media literacy or the public’s inability to tell reality from a deepfake, manipulated media made to look or sound like someone else; the Swift images look almost exaggeratedly artificial.
The larger concern is the need for social platforms and legislation to clamp down on deepfake p**n generated by artificial intelligence.
Although X owner Elon Musk has shrunk the platform’s content moderation team since taking over in 2022, X still enforces policies prohibiting nonconsensual nudity and synthetic or manipulated media. On Jan. 26, Musk seemingly addressed the issue in a post without mentioning Swift by name.
It stated, “Posting images of non-consensual nudity (NCN) is unequivocally forbidden on X, and we maintain a strict policy of zero tolerance towards such content.”
According to a report from 2023, 98 percent of all deepfake videos on the internet are p**nographic, and 99 percent of the targets are female.
Emily Poler, an intellectual property litigator based in Brooklyn, New York, explained that even though the image does not depict Taylor Swift herself, it contains enough identifying characteristics to make clear that it is meant to be her.
Poler told Yahoo Entertainment, “Generally speaking, the laws only address the commercial exploitation of a person’s image or likeness.” She pointed to a 1992 case in which Wheel of Fortune host Vanna White sued Samsung over a commercial featuring a feminized robot posed in front of a game board resembling the Wheel of Fortune set.
“No, it is not Vanna White; however, a strapless evening gown and a blonde hairdo were affixed to it before it was placed in front of a board of letters,” Poler stated. “We are all aware that [the robot] is intended to be Vanna White.”
Jason Sobel, a partner at Brown Rudnick with more than two decades of experience litigating copyright and intellectual property cases, agreed, telling Yahoo Entertainment that an exact replica of the person is not necessary.
“[Swift is] a public figure, she has the right to control the exploitation of her identity, and that includes her name, her likeness [and] voice,” Sobel said. “It doesn’t need to be a perfect facsimile of a person to be misappropriating and violating someone’s exclusive right to exploit their own identity.”
While it is unknown whether the creator of the Swift AI photos sought to profit from their notoriety through product sales, Poler said Swift’s attorneys could investigate whether they did.
Sobel said that to hold X liable for profiting from traffic driven by the photo, litigants would need concrete evidence that the platform deliberately delayed removing the policy-violating content in order to benefit from that traffic.
“That would be difficult to prove,” he commented.
Swift and her team have not publicly stated whether they will take legal action against the creator of the deepfake images. Swift’s publicist did not respond to Yahoo Entertainment’s request for comment.
Part of the social media response to the AI Swift photos downplayed any legitimate concerns, arguing that Swift, with an estimated net worth of $1.1 billion, is surely too successful and famous to care that a fake explicit photo is circulating on X. The episode was frequently compared to the 2014 leak of private photos from hacked celebrity accounts, a scandal known as “the fappening” that drew widespread attention.
Actress Scarlett Johansson was targeted by explicit AI-generated videos in 2018. She said at the time that she had “sadly been down this road many times” because of the proliferation of real and fake images of herself on the internet.
She wrote, “There is nothing that can prevent someone from pasting my or another person’s image onto a different body and making it appear as eerily realistic as they want. The internet is tantamount to an abyss that continues to be largely devoid of legal regulations.”
A potential silver lining is that this situation may prompt legislative change to better protect people from AI p**n. “There’s always some high-profile thing that sparks legislative change,” Sobel commented.