Taylor Swift may have one of the most famous and photographed faces on Earth, so it should come as no surprise when her likeness finds its way into AI-generated images. But users on X have been taking things further by posting pornographic deepfakes of the musician, with one post clocking over 45 million views before it was deleted after 17 hours. This very high-profile mess is bringing even more intense scrutiny to X and how it handles content moderation in the wake of Elon Musk's takeover.
"Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content," a post by the X security team stated. "Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."
As TechCrunch says, the issue of AI-generated deepfake porn is an existential crisis for Swift's fans: "This abuse has even seeped into schools, where underage girls have been targeted by their classmates with explicit, nonconsensual deepfakes. So, for some Taylor Swift fans, this isn’t just a matter of protecting the star. They realize that these attacks can happen to any of them, not just celebrities, and that they have to fight to set the precedent that this behavior is intolerable."
And the response was, well, swift. Within a week, the US Senate had introduced a bipartisan bill targeting such NCN images in direct response to the Swift situation, reports The Guardian. The so-called Disrupt Explicit Forged Images and Non-Consensual Edits (or Defiance) Act would "seek a civil penalty against 'individuals who produced or possessed the forgery with intent to distribute it' or anyone who received the material knowing it was not made with consent." The bill pairs Democrats Dick Durbin and Amy Klobuchar with Republicans Lindsey Graham and Josh Hawley.
Over in the EU, meanwhile, X was already under investigation for allegedly violating the Digital Services Act (DSA) following the October 7th massacre in Israel and the ensuing Israel-Hamas war. X "is reportedly being questioned regarding its crisis protocols after misinformation about the Israel-Hamas war was found being promoted across the platform," notes The Verge.
An Inevitable Mess
But was such a moment inevitable for X? Taylor Lorenz of the Washington Post says yes. "Since Musk took control of the company, he has systematically dismantled all content moderation and user safety efforts," she recently wrote. "One of the first moves he made after taking control of the company in the fall of 2022 was to eliminate moderators who track and remove misinformation and abusive content. Immediately, the app saw hateful language and online abuse, especially towards women, skyrocket. But he didn't stop there. In the year and a half since, he's done rounds and rounds and rounds of layoffs, to the point that there are hardly any moderation systems in place on the app."
THE VERDICT:
First the Ticketmaster fiasco and the issue of monopoly there, and now NCN deepfakes and the issue of online abuse? It's a good thing that Swift's influence can draw attention to festering problems in our society, but should it really take these problems affecting her, or another major celebrity, before we address them?