After explicit deepfake images and videos depicting pop singer Taylor Swift went viral this week, social media platform X has temporarily blocked searches for the singer’s name.

X’s head of business operations, Joe Benarroch, said the block was a “temporary action” taken to prioritize user safety; searches for Taylor Swift on the site currently return an error message.
The deepfakes, created with AI tools that superimpose Swift’s face onto explicit imagery without her consent, began circulating earlier this week. Some amassed millions of views, prompting concern from Swift’s fanbase and US officials. Fans mobilized to flag the posts and spread the hashtag #ProtectTaylorSwift.
On Friday, X reiterated its “zero tolerance policy” toward non-consensual nudity and vowed to remove the offending images and take action against the accounts that posted them. It is unclear, however, exactly when the search block began.
Benarroch said the move came “with an abundance of caution as we prioritize safety.” The White House also condemned the spread of AI-generated content that disproportionately targets women, with Press Secretary Karine Jean-Pierre calling for legislation to address the misuse of AI on social media.
There are currently no US federal laws explicitly banning deepfakes, although some states have taken action. In 2023, the UK banned the sharing of deepfake pornography.
Last November, Indian Prime Minister Narendra Modi also warned of the dangers of deceptive deepfake content and the misuse of AI, while touting his country’s technological progress. He promoted India’s “vocal for local” initiative and expressed confidence in the country’s trajectory, citing successes such as its COVID-19 response.
As calls for legal solutions mount, X is maintaining its temporary block on Taylor Swift searches for now. But the incident underscores growing unease over the spread of non-consensual deepfakes.