Social media platform X has blocked searches for Taylor Swift after explicit AI-generated images of the singer began circulating on the site.
In a statement to the BBC, X's head of business operations Joe Benarroch said it was a "temporary action" to prioritise safety.
When searching for Swift on the site, a message appears that says: "Something went wrong. Try reloading."
Fake graphic images of the singer appeared on the site earlier this week.
Some went viral and were viewed millions of times, prompting alarm from US officials and fans of the singer.
Posts and accounts sharing the fake images were flagged by her fans, who populated the platform with real images and videos of her, using the words "protect Taylor Swift".
The photos prompted X, formerly Twitter, to release a statement on Friday, saying that posting non-consensual nudity on the platform is "strictly prohibited".
"We have a zero-tolerance policy towards such content," the statement said. "Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them."
It is unclear when X began blocking searches for Swift on the site, or whether the site has blocked searches for other public figures or terms in the past.
In his email to the BBC, Mr Benarroch said the action is done "with an abundance of caution as we prioritise safety on this issue".
The issue caught the attention of the White House, which on Friday called the spread of the AI-generated photos "alarming".
"We know that lax enforcement disproportionately impacts women and they also impact girls, sadly, who are the overwhelming targets," said White House press secretary Karine Jean-Pierre during a briefing.
She added that there should be legislation to tackle the misuse of AI technology on social media, and that platforms should also take their own steps to ban such content on their sites.
"We believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people," Ms Jean-Pierre said.
US politicians have also called for new laws to criminalise the creation of deepfake images.
Deepfakes use artificial intelligence to make a video of someone by manipulating their face or body. A 2023 study found a 550% rise in the creation of doctored images since 2019, fuelled by the emergence of AI.
There are currently no federal laws against the sharing or creation of deepfake images, though there have been moves at state level to tackle the issue.
In the UK, the sharing of deepfake pornography became illegal under the Online Safety Act in 2023.