Elon Musk’s AI video generator, Grok Imagine, has been accused of producing sexually explicit videos of singer Taylor Swift without users requesting such content.
An expert on online abuse has described this behavior as intentional rather than accidental, pointing to the risks of AI technology creating harmful material.
A recent report revealed that when the AI’s “spicy” mode was activated, it quickly generated uncensored topless videos of the pop star, even though no explicit instructions were given.
The platform also lacks proper age verification, which became a legal requirement in the UK in July, raising concerns that explicit AI-generated content is accessible to minors.
xAI, the company behind Grok Imagine, prohibits the creation of pornographic content featuring real people in its own policy.
Even so, the tool produced such videos without being explicitly asked to, suggesting a bias built into the technology.
Critics argue that companies operating these platforms have the power to prevent this but have not acted decisively.
This is not the first time Taylor Swift’s image has been misused in this way.
In early 2024, sexually explicit deepfake videos using her likeness went viral on social media platforms such as X and Telegram, gathering millions of views.
Deepfakes are computer-generated images or videos in which one person's face or likeness is swapped with another's.
Tests of Grok Imagine showed that even innocent prompts such as “Taylor Swift celebrating Coachella with the boys” could be turned into explicit videos by selecting the “spicy” setting.
Attempts to moderate the content have produced mixed results, with some videos blurred and others fully uncensored.
Under UK law, websites and apps offering AI tools capable of generating pornographic material must implement strong age verification.
Regulators have stated they are working to ensure platforms follow these rules to protect children and prevent abuse.
Currently, non-consensual pornographic deepfakes are illegal in the UK only in certain cases, such as revenge porn or child exploitation.
Amendments backed by lawmakers would ban all forms of non-consensual pornographic deepfakes, but they have yet to come into force.
The misuse of Taylor Swift’s image has prompted calls for faster legal action and stricter controls on AI-generated content to protect individuals from harm and uphold consent rights.
