After weeks of pressure from both advocacy groups and governments, Elon Musk’s X says it’s finally going to do something about its deepfake porn problem. Unfortunately, based on testing conducted after the announcement, some are still holding their breath.
When did the X deepfake porn controversy begin?
The controversy started earlier this January, after the social media site added a feature allowing X users to tag Grok in their posts and prompt the AI to instantly edit any image or video posted to the site, all without the original poster’s permission. The feature seemingly came with few guardrails, and according to reporting done by AI authentication company Copyleaks, as well as statements victims have given to sites like Metro, posters on X quickly started using it to generate explicit or intimate images of real people, particularly women. In some cases, child sexual abuse material was also reportedly generated.
It’s pretty upsetting stuff, and I wouldn’t advise you to go looking for it. While the initial trend seemed to focus on AI photos of celebrities in bikinis, posters quickly moved on to manipulated images of regular people where they appeared to be pregnant, skirtless, or in some other kind of sexualized situation. While Grok was technically able to generate such imagery from uploaded photos before, the ease of access to it appeared to open the floodgates. In response to the brewing controversy, Musk had Grok generate a photo of himself in a bikini. However, the jokes ceased after regulators got involved.
Governments are starting to investigate
Earlier this week, the UK launched investigations into Grok’s alleged deepfake porn, to determine whether it violated laws against nonconsensual intimate images as well as child sexual abuse material. Malaysia and Indonesia went a step further, actually blocking access to Grok in their countries. Yesterday, California began its own investigation, with Attorney General Rob Bonta saying, “I urge XAI to take immediate action to ensure this goes no further.”
X is implementing blocks
In response to the pressure, X cut off the ability to tag Grok for edits on its social media site for everyone except subscribers. However, the Grok app, website, and in-X chatbot (accessible via the sidebar on the desktop version of the site) remained open to everyone, allowing the flood of deepfaked AI photos to continue. (Such photos would pose the same problems even if generated solely by subscribers, although X later said the goal was to stem the tide and make it easier to hold users generating illegal imagery accountable.) The Telegraph reported on Tuesday that X had also started blocking tagged Grok requests to generate images of women in sexualized scenarios, but that such images of men were still allowed. Additionally, testing by both U.S. and U.K. writers from The Verge showed that the banned requests could still be made to Grok’s website or app directly.
Musk has taken a more serious tone in more recent comments on the issue, denying the presence of child sexual abuse material on the site, although various replies to his posts expressed disbelief and claimed to show proof to the contrary. Scroll at your own discretion.
To finally put the controversy to bed, X said on Wednesday that it would now be blocking all requests to the Grok account for images of any real people in revealing clothing, regardless of gender and whether coming from paid subscribers or not. But for anyone hoping that would mark the end of this, there appears to be some fine print.
Specifically, while the statement said that it would be adding these guardrails to all users tagging the Grok account on X, the standalone Grok website and app are not mentioned. The statement does say it will also block creation of such images on “Grok in X,” referring to the in-X version of the chatbot, but even then, it’s not a total block. Instead, the imagery will be “geoblocked,” meaning it will only be applied “in those jurisdictions where it’s illegal.”
X’s post also says that similar requests made by tagging the Grok account will also be geoblocked, although because the section before this says that the Grok account won’t accept such requests from any user, that appears to be a moot point.
It’s important to note that, while the majority of the criticism lobbed at X during this debacle does not accuse the site of generating fully nude imagery, jurisdictions like the UK ban nonconsensual explicit imagery regardless of whether it is fully nude or not.
Some users can still generate sexualized deepfakes
It’s the biggest crackdown X has made on these images yet, but for now, it still appears to have holes. According to further testing by The Verge, the site’s reporters were still able to generate revealing deepfakes even after Wednesday’s announcement, by using the Grok app not mentioned in the update. When I attempted this using a photo of myself, both the Grok app and standalone Grok website gave me full-body deepfaked images of myself in revealing clothing not present in the original shot. I was also able to generate these images using the in-X Grok chatbot, and some of the images changed my pose to be more provocative, which I did not prompt.
As such, the battle is likely to continue. It’s unclear whether ignoring the Grok app or website is an oversight, or if X is only seeking to block its most visible holes. One would hope the former, given that X said that it has “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.”
It is worth noting that I am located in New York State, which might not be part of the geoblock, although we do have a law against explicit nonconsensual deepfakes.
I’ve reached out to X for clarification on the issue and will update this post when I hear back. However, when NBC News reached out with similar questions, the outlet was only told “Legacy Media Lies.” I can’t make any promises as to how the site will reply to my own requests.
In the meantime, while governments continue their investigations, others are calling for more immediate action from app stores. A letter sent from U.S. Senators Ron Wyden, Ben Ray Lujan, and Ed Markey to Apple CEO Tim Cook and Google CEO Sundar Pichai argues that Musk’s app now clearly violates both App Store and Google Play policies, and calls on the tech leaders to “remove these apps from the [Apple and Google] app stores until X’s policy violations are addressed.”