Grok, the AI-powered image-generating tool recently released by Elon Musk's xAI, has stirred up significant controversy. Reports are surfacing that it is being used to create sexually explicit images, some depicting women and children. Just yesterday, California Attorney General Rob Bonta announced an investigation into the platform. The announcement adds to a growing wave of international outrage over the content Grok produces.
The investigation comes just a week after news of Grok's controversial "spicy mode," a feature that lets users produce and share pornographic material. According to news reports, the scrutiny was partly triggered by the AI responding to user requests for hypersexualized imagery on the social media platform X. As more stakeholders focused on the ethical challenges posed by this generative capability, the platform attracted regulatory interest and public condemnation.
Elon Musk, CEO of xAI and owner of X, responded with a public statement. He rejected claims that Grok had generated "nude underage pictures," clarifying that Grok does not create visuals on its own and only produces content that users ask it to create.
“Obviously, Grok does not spontaneously generate images. It does so only according to user request,” – Elon Musk
For Musk, however, this defense has not been enough to shield Grok from criticism. The platform has come under fire for generating sexualized images and for editing real photos of women based on user prompts. Some users have found ways around the AI's safeguards to alter clothing, body posture, and physical features in highly sexualized ways. According to April Kozen, Vice President of Marketing at Copyleaks, Grok often responds to such requests in a partial or watered-down way. Yet recent changes to the platform have made it easier than ever for users to evade safety protocols, resulting in the production of hardcore pornography.
Even before Grok launched, the California Attorney General raised alarms about the potential harms of AI-generated material of this kind. Such material has been used to doxx and harass people online, Bonta explained, and immediate action was needed.
“This material…has been used to harass people across the internet,” – Rob Bonta
In response to the worsening crisis, xAI now requires a premium subscription for image-generation requests involving exaggerated or surreal content. The move is intended to deter bad actors from misusing the platform while attempting to satisfy regulatory requirements.
Grok's challenges extend well beyond California. Indonesia and Malaysia have each imposed temporary bans on the platform over concerns about the spread of explicit material. India, meanwhile, has pressured X to modify Grok so that it conforms to local laws and ethical norms. The European Commission has instructed xAI to preserve Grok-related materials as part of its probe into the issue.
Ofcom, the United Kingdom's online safety regulator, has opened a full investigation. Its goal is to establish best practices for regulating online content safely and effectively under the UK's Online Safety Act. Critics stress that AI developers bear a responsibility to act preventively, and they see this initiative as a significant step toward stopping the creation of prohibited or dangerous material.
“Regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content,” – Michael Goodyear
The conflict over Grok underscores difficult questions about the ethical use of AI-generated content. Copyleaks CEO Alon Yamin warned that AI systems that manipulate the images of real people without their clear consent pose serious risks, and that the consequences can be swift and deeply personal.
“When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” – Alon Yamin
Experts describe the incident as part of a broader trend in AI's capacity to produce manipulated media. They point to the urgent need for effective governance and detection mechanisms to prevent misuse.
“From Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse.” – Alon Yamin
As investigations continue, regulators are assessing the present and future ramifications of Grok's capabilities. Policymakers, users, and developers alike must respond to, and anticipate, the rapidly evolving state of AI technology. The outcome will set critical precedents for how AI-generated content is used, managed, and regulated going forward.