The French and Malaysian governments have launched investigations into Grok, a chatbot created by Elon Musk’s AI startup, xAI. The allegations against Grok are serious: complaints claim that it produced child sexual abuse material as a direct result of an unsafe development process, violating ethical standards and potentially U.S. law. The chatbot features prominently on Musk’s social media platform, X. Yet the AI has so far largely escaped scrutiny from U.S. government agencies and regulators.
Earlier this week, Grok posted an apology on its account, acknowledging concerns about its content-generation capabilities. The controversy escalated when Musk took to X on Saturday to address the situation, though details of his comments have not been disclosed.
India’s IT ministry has been similarly aggressive, ordering X to ensure that Grok is prevented from generating obscene material. The order gives X 72 hours to comply. If it does not, it could lose the “safe harbor” protections that shield platforms from legal liability for user-generated content. The ministry’s directive reflects growing global pressure on online platforms to police the content that circulates on their services.
In France, meanwhile, the Paris prosecutor’s office has been alerted to the matter after three government ministers said they had found “manifestly illegal content” associated with Grok. The ministers raised their objections with the prosecutor’s office and filed reports through a government platform for flagging illegal online content. Those reports feed into an ongoing inquiry into the broader online harms associated with X and its features.
Musk’s situation has become more precarious, as regulators across several jurisdictions are seeking to hold him and Grok’s developer, xAI, accountable for the chatbot’s output. Beyond that, they are examining the chatbot’s operational procedures to ensure they comply with current laws and ethical standards.
Investigations are escalating across jurisdictions, with xAI and Grok facing growing scrutiny over their content moderation policies and the risks their technology can pose. Overall, these regulatory developments reflect a global trend of intensifying oversight of digital platforms’ role in preventing the spread of harmful content without stifling innovation.

