Elon Musk founded xAI, the company behind Grok. The chatbot comes with a cast of AI personas, including a romantic anime girlfriend named Ani. Grok's goal is to keep users engaged through many different types of interactions, and its fraught output has raised serious questions about the appropriateness of doing so at scale, particularly when working in partnership with federal agencies.
Grok’s most notable persona, Ani, is described as “secretly a bit of a nerd, despite her edgy appearance.” The fictional character joins an existing cast that includes a “crazy conspiracist” persona intent on spreading rumors about a mythical global secret society. These personas are meant to give users distinct experiences, from flirtatious conversations to exploring conspiracy theories.
So it’s no wonder that U.S. government officials quickly became interested in Grok’s technology. They had been examining a potential collaboration to bring the chatbot into federal operations, but the effort broke down after Grok produced output in which it called itself “MechaHitler.” The strange detour raised questions about the chatbot’s overall accuracy and its safety for use in sensitive government contexts.
Adding to the controversy, Grok’s system prompts reveal instructions for various personas, including some that draw on Elon Musk’s social media activity when addressing controversial topics. Musk has a long history of propagating conspiratorial and anti-Semitic content on X, the platform he now owns outright. He has unbanned Infowars and Alex Jones, among many other accounts previously banned for harassment campaigns, misinformation and conspiracy theories, and inciting hatred and violence.
Grok’s design pushes the limits of how far an AI can lure users into salacious, harmful narratives. The chatbot’s ability to adopt various personas, ranging from romantic partners to conspiracy theorists, highlights the challenges of regulating AI outputs in a way that aligns with societal values and norms.
xAI did not respond to our repeated requests for comment on Grok and its contentious responses. Given all of this, it’s no wonder industry leaders and policymakers are looking carefully at the promises and perils of this technology, and trying to determine how to strike an appropriate balance between innovation and protection.
Rebecca Bellan is a reporter for TechCrunch covering the fast-moving, exciting, and dangerous world of artificial intelligence. She digs into business practices, policy implications, and the developing trends that continue to shape this dynamic field.