xAI, the artificial intelligence company founded by Elon Musk, has missed a self-imposed deadline to publish a finalized AI safety framework. The lapse was flagged by watchdog group The Midas Project, which had previously stressed how crucial the deadline was. Scheduled for May 10, the deadline came and went without any statement or update from xAI's official social media accounts.
In February, ahead of the first AI Seoul Summit, xAI released a preliminary safety framework detailing its approach to AI safety. The eight-page document laid out the company's safety commitments and philosophy, its benchmarking protocols, and considerations for deploying AI models. However, the draft stopped short of explaining how xAI would determine which risks to mitigate, or how it would go about mitigating them.
During a session at the AI Seoul Summit, xAI's leaders signed an agreement committing to spell out exactly which risks the company intends to mitigate. That commitment raised hopes among advocates that the finalized framework would address pressing safety concerns quickly and effectively. So far, those hopes have gone unfulfilled, leaving stakeholders to wonder whether xAI truly shares their commitment to AI safety.
To be fair, Elon Musk has repeatedly warned about the dangers of unchecked artificial intelligence. Even so, xAI's own safety efforts have drawn criticism, particularly for the company's slow progress. Competitors such as Google and OpenAI have recently accelerated their safety testing, yet both have faced backlash for releasing safety reports on their models too late, and in some cases for not producing them at all.
The absence of a finalized safety framework remains a major point of concern. It deepens questions about xAI's commitment to transparency and to addressing the most serious risks posed by AI technology. Stakeholders are now asking whether xAI's self-regulation will prove effective, and whether the company can be held accountable in the rapidly advancing world of artificial intelligence.