OpenAI has stirred controversy in Silicon Valley by recently subpoenaing a handful of prominent AI safety nonprofits, among them Encode, an advocate for responsible AI policy. Jason Kwon, OpenAI’s chief strategy officer, defended the subpoenas on social media by outlining the company’s motivations. The decision has nonetheless raised grave concerns even among committed AI safety advocates, who worry it could discourage nonprofits and other organizations from freely criticizing OpenAI’s practices.
In his latest post, Kwon raised questions about the funding behind several of the nonprofits fighting OpenAI’s reorganization. He stated, “This raised transparency questions about who was funding them and whether there was any coordination.” Even so, it’s hard not to view the subpoenas as retaliatory and their framing as conspiratorial, aimed particularly at organizations that have criticized OpenAI’s restructuring.
Joshua Achiam, OpenAI’s head of mission alignment, voiced his discomfort with the subpoenas and their impact on mission-driven organizations, remarking, “At what is possibly a risk to my whole career I will say: this doesn’t seem great.” His comment points to a growing unease inside OpenAI about what these actions could do to the company’s reputation and its standing with the AI safety community.
The situation intensified after David Sacks, a prominent figure in the AI industry, responded to a viral essay by Jack Clark, co-founder of Anthropic. Clark delivered the essay as the keynote at the Curve AI safety conference in Berkeley, speaking with conviction about his fears that AI could lead to widespread unemployment, devastating cyberattacks, and ultimately catastrophic harm to humanity. Sacks publicly rejected Clark’s arguments, accusing Anthropic of fearmongering in order to lobby for regulations that would benefit large companies like itself while disadvantaging smaller startups.
Sacks’s comments landed alongside rumors circulating among venture capital firms that California’s proposed AI safety bill, SB 1047, could expose startup founders to severe penalties. Those rumors have fueled the backlash as anxiety about regulation grows in the face of an increasingly vocal AI safety movement. Meanwhile, the passage of California’s new law, SB 53, which imposes safety reporting requirements on large AI companies, suggests lawmakers are taking these issues seriously.
Brendan Steinhauser, an AI safety advocate, expressed concern over OpenAI’s use of subpoenas. “On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” he said. Steinhauser also suggested that Sacks feels threatened because the AI safety movement is gaining traction, as it seeks to hold these companies accountable for their actions.
As criticism on Capitol Hill deepens, the divide between OpenAI’s government affairs team and its research organization appears to be widening. Advocates have called for greater accountability as AI technologies, and especially their harmful uses, become further ingrained in daily life. Accountability, they argue, is more vital than ever; Sriram Krishnan underscored the point, noting that there are “people in the real world using, selling, adopting AI in their homes and organizations.”