Silicon Valley is increasingly polarized over how to balance the rapid advance of artificial intelligence (AI) with responsible development. At last week’s Bloomberg New Economy Forum, OpenAI’s new chief strategy officer, Jason Kwon, addressed the company’s recent decision to serve AI safety nonprofits such as Encode with subpoenas. The move has sparked significant debate about the motivations behind these actions and the broader implications for the AI safety movement.
At the Curve AI safety conference held in Berkeley earlier this month, those concerns were voiced most pointedly by Joshua Achiam, who criticized OpenAI’s decision to subpoena nonprofits working on AI safety. “At what is possibly a risk to my whole career I will say: this doesn’t seem great,” Achiam remarked, underscoring the internal conflict within OpenAI over its approach to critics.
This week, tech titan David Sacks appeared to take aim at that argument in his response to a viral essay by Anthropic cofounder Jack Clark, and things escalated quickly from there. Clark’s essay called attention to some very real dangers AI could bring, from disrupting job markets to enabling cyberattacks. Sacks dismissed Clark’s framing, accusing Anthropic of fearmongering to advocate for legislation that serves its own interests while hurting smaller startups in the process.
“Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.” – David Sacks
The controversy underscores a growing rift within Silicon Valley, where the imperative to build AI responsibly clashes with the competitive drive to ship consumer-focused products. The tension highlights diverging philosophies among the big tech actors in the AI ecosystem. Anthropic is unique among the major AI labs in supporting California’s Senate Bill 53 (SB 53), which would impose safety reporting requirements on the largest and most influential AI developers.
As the situation unfolds, Kwon indicated that OpenAI’s subpoenas stemmed from suspicions about the nonprofits’ funding and potential coordination against the company. “This raised transparency questions about who was funding them and whether there was any coordination,” he stated. OpenAI has suggested that its critics may be part of a coordinated effort led by Elon Musk.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI, denounced the subpoenas, arguing that their purpose is to frighten critics and to dissuade other public interest organizations from pushing for safety measures. “On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same,” he said.
A rift has also developed inside OpenAI itself. Reports describe a deepening divide between its government affairs team and its social science research organization. The internal discord raises new questions about the company’s long-term strategic focus at a time when it also faces external pressure from safety advocates and competitive market forces.
The conversation around AI safety is growing increasingly urgent. Venture capital firms have circulated claims that California’s SB 1047 would subject startup founders to draconian punishments. Such claims foster fear and apprehension across the entrepreneurial ecosystem and further muddy the already unsettled state of AI regulation.
Sriram Krishnan, the White House’s senior policy advisor on AI, has publicly criticized AI safety advocates, arguing that they misunderstand what innovation and economic development really require. His comments reflect a skepticism toward the safety movement’s purpose and impact that is increasingly common among deep-pocketed industry leaders.
If the AI safety movement seems to be gaining momentum, that’s because it is. The growing pushback from Silicon Valley against safety-focused groups may be a sign that these organizations are making real strides in raising awareness of the risks posed by AI technologies.