Michael Kleinman, Head of U.S. Policy at the Future of Life Institute, recently commented on a new executive order signed by President Donald Trump. The order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” aims to address the growing complexity of regulations surrounding artificial intelligence (AI) technologies.
The Future of Life Institute is an advocacy and grantmaking nonprofit working to reduce existential risks from powerful emerging technologies. In Kleinman’s assessment, the executive order leaves no doubt that the administration intends to work toward a coherent, whole-of-government regulatory landscape for AI. The initiative comes in response to the challenges startups face in navigating a “patchwork” of state laws, which can hinder innovation and growth.
Among other things, the executive order directs the Department of Justice to create a task force within 30 days. The task force will go to court to challenge state statutes it deems unlawful or discriminatory, arguing that AI is sufficiently bound up with interstate commerce that it requires federal rather than state regulation. The administration’s stated aim is to give startups the slack to succeed, letting them innovate with more freedom and without the burden of arbitrary and confusing state regulations.
David Sacks, who serves as Trump’s AI and crypto policy czar, has been a vocal advocate of this push toward federal preemption. His involvement further underscores the administration’s focus on cutting red tape in AI regulation while fostering the development of a fast-growing industry.
Gary Kibel, a partner at Davis + Gilbert, believes a single national standard would be the most promising path for AI regulation, one that could meaningfully reshape the industry. He stated, “Businesses would welcome a single national standard for AI regulation,” suggesting that such an approach could remove significant barriers for companies attempting to navigate the current regulatory landscape.
While these are encouraging signs, it is important to address concerns about the executive order’s long-term effects. Andrew Gamino-Cheong, CTO and co-founder of Trustible, cautioned that the approach could produce unintended consequences that cut against AI innovation and pro-AI objectives. He pointed to the difficulties smaller startups face compared to larger tech firms, noting that “Big Tech and the big AI startups have the funds to hire lawyers to help them figure out what to do, or they can simply hedge their bets.”
Arul Nigam, co-founder of Circuit Breaker Labs, expressed similar doubts about AI companies’ capacity for self-regulation. He said, “There’s uncertainty in terms of do AI companion and chatbot companies have to self-regulate? Are there open-source standards they should adhere to? Should they continue building?” These questions reflect the overarching fear that startups will be unable to afford compliance with shifting regulations.
Industry expert Hart Brown noted an imbalance between the speed of innovation and the governance needed to regulate it. He remarked, “Because startups are prioritizing innovation, they typically do not have…robust regulatory governance programs until they reach a scale that requires a program.” He further emphasized the challenges of regulatory compliance: “These programs can be expensive and time-consuming to meet a very dynamic regulatory environment.”
The executive order’s potential to erode public trust in AI technologies is equally troubling. Gamino-Cheong noted, “Even the perception that AI is unregulated will reduce trust in AI.” His remarks underscored the needle policymakers must thread: cultivating innovation while ensuring the American people continue to trust these new technologies.
Kibel also warned against leaning too heavily on executive orders to resolve tough regulatory questions. He stated, “An executive order is not necessarily the right vehicle to override laws that states have duly enacted,” reinforcing the notion that regulatory frameworks must be thoughtfully constructed.
Morgan Reed, an advocate for comprehensive AI policy, stressed the need for a “comprehensive, targeted, and risk-based national AI framework.” He added that “we can’t have a patchwork of state AI laws, and a lengthy court fight over the constitutionality of an Executive Order isn’t any better.”