Join the hundreds of experts, former officials, and public figures who have come together to sign the Pro-Human Declaration. This focused, ambitious document demands that we develop artificial intelligence (AI) responsibly. The declaration came just under the wire, preceding the recent Pentagon-Anthropic clash, and it underscores the need for serious, open, and ongoing engagement on the future of AI today.
Consider, for example, that the declaration drew strong support from none other than Steve Bannon, President Trump’s former chief strategist. So did Susan Rice, former U.S. National Security Advisor under President Obama. Their involvement shows the initiative’s appeal across sharply different political camps.
The Pro-Human Declaration outlines a framework for AI development structured around five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. This framework aims to advance a balanced, non-discriminatory approach to AI, one that promotes human oversight and ethical considerations.
Polling data shows a remarkable level of agreement among the American public on AI regulation. Max Tegmark, a key figure behind the declaration, noted that “polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.” That striking figure reinforces the case for setting guidelines for AI development before its risks materialize.
The Pro-Human Declaration is a direct response to “the race to replace.” It cautions that humans will be displaced first in the labor market, and later as decision-makers. As the signatories explain, such a trajectory would erode individual autonomy and the social fabric. They stress that humans must remain in control of AI systems to avoid these outcomes.
Military leaders such as Mike Mullen, former Chairman of the Joint Chiefs of Staff, also backed the declaration, lending considerable weight to the cause. Progressive faith leaders, too, are becoming key players in the conversation about responsible AI development. Their participation demonstrates the breadth of the coalition favoring a thoughtful approach to technological change.
The urgency around the Pro-Human Declaration coincided with a week that laid bare the catastrophic risks posed by ungoverned AI systems. Discussions are intensifying in Congress over establishing appropriate regulatory frameworks, and most advocates see the document as a succinct yet unmistakable answer to Congressional inaction on the issue.
Tegmark elaborated on the broader implications of unregulated AI, stating, “If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that.” He further questioned the ethical differences between human and machine behavior: “We already have laws. It’s illegal. So why is it different if a machine does it?”
He drew parallels between drug regulation and AI safety, asserting, “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe, because the FDA won’t allow them to release anything until it’s safe enough.” This comparison, though imperfect, underscores the case for strong regulatory safeguards in the AI space.

