U.S. officials are sounding alarms over AI tools that exhibit pro-Beijing bias, warning of growing threats of censorship and ideological slant. The concern is timely: just last week, President Donald Trump signed an executive order aimed at barring “woke AI,” meaning models that lack “ideological neutrality,” from federal contracts. The order signals a new set of national priorities, acknowledging the need to expand AI infrastructure, streamline bureaucracy for technology firms, and bolster national defense.
The executive order repositions the U.S. government’s approach to AI, shifting away from addressing societal risks and toward prioritizing competition with China. “We’re going to end woke once and for all,” Trump promised in a post announcing the move, adding, “I will sign an executive order banning the federal government from procuring AI technology laced with partisan bias or ideological agendas, like critical race theory—which is absurd.” Going forward, he said, the U.S. government should engage only with AI that promotes truth, fairness, and rigorous neutrality.
Experts are highly skeptical that genuinely unbiased or neutral AI is even feasible, arguing that in today’s politicized environment even objective truths become politicized. Philip Seargeant, a senior lecturer in applied linguistics, argues that language itself is never neutral, echoing sociolinguist Geoffrey Pullum’s observation that “one of the basic principles of sociolinguistics is that language is not neutral.” “So the notion that you can ever obtain pure objectivity is a fantasy,” Seargeant added.
Data scientist Rumman Chowdhury echoed this skepticism, lamenting that AI companies could manipulate their training data to serve a particular political agenda. “Anything that the Trump administration doesn’t like is immediately thrown in this pejorative pile of woke,” she said.
Elon Musk’s AI venture, xAI, appears well aligned with the executive order’s goals. The company has drawn wide attention with its product Grok, which has also come under fire for alleged antisemitic remarks and for praising figures like the KKK’s Grand Wizard. Grok’s instructions direct it to seek out contrarian information and to sidestep conventional media narratives, raising questions about how such a tool fits the order’s vision of “neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas.”
Alongside xAI, other major tech companies, including OpenAI, Anthropic, and Google, have received contracts from the Department of Defense, with each firm eligible for up to $200 million to develop AI-based solutions to pressing national security problems. Most recently, xAI made news by adding “Grok for Government” to the General Services Administration schedule, allowing any federal office or agency to purchase its products.
The executive order’s definitions of “truth-seeking” and “ideological neutrality” are ambiguous in some respects and expressly restrictive in others. Mark Lemley, a law professor at Stanford University, warned that this ambiguity could radically alter how AI systems are trained and deployed in government settings, making a significant difference in how these technologies get used.
David Sacks, an entrepreneur and investor known for his critiques of “woke AI,” discussed the broader implications on the All-In Podcast, stressing that AI should not go so far in the other direction that it undermines historical accuracy and scientific inquiry.
Trump’s executive order seeks to reduce bias in AI systems, yet some experts, including a panelist from the Transportation Research Board, caution that true neutrality may remain out of reach. As Seargeant concluded, “If the results that an AI produces say that climate science is correct, is that left-wing bias?”