US President Donald Trump displays a signed executive order at an AI summit on 23 July 2025 in Washington, DC. Chip Somodevilla/Getty Images
President Donald Trump wants to ensure the US government only gives federal contracts to artificial intelligence developers whose systems are "free from ideological bias". But the new requirements could allow his administration to impose its own worldview on tech companies' AI models, and companies may face significant challenges and risks in trying to modify their models to comply.
"The suggestion that government contracts should be structured to ensure AI systems are 'objective' and 'free from top-down ideological bias' prompts the question: objective according to whom?" says Branum at the Center for Democracy & Technology, a public policy non-profit in Washington DC.
The Trump White House's AI Action Plan, released on 23 July, recommends updating federal guidelines "to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias". Trump signed a related executive order titled "Preventing Woke AI in the Federal Government" on the same day.
The AI action plan also recommends that the US National Institute of Standards and Technology revise its AI risk management framework to "eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change". The Trump administration has already defunded research studying misinformation and shut down DEI initiatives, along with dismissing researchers working on the US National Climate Assessment report and cutting clean energy spending in a bill backed by the Republican-dominated Congress.
"AI systems cannot be considered 'free from top-down bias' if the government itself is imposing its worldview on developers and users of these systems," says Branum. "These impossibly vague standards are ripe for abuse."
Now AI developers holding or seeking federal contracts face the prospect of having to comply with the Trump administration's push for AI models free from "ideological bias". Amazon, Google and Microsoft have held federal contracts supplying AI-powered and cloud computing services to various government agencies, while Meta has made its Llama models available for use by US government agencies working on defence and national security applications.
In July 2025, the US Department of Defense's Chief Digital and Artificial Intelligence Office awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI and Elon Musk's xAI. The inclusion of xAI was notable given Musk's recent role leading President Trump's DOGE task force, which has fired thousands of government employees, and given that xAI's chatbot Grok recently made headlines for expressing racist and antisemitic views while describing itself as "MechaHitler". None of the companies provided responses when contacted for comment, but a few referred to their executives' general statements praising Trump's AI action plan.
It could prove difficult in any case for tech companies to ensure their AI models always align with the Trump administration's preferred worldview, says Röttger at Bocconi University in Italy. That is because large language models, the models powering popular AI chatbots such as OpenAI's ChatGPT, have certain tendencies or biases instilled in them by the swathes of internet data they were originally trained on.
Some popular AI chatbots from both US and Chinese developers demonstrate surprisingly similar views that align more with US liberal voter stances on many political issues, such as gender pay equality and transgender women's participation in women's sports, when used for writing assistance tasks, according to one recent study. It is unclear why this trend exists, but the team speculated it could be a consequence of training AI models to follow more general principles, such as incentivising truthfulness, fairness and kindness, rather than developers specifically aligning models with liberal stances.
AI developers can still "steer the model to write very specific things about specific issues" by refining AI responses to certain user prompts, but that won't comprehensively change a model's default stance and implicit biases, says Röttger. This approach could also clash with general AI training goals, such as prioritising truthfulness, he says.
US tech companies could also potentially alienate many of their customers worldwide if they try to align their commercial AI models with the Trump administration's worldview. "I'm interested to see how this will pan out if the US now tries to impose a specific ideology on a model with a global userbase," says Röttger. "I think that could get very messy."
AI models could come closer to neutrality if their developers share more information publicly about each model's biases, or build a collection of "deliberately diverse models with differing ideological leanings", says one researcher at the University of Washington. But "as of today, creating a truly politically neutral AI model may be impossible given the inherently subjective nature of neutrality and the many human choices needed to build these systems", she says.