President Donald Trump signed an executive order on December 12, 2025, titled "Ensuring a National Policy Framework for Artificial Intelligence," establishing federal authority over AI regulation and directing agencies to challenge state laws deemed obstructive to innovation [Whitehouse.gov].
The order creates what the administration calls a "minimally burdensome" framework for AI regulation, targeting the growing array of state-level AI laws enacted across the country. According to the executive order, state-by-state regulation "creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups" [Whitehouse.gov].
While the order doesn't automatically invalidate existing state AI laws, it directs federal regulators including the Federal Trade Commission and Federal Communications Commission to develop standards that would preempt conflicting state regulations. The FTC has 90 days to issue a policy statement on how federal trade laws apply to AI models and preempt certain state requirements [Bassberry.com].
The move comes as comprehensive federal AI legislation faces delays. Recent reporting suggests lawmakers may postpone dedicated AI regulation while preparing a broader bill addressing safety, copyright, transparency, and governance, potentially pushing passage into 2026 or later [Metricstream.com].
Senator Marsha Blackburn has proposed the TRUMP AMERICA AI Act, which would codify federal preemption of state AI laws while preserving protections for areas like child safety and state government AI procurement. However, whether Congress can pass such legislation remains uncertain [Kiteworks.com].
The regulatory shift reflects broader concerns about AI competitiveness amid global technological rivalries. The Trump administration has launched Project Stargate, a $500 billion AI infrastructure initiative backed by private investment from OpenAI, Oracle, Japan's SoftBank, and UAE's MGX, signaling a push toward market-driven AI development rather than government mandates [Mind Foundry].
International developments are also shaping the regulatory landscape. The US Navy has banned DeepSeek AI usage among personnel due to security concerns, while discussions intensify in US and EU circles about stricter export controls on AI technologies to limit capabilities transfer to China [Mind Foundry].
State regulations have varied widely, covering areas from general AI systems to transparency requirements and deepfake restrictions. Some state laws have drawn criticism for allegedly requiring ideological bias in AI models; the administration cites Colorado's "algorithmic discrimination" law, arguing it could force AI models to produce false results in order to avoid differential treatment of protected groups [Whitehouse.gov].
The regulatory focus is expanding to AI companion chatbots, particularly those interacting with minors. The Federal Trade Commission launched an inquiry into AI chatbots in September 2025, while 42 state attorneys general expressed concerns about AI outputs potentially harming children in a November 2025 letter [Kiteworks.com].
As the regulatory landscape evolves, businesses operating AI systems in critical sectors including infrastructure, education, employment, and law enforcement face increasing scrutiny. The future of AI regulation remains uncertain as political and economic turbulence surrounding AI competition continues to influence policy decisions [Mind Foundry].