President Trump signed a comprehensive executive order on December 11, 2025, establishing a new federal AI policy framework that signals a significant shift toward deregulation and federal preemption of state-level AI laws [Credo.ai].
The executive order, titled "Ensuring a National Policy Framework for Artificial Intelligence," revokes the previous 2023 Executive Order 14110 and tasks federal advisors with developing a pro-innovation AI action plan within 180 days [Anecdotes.ai]. Key provisions include the creation of an AI Litigation Task Force to challenge state laws that conflict with federal objectives, assessment of state regulations by the Department of Commerce, and potential loss of federal funding for states with conflicting AI rules [Credo.ai].
Global Regulatory Divergence
While the U.S. moves toward deregulation, other nations are implementing varying approaches to AI governance. Japan enacted the AI Promotion Act in May 2025, featuring light-touch regulation that encourages corporate cooperation with government safety measures and permits the government to publicly name companies whose use of AI violates human rights [IAPP].
China has implemented AI Labeling Rules requiring service providers to add explicit and implicit labels to AI-generated content, reflecting the country's more control-oriented regulatory approach [IAPP].
The European Union faces internal tensions over its AI Act implementation. The European Commission released a Digital Omnibus proposal in November 2025, suggesting delays to high-risk AI system provisions due to implementation challenges, including a lack of harmonized standards and the delayed designation of competent authorities [IAPP].
Innovation vs. Regulation Debate
Several countries are expressing concerns about the impact of over-regulation on innovation. Australia's Productivity Commission warned of the "chilling effect" that burdensome regulation may have on investment, emphasizing the importance of pursuing regulatory goals at the lowest cost to innovation [IAPP].
Similarly, Canada's Competition Bureau published findings indicating that AI-specific regulation can hinder innovation, impose growth burdens, and create barriers for startups [IAPP].
State-Level Developments
Despite federal preemption efforts, U.S. states continue pursuing targeted AI legislation. Utah's Artificial Intelligence Policy Act establishes liability for companies that fail to disclose generative AI use when required, while New York and Florida have introduced legislation regulating AI-driven hiring algorithms and bias mitigation [Mind Foundry].
This creates what experts describe as a "compliance splinternet" where identical AI features face different regulatory treatment across jurisdictions [Atomicmail.io].
Security Concerns Rise
The regulatory landscape is further complicated by national security considerations. The U.S. Navy has banned DeepSeek AI usage among personnel due to security concerns, while discussions intensify over stricter export controls on AI technologies to limit capability transfers to China [Mind Foundry].
Looking Ahead
As 2026 approaches, the regulatory environment appears increasingly fragmented, with different jurisdictions pursuing rights-first, innovation-first, or control-first models. This divergence forces businesses to navigate varying compliance requirements while governments struggle to balance innovation promotion with risk mitigation [Atomicmail.io].
The clash between federal and state AI policies in the U.S., combined with varying international approaches, suggests that regulatory uncertainty will continue challenging AI developers and deployers globally.