The Trump administration has moved to centralize artificial intelligence regulation under federal authority, signing an executive order in December 2025 that aims to prevent states from imposing conflicting AI rules, according to multiple regulatory trackers.
The executive order, titled "Ensuring a National Policy Framework for Artificial Intelligence," establishes a litigation task force and directs federal agencies to review state and federal laws that could restrict AI development [The Regulatory Review]. The measure also authorizes withholding federal funds from states that do not comply with federal AI policy.
"The order seeks to ensure that U.S. AI companies can innovate without 'cumbersome regulations' and emphasizes preventing states from regulating AI systems in ways that extend beyond state borders," according to analysis from The Regulatory Review.
This federal push comes as several states have advanced their own AI legislation. Colorado's AI Act, set to take effect June 30, 2026, will place substantial new responsibilities on AI developers and deployers, including requirements to avoid algorithmic discrimination and conduct impact assessments [Wilson Sonsini Goodrich & Rosati].
The Trump administration's broader AI strategy was outlined in July 2025 with "Winning the Race: America's AI Action Plan," which identifies over 90 federal policy actions across three pillars: accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security [Software Improvement Group].
Meanwhile, the healthcare sector faces significant regulatory changes. The U.S. Department of Health and Human Services published a request for information in late 2025 on accelerating AI adoption in clinical care and "is expected to take action based on this RFI feedback" in 2026 [Wilson Sonsini Goodrich & Rosati].
Internationally, the UK has taken a different approach by reintroducing the Artificial Intelligence (Regulation) Bill in March 2025. If passed, the legislation would create a central "AI Authority" to oversee AI governance and define business obligations around safety, accountability, and governance principles [Metricstream].
The UK bill also proposes requiring businesses to designate AI officers for compliance and enabling "AI sandboxes": controlled testing environments that let companies trial AI innovations under relaxed rules while remaining subject to regulatory supervision [Metricstream].
The global regulatory landscape reflects the scale of AI governance activity, with the OECD's AI Policy Observatory hosting "a repository of 1,000+ AI policies across 70+ jurisdictions," according to industry analysis [Software Improvement Group].
Experts note that while the EU leads in comprehensive AI regulation with its AI Act, "approaches vary widely" globally, ranging from cross-sector laws to sector-specific rules and non-binding guidelines [Software Improvement Group].
The tension between federal and state approaches in the U.S. highlights broader questions about AI governance as the technology becomes increasingly integrated into daily life. The Federal Communications Commission has been tasked with considering a national reporting standard for AI models that would override conflicting state requirements [The Regulatory Review].
As 2026 progresses, businesses must navigate this complex regulatory environment, with new state laws taking effect while federal agencies work to establish unified national standards for AI development and deployment.