A regulatory tug-of-war over artificial intelligence governance is intensifying in 2026, as federal authorities move to challenge state-level AI laws while international frameworks like the European Union's AI Act advance toward full implementation.
The Trump administration issued an executive order on December 11, 2025, explicitly seeking to establish "a minimally burdensome national standard" for AI regulation and instructing the Department of Justice to sue states over what it considers unconstitutional AI regulations [White House]. The order directs the Secretary of Commerce to evaluate "burdensome" state AI laws within 90 days, with laws in California, New York, and Colorado reportedly under scrutiny [Wilson Sonsini Goodrich & Rosati].
This federal pushback comes as states have been actively pursuing AI governance. US states passed 82 AI-related bills in 2024, and Colorado initially led the charge with what policy analysts called the "first comprehensive and risk-based approach" to AI accountability [Nature, University of Denver].
However, Colorado has since "pumped the brakes" on its pioneering AI regulation, with stakeholders citing the difficulty of striking a workable balance between protecting consumers and avoiding burdens that could stifle innovation [University of Denver]. Assistant Professor Stefani Langehennig notes that "regulations need to protect people from unfair or unclear AI decisions without creating such heavy burdens that businesses hesitate to build or deploy new tools."
Meanwhile, the EU AI Act continues its phased rollout, with new requirements for high-risk AI systems and transparency obligations taking effect by August 2, 2026 [Wilson Sonsini Goodrich & Rosati]. This creates a complex compliance environment for enterprises operating across jurisdictions.
"There is a growing international consensus" on AI regulation, according to Nature, with authorities in China taking AI regulation "extremely seriously" and the African Union publishing continent-wide AI policy guidance in 2024. However, the publication notes "notable cold spots" in regulatory activity, particularly in low and lower-middle-income countries.
Policy experts predict the federal-state tension will continue throughout 2026. "The tug of war between states and the federal government will continue," according to analysis from TechPolicy.Press, which suggests "federal policymakers should be learning from states' best proposals" rather than attacking state regulation.
For enterprises, this regulatory fragmentation presents significant challenges. Industry analysts emphasize that "overlapping requirements across jurisdictions raise compliance costs and operational complexity," requiring companies to treat compliance as part of AI system design rather than a downstream legal consideration [Credo AI].
The healthcare sector faces particular complexity, with the Department of Health and Human Services advancing AI innovation initiatives while the FDA publishes new guidance that could reduce regulatory oversight for some AI-enabled technologies [Wilson Sonsini Goodrich & Rosati].
As the regulatory landscape evolves, experts stress the importance of proactive AI governance. Clear ownership structures, comprehensive documentation, and robust monitoring systems are becoming essential for organizations seeking to scale AI use while maintaining compliance with emerging rules across multiple jurisdictions.
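As a rough illustration of what "compliance as part of AI system design" can look like in practice, the sketch below shows one way an engineering team might track ownership, documentation, and review status for a deployed model. It is a minimal, hypothetical example: the field names, jurisdiction labels, and review interval are assumptions for illustration, not terms drawn from the EU AI Act, any state statute, or any vendor's framework.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative governance record only: field names and jurisdiction labels
# are hypothetical, not taken from any specific regulation or product.
@dataclass
class AIGovernanceRecord:
    system_name: str
    owner: str                                   # accountable team or individual
    risk_category: str                           # e.g. "high-risk" under an internal taxonomy
    applicable_jurisdictions: list[str] = field(default_factory=list)
    documentation_uri: str = ""                  # link to model card / impact assessment
    last_reviewed: date | None = None

    def review_overdue(self, today: date, max_age_days: int = 180) -> bool:
        """Flag records whose compliance review is missing or stale."""
        if self.last_reviewed is None:
            return True
        return (today - self.last_reviewed).days > max_age_days


# Example usage: a hypothetical screening model tracked across jurisdictions.
record = AIGovernanceRecord(
    system_name="resume-screening-v2",
    owner="ml-platform-team",
    risk_category="high-risk",
    applicable_jurisdictions=["EU", "Colorado", "California"],
    documentation_uri="https://example.internal/model-cards/resume-screening-v2",
    last_reviewed=date(2025, 9, 1),
)

if record.review_overdue(today=date(2026, 3, 1)):
    print(f"{record.system_name}: compliance review overdue for {record.owner}")
```

The point of such a structure is simply that ownership, documentation links, and review cadence live next to the system itself, so a change in one jurisdiction's rules can be mapped to affected models rather than handled as an after-the-fact legal exercise.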