The White House issued an executive order on December 11, 2025, establishing federal policy to preempt state AI regulations that officials argue obstruct national competitiveness and innovation. The order directs the Attorney General to create an AI Litigation Task Force to challenge state laws on constitutional grounds, including improper regulation of interstate commerce [Whitehouse.gov].
The executive order targets state-level regulations the administration deems harmful. "State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups," according to the administration's policy framework [Whitehouse.gov]. The order singles out Colorado's law banning "algorithmic discrimination," claiming it may force AI models to produce false results to avoid differential treatment of protected groups.
However, the federal preemption won't apply to all state AI laws. The order preserves state authority over child safety protections, AI compute and data center infrastructure, and state government procurement and use of AI systems [Whitehouse.gov].
The move comes after a year of intense state-level AI lawmaking. "State lawmakers were busy passing bipartisan laws aimed at election deepfakes, algorithmic discrimination, consumer scams, and the use of AI in sensitive domains like health care and education," while Congress failed to take comprehensive action beyond the TAKE IT DOWN Act, which addresses nonconsensual intimate images [Techpolicy.press].
Experts predict 2026 will be crucial for AI governance globally. "The stage is set for important political and legal battles that will play out in 2026 and will define who controls AI, who bears the costs of its harms, and whether democratic governments and regulators can keep pace," according to policy analysts [Techpolicy.press].
The federal action includes economic pressure tactics. The order conditions certain federal funding on states not enacting conflicting AI laws and directs the FTC to issue guidance on when state laws mandating alterations to AI outputs are preempted by federal prohibitions [Pearlcohen.com].
Industry money appears to be playing a significant role in shaping the debate. "Big Tech poured hundreds of millions of dollars into newly formed super PACs that will target lawmakers who advance AI laws," with Republicans receiving nearly 75 percent of recent tech-backed political donations [Techpolicy.press].
The enforcement focus is shifting from voluntary guidelines to binding regulations. "2026 will be the year of enforcement and 'red lines,' not just new declarations," with key questions about whether governments will prohibit certain applications like biometric mass surveillance and autonomous weapons [Techpolicy.press].
Globally, the EU AI Act continues to shape regulatory trends as countries develop divergent approaches to AI governance. "There is no standard approach toward bringing AI under state regulation, however, common patterns toward reaching the goal of AI regulation can be observed," according to international policy trackers [Iapp.org].
The success of federal preemption efforts may depend on congressional action. Currently, no leading bipartisan bill comprehensively addresses the risks targeted by state laws, though legislation like the Artificial Intelligence Research, Innovation, and Accountability Act may be reintroduced [Hklaw.com].
As real-world AI harms accumulate, the tension between innovation and regulation continues to intensify, making 2026 a pivotal year for determining the future of AI governance in the United States and globally.