The global artificial intelligence regulatory landscape underwent significant shifts in 2025, as major jurisdictions adopted markedly divergent approaches that create compliance challenges for multinational businesses.
US Pivots to Federal Innovation Framework
President Trump signed a pivotal executive order on December 11, 2025, titled "Ensuring a National Policy Framework for Artificial Intelligence," which fundamentally reoriented US AI policy toward deregulation and federal preemption [Anecdotes.ai]. The order revoked the previous administration's 2023 Executive Order 14110 and aimed to eliminate federal policies perceived as barriers to AI innovation.
The new framework tasks senior White House officials with developing an AI action plan within 180 days, emphasizing "pro-innovation, pro-competitiveness" policies [SIG]. Crucially, the order centralizes AI regulation under federal authority, preventing states from imposing separate or conflicting AI rules [Metricstream].
However, the federal-state tension remains complex. "States continue to pass enforceable AI rules that start taking effect in 2026," creating ongoing jurisdictional challenges despite federal preemption efforts [SIG].
EU Implementation Faces Delays
In contrast to the US acceleration, the European Union's AI Act faces significant implementation hurdles. The European Commission released a Digital Omnibus proposal in November 2025 acknowledging "delays in designating competent authorities" and "a lack of harmonized standards for high-risk AI requirements" [IAPP].
The proposal suggests postponing enforcement for high-risk AI systems until compliance tools are available, while reducing documentation requirements for small and medium enterprises [IAPP]. Provisions originally scheduled to apply earlier are now pushed to August 2026 for most high-risk AI systems and August 2027 for safety-critical applications [Anecdotes.ai].
Global Regulatory Divergence Creates 'Compliance Splinternet'
The contrasting approaches are creating what experts term a "compliance splinternet," where "the same AI feature can be acceptable in one place and risky in another" [Atomicmail]. While the US pursues innovation-first policies, other jurisdictions maintain stricter frameworks.
Japan enacted its AI Promotion Act in May 2025, taking a "light touch" approach that encourages voluntary cooperation with government safety measures [IAPP]. China implemented AI Labeling Rules requiring service providers to mark AI-generated content explicitly [IAPP].
Meanwhile, reports from Australia's Productivity Commission and Canada's Competition Bureau warned against over-regulation, highlighting "the chilling effect that burdensome regulation may have on investment" and noting that AI-specific rules "can hinder innovation" [IAPP].
Looking Ahead to 2026
Regulatory experts predict 2026 will bring enforcement reality checks rather than new legislation. "Expect AI regulation news in 2026 to feel less like 'new laws' and more like hard enforcement of messy reality," with regulators focusing on "scalable harm" and targeting system deployers as well as developers [Atomicmail].
The divergent regulatory approaches reflect fundamental disagreements over how to balance innovation with risk management, producing an increasingly complex compliance environment for global AI businesses. The OECD's AI Policy Observatory now tracks over 1,000 AI policies across more than 70 jurisdictions, underscoring the scale of regulatory fragmentation [SIG].