A look back at quarterly policy happenings and what they tell us about what’s next in 2026
In 2023 and 2024, we talked a lot about AI risk. In 2025, governments actually started regulating it. Around the world, regulators moved decisively from voluntary principles and ethical guidelines to binding obligations, audits, disclosures, and penalties. Healthcare, generative AI, and agentic AI became focal points, while once-fragmented policy conversations hardened into law. For organizations building or deploying AI, governance was no longer a theoretical exercise but an operational requirement.
This shift wasn’t subtle. The EU AI Act entered its first phase of enforcement. U.S. states passed sector-specific AI laws at record speed. Federal agencies issued detailed guidance for AI in healthcare, procurement, and safety-critical systems. International standards bodies released concrete frameworks for impact assessments, incident reporting, and accountability. In other words, compliance became a fast-moving target that few internal teams could keep up with on their own.
Organizations suddenly needed evidence of compliance: clear rules on acceptable use, defined incident-reporting workflows, lifecycle controls, transparency artifacts, and alignment across jurisdictions. In short, 2025 made it clear that responsible AI is no longer just about values; it’s about proof. And proof doesn’t come from a static policy document sitting on a shelf.
In the last year alone:
- 30+ countries enacted or expanded AI-specific frameworks
- 15 U.S. states introduced healthcare-specific AI transparency laws
- Incident-reporting mandates increased 200% compared to 2024
Pacific AI’s First Year: Built for Responsible AI Use
Pacific AI launched in February 2025, right as this inflection point was becoming impossible to ignore. From day one, our mission has been to solve a problem we were seeing everywhere: organizations drowning in AI laws, frameworks, and standards, but lacking a unified way to operationalize them.
In our first year, we released quarterly updates to a free AI Governance Policy Suite that now consolidates 250+ laws, regulations, and industry standards across 30+ countries into a single, actionable framework. We expanded coverage from early U.S. and EU requirements to healthcare-specific governance, general-purpose AI obligations, incident reporting, whistleblower protections, copyright controls, and global compliance alignment.
The last year has only validated why Pacific AI exists. The velocity of regulation didn’t slow, and it won’t in 2026 either. But with the right governance foundation, organizations don’t have to choose between innovation and compliance: our quarterly policy updates translate these fragmented requirements into one coherent framework used by enterprises worldwide.
Here’s a look at our policy year in review:
Q1 2025: A Regulatory Reset Takes Shape
The year began with dramatic shifts in the U.S. and abroad. At the federal level, a new executive order replaced the prior administration’s AI framework, signaling a pivot toward competitiveness and innovation. Regulators like the FDA released draft guidance governing AI used in drug development and medical devices.
States such as California, Utah, Illinois, and Minnesota expanded rules on AI transparency, healthcare decision-making, consumer protections, and employment discrimination. In parallel, the EU AI Act’s early obligations took effect and the UK launched its AI Opportunities Action Plan.
Q1 established that AI compliance would no longer be optional or static. Organizations confronted diverging regional approaches, from risk-based regimes in Europe to sector-specific, state-driven rules in the U.S., making ad hoc compliance strategies unsustainable.
Click here to see the full Q1 release notes.
Q2 2025: From Laws to Operations
In Q2, Pacific AI expanded its policy suite to incorporate newly enacted U.S. legislation, White House AI memoranda, deepfake laws, and a broad set of healthcare-specific frameworks (including WHO, TRIPOD-AI, SPIRIT-AI, and bias-mitigation principles). The update also introduced two major operational documents: an AI Incident Reporting Policy and a standalone AI Acceptable Use Policy, aligned with leading AI providers’ requirements.
Regulators began expecting proof, not promises, of responsible AI. Incident reporting, acceptable use controls, and lifecycle governance moved from best practice to baseline expectations, particularly for healthcare and generative AI systems.
Click here to see the full Q2 release notes.
Q3 2025: Global Expansion and Sector-Specific Enforcement
By Q3, the Pacific AI Governance Policy Suite had expanded to cover more than 30 countries, including the world’s largest economies. In the U.S., new healthcare AI laws emphasized transparency, patient consent, and limits on insurers’ use of automated decision-making. At the federal level, America’s AI Action Plan and updated FDA rules clarified expectations for software-based medical devices. New policies on AI copyright, whistleblower protections, and general-purpose AI (GPAI) models were added to the suite.
AI governance became unmistakably global and enforceable. Organizations operating across borders (or simply offering online services) now faced compliance obligations well beyond their home jurisdiction, especially for GPAI and healthcare use cases.
Click here to see the full Q3 release notes.
Q4 2025: Standardization and Accountability
The final quarter introduced international standards and landmark state laws. ISO/IEC 42005 formalized AI impact assessments, while the National Academy of Medicine released a comprehensive Health Care AI Code of Conduct. California enacted sweeping new laws covering frontier AI models, companion chatbots, and a unified legal definition of AI. The Colorado AI Act emerged as the first comprehensive U.S. law targeting “high-risk” AI systems, alongside expanded deepfake regulations nationwide.
Q4 signaled the maturation of AI governance. Impact assessments, whistleblower protections, contractual controls, and explicit safeguards for vulnerable users became central pillars of compliance, foreshadowing what future federal and international regulation is likely to require.
Click here to see the full Q4 release notes.
Looking Ahead to 2026
The lessons of 2025 are clear: AI regulation will continue to expand in scope, deepen in enforcement, and converge around transparency, risk management, and accountability. Organizations that rely on fragmented or reactive compliance approaches won’t just struggle to keep pace; they’ll pay for it legally, financially, and reputationally.
As we enter 2026, the era of “voluntary ethics” has officially ended. Our CEO David Talby puts it this way: “The window for ‘experimentation without oversight’ has closed. This year, the winners won’t be the companies with the fastest models, but the ones with the most robust, defensible governance systems.”
Here are four critical shifts we expect to define the next 12 months for every organization deploying AI:
- The Rise of “Agentic” Liability: As AI evolves from passive chatbots to autonomous agents capable of executing tasks, 2026 will see the first major legal tests of liability. Organizations will be held responsible not just for what their AI says, but for what it does, making automated guardrails and real-time monitoring an operational necessity.
- Enforcement at Scale: With the EU AI Act’s high-risk provisions becoming fully applicable in August 2026, and U.S. State Attorneys General actively using new consumer protection powers, we expect a shift from policy-writing to high-stakes enforcement. Compliance is no longer a “check-the-box” exercise; it is a litigation-shielding requirement.
- The “De-Risking” of Healthcare AI: In the wake of new FDA and ONC transparency mandates, “black box” algorithms will be pushed out of the market. Healthcare providers will prioritize explainable AI and prefer vendors who can provide a “nutrition label” for every model.
- From CIO to Chief Governance Officer: AI governance is graduating from a technical IT task to a board-level mandate. In 2026, “governance debt” will be viewed as a financial liability as significant as technical debt, driving a surge in demand for automated policy-as-code solutions; a minimal sketch of what that can look like follows below.
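To make “policy-as-code” concrete, here is a minimal, hypothetical sketch of the idea: a governance rule written as an executable check that can run in a CI pipeline instead of sitting in a document. Every name below (Deployment, risk_tier, has_model_card, incident_contact, check_policy) is an illustrative assumption, not a reference to any specific regulation or to Pacific AI’s products.

```python
# Hypothetical "policy-as-code" sketch: governance rules written as an
# executable check rather than prose. All field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    risk_tier: str          # e.g. "minimal", "limited", or "high"
    has_model_card: bool    # is a transparency artifact published?
    incident_contact: str   # who receives incident reports?

def check_policy(d: Deployment) -> list[str]:
    """Return the list of policy violations for a proposed AI deployment."""
    violations = []
    if d.risk_tier == "high" and not d.has_model_card:
        violations.append(f"{d.name}: high-risk systems must publish a model card")
    if not d.incident_contact:
        violations.append(f"{d.name}: an incident-reporting contact is required")
    return violations

# Run as a CI gate: a non-empty result blocks the deployment.
for issue in check_policy(Deployment("triage-bot", "high", False, "")):
    print(issue)
```

The specific checks matter less than the shift they illustrate: once governance rules are code, “governance debt” becomes as visible and as testable as technical debt.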

