Pacific AI Governance Policy Suite: Q3 2025 Release Notes

The Q3 2025 release of the Pacific AI Governance Policy Suite is now available. This is the company’s flagship toolkit for organizations navigating the fast-changing global AI regulatory landscape. It now covers 250+ laws, regulations, and industry standards and is available for free.

Pacific AI continuously monitors legal, regulatory, and judicial developments across the US, EU, and other jurisdictions, as well as guidance from leading organizations such as the OECD, CSET, NIST, and ISO. Once a quarter, we update the AI Governance Policy Suite to help organizations stay in compliance. This saves organizations years of ongoing effort by doing three things they are required to do:

  • Tracking and collecting all new AI legislation, regulation, and frameworks.
  • Unifying and de-duplicating requirements across jurisdictions and laws.
  • Translating the requirements into one actionable set of policies.

This article summarizes the key updates in the Q3 2025 release.

1. Expanded Coverage to 30+ Countries

While previous releases of the Pacific AI Governance Policy Suite covered the United States, this release expands it to cover more than 30 major world economies. This was a major legal undertaking, and we’re happy to announce that the current policy suite now conforms to the laws & regulations of the following jurisdictions:

United States, European Union, Canada, Japan, Taiwan, South Korea, India, Australia, Brazil, Mexico, Indonesia, Turkey, Saudi Arabia, Argentina, Israel, United Arab Emirates, Thailand, Switzerland, and Norway.

This list of countries was chosen to cover the world’s 30 largest economies, with the exception of Russia and China. If you operate internationally, or may serve international users by making your services available online, this policy suite can materially accelerate your path to compliance.

2. New US Healthcare AI Legislation

The updated Pacific AI Governance Policy Suite covers AI-related healthcare legislation enacted in the United States as of September 2025, addressing transparency, patient safety, and accountability in how AI is designed and deployed across clinical, administrative, and insurance contexts.

The United States has been rapidly enacting new state laws governing the use of AI in health care. Three major 2025 legislative trends stand out:

First US Trend: AI Transparency in Health Care

Texas has just enacted HB 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which regulates AI systems in Texas and establishes civil penalties for violations. This law requires individuals to be informed of, and give consent to, the capture or storage of their biometric data for commercial purposes, particularly when sourced from publicly available media. It introduces a new subtitle dedicated to AI protection, emphasizing consumer disclosures, prohibitions against manipulating human behavior, and restrictions on social scoring by government entities.

Further, it mandates that providers leveraging AI systems for health care services or treatments disclose this to the patient (or their representative) no later than the date the service or treatment is first provided. Separately, Texas SB 1188 introduces new regulations for electronic health records (EHRs), mandating that covered entities store electronic health records containing patient information within the United States or its territories.

Nevada AB 406 prohibits AI providers from indicating that an AI system is capable of providing professional mental or behavioral health care, while Oregon HB 2748 prohibits AI systems from being represented as nurses or under similar professional titles.

These measures reflect growing concern about patient safety, informed consent, and ethical deployment of AI tools in health care.

Second US Trend: Disclosure of AI Use

AI-powered chatbots and virtual assistants are increasingly being used in both clinical and administrative workflows. States are now requiring explicit disclosure when patients interact with AI systems:

  • Utah HB 452, New York SB 3008, Nevada AB 406: Require disclosure when AI chatbots are in use.
  • California AB 3030: Requires all AI-generated patient communications to include a disclaimer and instructions for contacting a human health provider.

This shift prioritizes transparency so patients can make informed choices when communicating with care providers.

Third US Trend: Payer Use of AI in Insurance Decisions

State lawmakers are also responding to concerns about AI in health insurance, especially in medical necessity determinations and prior authorization processes.

Several states have passed laws to ensure AI cannot be the sole decision-maker in patient care coverage:

  • Arizona HB 2175
  • Maryland HB 820
  • Nebraska LB 77
  • Texas SB 815

These laws require that AI decisions be reviewed by a physician—or explicitly prohibit insurers from replacing physician or peer review with AI alone.

In addition to the above three trends, the AI Governance Policy Suite has also been updated to conform with the Virginia Artificial Intelligence Developer Act, the California Artificial Intelligence Law, and other newly enacted state laws.

3. New US Federal Regulation

The FDA Final Rule codifying 21 CFR Part 892, covering medical devices and diagnostic software, consolidates the FDA’s regulatory expectations for software-based medical devices. As a result, businesses can understand which design, labeling, performance, and quality system standards apply to their products.

America’s AI Action Plan was signed in July 2025. This strategic document outlines the US roadmap for global AI leadership, built upon three core pillars: Accelerating AI Innovation, Building AI Infrastructure, and Leading in International Diplomacy and Security.

The federal government is now directed to only procure AI systems “free of ideological bias,” explicitly excluding models emphasizing DEI or social justice agendas. The General Services Administration launched USAi, a program to quickly bring mission-ready AI tools into federal agencies, accelerating adoption while bolstering security.

4. New AI Copyright Policy

In addition to the expanded “Covered Laws, Regulations, Frameworks & Standards,” the September 2025 release introduces a new policy. In response to the requirements of the EU AI Act, we have introduced a standalone AI Copyright Policy into the Pacific AI Governance Suite. This policy addresses the intersection of AI and intellectual property rights by setting out clear obligations for organizations, including:

  • Responsible Data Collection: Rules for deploying web crawling systems in line with copyright and database protections.
  • Risk Mitigation: Controls to reduce the risk of AI systems producing infringing or unauthorized outputs.
  • Complaint Mechanism: A structured process for handling copyright-related complaints.
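As an illustration of the responsible data collection obligation, a deployment could check a site’s robots.txt rules before crawling. This is a minimal sketch using Python’s standard library, not a prescribed implementation from the policy; the `PacificCrawler` user-agent name is a hypothetical example.

```python
from urllib import robotparser

# Parse a site's robots.txt rules before fetching any pages.
# (Rules are parsed inline here for illustration; a live crawler
# would call rp.set_url("https://example.com/robots.txt") then rp.read().)
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# "PacificCrawler" is a hypothetical user-agent name.
ok = rp.can_fetch("PacificCrawler", "https://example.com/articles/1")      # allowed
blocked = rp.can_fetch("PacificCrawler", "https://example.com/private/x")  # disallowed
```

A crawler that skips any URL where `can_fetch` returns `False` respects the site owner’s stated access rules, which is one concrete control organizations can document under this policy.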

5. Controls for Whistleblower Protection

A new chapter titled Whistleblower Policy has been added, requiring organizations to enable secure and anonymous reporting of AI risks.

This is required by the EU AI Code of Practice, by some industry standards, and by a newly introduced bipartisan bill in the US Congress (“US AI Whistleblower Protection Act”).

6. Controls for General-Purpose AI Models (GPAI)

Organizations that provide general-purpose AI models are required to provide a heightened level of transparency, safety, copyright compliance, and monitoring beyond what’s required of ‘narrow’ uses of AI. This is reflected in several US state laws and the EU AI Act and Code of Practice, and is being adopted by other countries.

This release of the policy suite has been updated to conform to these requirements across all the jurisdictions we cover. Controls that are specific to GPAI models are separated into their own clauses in each policy, making it easier for organizations to know when they apply.

7. Clarifications Across the Policy Suite

In this Q3 2025 release, we have provided clarifications and refinements across the AI Governance Policy Suite to ensure consistency and ease of implementation. Key updates include:

  • Expanded Roles & Responsibilities: New obligations have been introduced for the AI Risk Manager and AI Risk Compliance Officer, strengthening accountability and oversight within organizations.
  • AI System Inventory Management: Clearer guidance has been added on how organizations should maintain and update an inventory of AI systems, ensuring traceability and compliance with regulatory requirements.
  • Streamlined Recommendations: Various sections of the Suite have been refined to improve alignment with evolving legal and regulatory standards, making the policies more actionable in day-to-day operations.

8. Next Steps & Adoption Guidance

To fully leverage the enhanced Q3 2025 Policy Suite, organizations should:

  • Review New Frameworks & Laws:
    Assign subject-matter leads (e.g., clinical research, legal compliance, procurement teams) to evaluate how the new US federal, state, and local laws and regulations apply to your organization.
  • Review laws across major jurisdictions:
    Create cross-functional oversight for AI laws in target markets.
  • Incorporate AI Copyright Policy:
    Reflect obligations related to deploying web crawling systems, mitigating the risk of infringing outputs, and establishing a complaint mechanism.
  • Establish Whistleblower procedure:
    Enable secure and anonymous reporting of AI risks, and decide how your organization can reward employees for internal reporting.
  • Stay Compliant:
    Incorporate all recently suggested improvements to your AI Governance Policy Suite.
  • Communicate & Train:
    Update internal training materials to include the latest additions and host workshops for AI governance teams.
  • Self-Attest & Certify:
    Once the updates are adopted, organizations may contact Pacific AI at [email protected]. We will guide you on how to obtain a written confirmation of compliance and receive an updated “AI Governance Badge” reflecting Q3 2025 coverage.