Pacific AI Governance Policy Suite: Q2 2025 Release Notes

June 2025

In our March 2025 release (“2025-A”), we consolidated a broad set of laws, regulations, frameworks, and standards into a unified policy suite to guide organizations in developing and deploying AI systems responsibly. With our Q2 2025 update (“2025-B”), we have expanded coverage to include newly enacted legislation, additional healthcare-specific guidance, and several operational policies that reflect evolving best practices in AI governance.

Below, we detail each new source added in the June 2025 release that was not included in March, organized by category.

1. Newly Covered Healthcare Guideline Frameworks

TRIPOD-AI – Updated Guidance for Reporting Clinical Prediction Models

Added in June 2025: TRIPOD-AI updates the original TRIPOD statement, extending its transparent reporting requirements to clinical prediction models developed with regression or machine learning methods. It ensures authors disclose data sources, model development procedures, and validation results for both technical and clinical audiences.

SPIRIT-AI – Guidelines for AI-Related Clinical Trial Protocols

Added in June 2025: SPIRIT-AI defines minimum protocol items for randomized controlled trials involving AI interventions. It specifies requirements for model description, input data, training/validation details, risk management, and data monitoring to align with existing SPIRIT guidelines.

WHO – Generating Evidence for AI-Based Medical Devices

Added in June 2025: This WHO framework provides a rigorous pathway for generating evidence on AI/ML-enabled medical devices, guiding stakeholders through training data curation, validation study design, and post-market evaluation to satisfy regulatory and clinical requirements.

HAIRA – Advancing Healthcare AI Governance: A Comprehensive Maturity Model

Added in June 2025: HAIRA defines a staged maturity model for AI governance in healthcare organizations, assessing domains such as policy alignment, transparency, stakeholder engagement, and risk management. By offering a self-assessment tool, HAIRA helps institutions benchmark and improve their AI governance practices over time.

TPLC – Total Product Lifecycle Framework for Healthcare AI/ML

Added in June 2025: TPLC adapts the FDA’s Total Product Lifecycle concept to AI/ML-enabled healthcare solutions. It emphasizes continuous monitoring, real-world performance tracking, and iterative updates across pre-market, deployment, and post-market phases, ensuring AI systems remain safe, effective, and equitable.

OPTICA – Organizational Perspective Checklist for AI Solutions Adoption

Added in June 2025: OPTICA is a practical checklist designed for health systems evaluating AI solutions. It covers organizational readiness (IT infrastructure, workflow integration), clinician training, data governance, and outcome measurement, helping healthcare organizations systematically assess non-technical factors critical to successful AI deployment.

SALIENT – End-to-End Clinical AI Implementation Framework

Added in June 2025: SALIENT (Systematic Approaches to Learning, Implementation, Evaluation, and Translation) offers guidance for every stage of clinical AI implementation, from early feasibility studies to large-scale rollouts. It emphasizes iterative testing, stakeholder feedback loops, and ongoing governance to ensure AI tools deliver clinical value and remain aligned with patient safety.

AHRQ & NIMHD Guiding Principles to Address Algorithm Bias

Added in June 2025: Developed collaboratively by the Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD), these principles offer concrete steps to identify, measure, and mitigate bias in AI-driven risk prediction and decision support tools, with particular attention to historically marginalized populations.

‘Model Facts’ Label for HTI-1 Compliance by Duke Institute for Health Innovation

Added in June 2025: The “Model Facts” label is a standardized one-page document that summarizes key information about AI models—including intended use, population characteristics, performance metrics, and known limitations—to satisfy HHS HTI-1 algorithm transparency requirements and facilitate clinician and patient trust.
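
For teams implementing such labels, a minimal sketch of how the one-page summary might be captured as structured data follows; all field names and the example model are hypothetical, not the Duke template or the HTI-1 schema.

```python
from dataclasses import dataclass

@dataclass
class ModelFactsLabel:
    """Hypothetical structured form of a 'Model Facts' summary.
    Field names are illustrative, not the official Duke/HTI-1 template."""
    model_name: str
    intended_use: str
    target_population: str
    performance_metrics: dict    # e.g., {"AUROC": 0.87, "sensitivity": 0.91}
    known_limitations: list      # plain-language caveats for clinicians
    training_data_summary: str
    last_updated: str

label = ModelFactsLabel(
    model_name="SepsisRisk-v2",  # hypothetical model
    intended_use="Early warning of sepsis risk for adult inpatients",
    target_population="Adults admitted to general medicine wards",
    performance_metrics={"AUROC": 0.87, "sensitivity": 0.91},
    known_limitations=["Not validated for pediatric or obstetric patients"],
    training_data_summary="Retrospective EHR data, 2018-2023, one health system",
    last_updated="2025-06-01",
)
```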

2. New US National Legislation

The TAKE IT DOWN Act (S.146)

Added in June 2025: This federal law criminalizes the publication of nonconsensual intimate images, including AI-generated "deepfake" content, and requires covered platforms to remove such material within 48 hours of a valid removal request. Because the statute explicitly reaches computer-generated forgeries, its removal obligations intersect directly with AI tools that generate, moderate, or distribute visual content.

3. New US Federal Regulation

Memorandum on Accelerating Federal Use of AI

Added in June 2025: This memorandum, issued by the White House Office of Management and Budget (OMB), directs all Executive Branch agencies to (a) inventory their existing AI initiatives, (b) establish AI innovation hubs, and (c) adopt common federal AI governance standards that emphasize transparency, equity, and security. It also outlines funding mechanisms for AI R&D within government.

Memorandum on Driving Efficient Acquisition of AI in Government

Added in June 2025: This companion OMB memorandum provides agencies with procurement guidance to streamline the purchase of AI/ML products and services. It encourages shared services, modular licensing, and pre-negotiated contract vehicles to avoid vendor lock-in and to ensure continuous security and compliance monitoring.

4. Newly Covered Acceptable Use Policies by Major Providers

OpenAI Usage Policy

Defines prohibited content (e.g., hate speech, illicit behavior, disallowed political campaigning), outlines user obligations around data retention and privacy, and details mechanisms for reporting misuse.

Anthropic Usage Policy

Specifies categories of disallowed content (e.g., personal data extraction, weaponization, extreme political persuasion), along with requirements for developers around usage monitoring and red teaming.

Microsoft Enterprise AI Services Code of Conduct

Outlines acceptable uses of Azure OpenAI Service, including prohibitions on illegal, infringing, or malicious applications; mandates adherence to Microsoft’s Responsible AI Standard for fairness, reliability, safety, and privacy.

AWS Responsible AI Policy

Details AWS’s terms for AI/ML services (e.g., SageMaker), including data classification requirements, customer obligations to secure training data, and a list of disallowed practices (deepfakes, malware generation, surveillance use cases).

Google Generative AI Prohibited Use Policy

Enumerates forbidden use cases for Google’s Vertex AI and Generative AI Studio (e.g., impersonation, disallowed sexual content, targeted political microtargeting), and mandates that developers apply Google’s AI Principles around fairness, privacy, and accountability.

Meta Seamless Acceptable Use Policy

Lays out Meta’s rules for using Llama and other Meta AI models, including prohibitions on violence, child sexual exploitation content, harassment, hate speech, and disallowed data collection practices.

Cohere Labs Acceptable Use Policy

Defines Cohere’s restrictions on content generation (e.g., plagiarism, hate, disinformation, defamation), requirements for data privacy, and guidelines for content moderation in user applications.

5. Updates to US State & Local Legislation – Deepfake Laws

Nebraska LB 383 – Prohibition of Generated Child Sexual Abuse Material

Added in June 2025: Nebraska’s new statute explicitly bans the creation, distribution, or possession of AI-generated child sexual abuse material (CSAM), aligning with federal legislation and other states’ deepfake-related statutes. It imposes felony penalties and requires ISPs to remove flagged content within 48 hours.

6. Changes in US State & Local Legislation – Privacy Laws

The June 2025 update harmonizes several privacy laws, but adds no net new statutes beyond those included in March 2025. The key difference is presentational: the privacy category has been renumbered and restructured, while the same 16 state-level consumer privacy and AI-specific statutes (e.g., the Utah AI Policy Act, Colorado SB 22-113) remain in scope.

7. Newly Covered International Standards

OECD Framework for the Classification of AI Systems

Added in June 2025: The OECD published a standardized taxonomy for AI systems, classifying them along five dimensions (People & Planet; Economic Context; Data & Input; AI Model; Task & Output) to support cross-border regulatory alignment and data sharing for incident reporting.
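
As a rough illustration of the taxonomy, the sketch below records one system's classification along those five dimensions; the record shape and example values are our own simplification, not an official OECD schema.

```python
from dataclasses import dataclass

@dataclass
class OECDClassification:
    """One system's classification along the OECD framework's five
    dimensions; the string-valued fields are a simplification."""
    people_and_planet: str   # affected users and impacted stakeholders
    economic_context: str    # sector and business function
    data_and_input: str      # provenance, collection method, structure
    ai_model: str            # model type and how it is built
    task_and_output: str     # task performed and degree of autonomy

credit_scoring = OECDClassification(
    people_and_planet="Loan applicants; consumers of a lending service",
    economic_context="Financial services; credit underwriting",
    data_and_input="Structured applicant data collected with consent",
    ai_model="Supervised machine learning (gradient-boosted trees)",
    task_and_output="Forecasting task producing a recommendation score",
)
```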

OECD Common Reporting Framework for AI Incidents

Added in June 2025: This framework establishes a minimum dataset and ontology for reporting AI-related incidents (e.g., bias events, security breaches, safety failures) to national authorities, helping policymakers track trends and coordinate response.
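
To show what a minimum dataset might look like in practice, here is a hypothetical incident record loosely modeled on the framework's goals; the field names are our own illustration, not the OECD ontology itself.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIIncidentReport:
    """Hypothetical minimum dataset for an AI incident report."""
    incident_id: str
    occurred_at: datetime          # when the incident happened
    reported_at: datetime          # when it was reported to the authority
    system_description: str        # which AI system was involved
    harm_type: str                 # e.g., "bias event", "security breach"
    severity: str                  # e.g., "negligible", "moderate", "severe"
    affected_parties: str          # who was harmed or put at risk
    organizational_context: str    # deployer, operator, lifecycle stage
    remediation_summary: str       # corrective action taken, if any
```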

CSET’s AI Incidents Key Components for a Mandatory Reporting Regime

Added in June 2025: The Center for Security and Emerging Technology (CSET) proposed a set of “key components” (e.g., incident taxonomy, timelines, organizational context) that could underpin a legally mandated AI incident reporting regime—informing U.S. congressional discussions.

UNESCO’s Ethical Impact Assessment

Added in June 2025: UNESCO released an Ethical Impact Assessment (EIA) toolkit that guides organizations through a structured evaluation of social, cultural, and environmental impacts of AI deployments. It complements existing ethical AI principles by providing actionable steps for risk identification and stakeholder engagement.

8. Summary of Operational Policy Additions

In addition to the expanded "Covered Laws, Regulations, Frameworks & Standards," the June 2025 release introduces two new policies. Each is a standalone document in the suite (with a dedicated Table of Contents entry), addressing emergent needs in incident management and acceptable use:

New: AI Incident Reporting Policy

  • Purpose: Defines internal and external AI incident reporting requirements.
  • Scope: Covers all “incidents” and “near misses” (e.g., bias events, safety failures, privacy breaches, misuse).
  • Internal Reporting: Mandates that every team member report AI incidents to a centralized Incident Management System within 24 hours.
  • External Reporting: Identifies when incidents must be escalated to regulators or affected stakeholders (e.g., HHS, FTC, state attorneys general).
  • Incident Classification: Introduces a tiered classification scheme (e.g., “Level 1: negligible,” “Level 2: moderate,” “Level 3: severe or potential serious harm”); a minimal encoding sketch follows this list.
  • Key Additions: Reporting near misses proactively, aligning with OECD’s Common Reporting Framework, and incorporating CSET’s incident key components.
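
For teams wiring the tiered scheme into tooling, here is a minimal encoding; the triage inputs and cutoffs are assumptions, since the policy defines the levels only qualitatively.

```python
from enum import Enum

class IncidentLevel(Enum):
    """Tiers from the AI Incident Reporting Policy's classification scheme."""
    NEGLIGIBLE = 1   # Level 1: negligible impact
    MODERATE = 2     # Level 2: moderate impact
    SEVERE = 3       # Level 3: severe or potential serious harm

def classify_incident(caused_harm: bool, potential_serious_harm: bool) -> IncidentLevel:
    """Illustrative triage helper; the boolean inputs are assumptions."""
    if potential_serious_harm:
        return IncidentLevel.SEVERE
    if caused_harm:
        return IncidentLevel.MODERATE
    return IncidentLevel.NEGLIGIBLE

# A near miss that could have caused serious harm is still Level 3,
# consistent with the policy's proactive near-miss reporting.
assert classify_incident(False, potential_serious_harm=True) is IncidentLevel.SEVERE
```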

New: AI Acceptable Use Policy

  • Purpose: Articulates conditions under which AI systems may be used or deployed.
  • Scope: Applies to all employees, contractors, business partners, and customers who access or license Pacific AI–produced AI systems.
  • Unacceptable Uses:
    1. Human Rights, Civil Liberties, and Safety:
    – Autonomous weapons, predictive policing, social scoring, invasive surveillance, stalking systems.

    2. Misinformation, Influence, and Deception:
    – Electoral manipulation, deepfake generation for deceptive political ads, coordinated disinformation campaigns.

    3. Data Privacy, Consent, and Security:
    – Unconsented biometric categorization, illicit data scraping, unauthorized profiling.

    4. Discrimination and Unfair Outcomes:
    – Use of AI to deny services based on protected characteristics, noncompliant credit scoring, algorithmic refusal of critical healthcare.

    5. Intellectual Property and Ethical Content Generation:
    – Automated content generation that infringes copyrights, plagiarizes, or promotes toxic or offensive speech.

    6. Safety and Misuse Prevention:
    – Systems designed to facilitate violence, create illicit weapons blueprints, generate CSAM.

  • Enforcement & Review: Describes disciplinary measures, automated tooling for policy enforcement (e.g., content filters; a simple filter sketch follows below), and quarterly policy reviews to incorporate new provider policies.
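
As one example of the automated tooling mentioned above, here is a minimal keyword-based screen; the categories and patterns are placeholder assumptions, and a production filter would rely on trained safety classifiers rather than regular expressions.

```python
import re

# Placeholder patterns keyed to two of the policy's unacceptable-use
# categories; real enforcement would use trained safety classifiers.
BLOCKED_PATTERNS = {
    "safety_and_misuse": re.compile(r"\bweapon\s+blueprint", re.IGNORECASE),
    "privacy_and_consent": re.compile(r"\btrack\b.*\bwithout\s+consent\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the unacceptable-use categories a prompt appears to trigger."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

violations = screen_prompt("How can I track my ex without consent?")
if violations:
    print(f"Request blocked under AI Acceptable Use Policy: {violations}")
```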

9. Minor Editorial Refinements

Aside from the additions above, some sections in the June 2025 release were renumbered or restructured for clarity:

  • “Frameworks and Standards” and “US State & Local Legislation – Privacy Laws” were renumbered to accommodate the new “Acceptable Use Policies” section.
  • Policy documents (Risk Management, System Lifecycle, Safety, Privacy, Fairness, Transparency) largely remain the same in scope, with minor editorial updates (e.g., clarifications on “AI Governance Officer” duties, updated reference links). Those internal policy changes do not introduce new legal or regulatory sources.

Next Steps & Adoption Guidance

To fully leverage the enhanced Q2 2025 Policy Suite, organizations should:

1. Review New Frameworks & Laws:
– Assign subject-matter leads (e.g., clinical research, legal compliance, procurement teams) to evaluate how the new healthcare frameworks (e.g., SPIRIT-AI, HAIRA, the AHRQ/NIMHD bias principles) and laws (e.g., the TAKE IT DOWN Act, Nebraska LB 383) affect existing processes or require updates.

2. Incorporate Acceptable Use Policies:
– Update contracts, SLAs, or terms of service to reflect prohibited use cases mandated by leading AI providers. Ensure developers and end users are aware of new restrictions on content generation, data handling, and model deployment.

3. Establish Incident Reporting Workflows:
– Build or enhance incident management systems to capture “near misses” and incidents per the new AI Incident Reporting Policy. Define roles and responsibilities, ensuring timely escalation to compliance, legal, and—if needed—regulatory bodies.

4. Communicate & Train:
– Update internal training materials to include the latest additions (e.g., “Model Facts” labeling, the equity-focused AHRQ/NIMHD principles). Host workshops for AI governance teams to review the new maturity model (HAIRA) and lifecycle frameworks (TPLC, SALIENT).

5. Self-Attest & Certify:
– Once the updates are adopted, organizations may contact Pacific AI ([email protected]) with written confirmation of compliance to receive an updated “AI Governance Badge” reflecting Q2 2025 coverage.