{"id":975,"date":"2025-06-03T16:37:18","date_gmt":"2025-06-03T16:37:18","guid":{"rendered":"https:\/\/pacific.ai\/staging\/3667\/?p=975"},"modified":"2026-03-12T14:14:19","modified_gmt":"2026-03-12T14:14:19","slug":"pacific-ai-governance-policy-suite-q2-2025-release-notes","status":"publish","type":"post","link":"https:\/\/pacific.ai\/staging\/3667\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/","title":{"rendered":"Pacific AI Governance Policy Suite: Q2 2025 Release Notes"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div><p><strong>June 2025<\/strong><\/p>\n<p>In our March 2025 release (\u201c2025-A\u201d), we consolidated a broad set of laws, regulations, frameworks, and standards into a unified policy suite to guide organizations in developing and deploying AI systems responsibly. With our Q2 2025 update (\u201c2025-B\u201d), we have expanded coverage to include newly enacted legislation, additional healthcare-specific guidance, and several operational policies that reflect evolving <a title=\"AI governance implementation\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-governance-implementation\/\">best practices in AI governance<\/a>.<\/p>\n<p>Below, we detail each new source added in the June 2025 release that was not included in March, organized by category.\u00a0These additions reflect our ongoing commitment to transparency, traceability, and compliance\u2014core principles that support effective <a href=\"https:\/\/pacific.ai\/staging\/3667\/what-is-governance-for-generative-ai\/\">generative AI governance<\/a> across evolving datasets and models.<\/p>\n<h2>1. 
Newly Covered Healthcare Guideline Frameworks<\/h2>\n<p><a href=\"https:\/\/www.bmj.com\/content\/385\/bmj-2023-078378\" target=\"_blank\" rel=\"noopener\">TRIPOD-AI &#8211; Updated Guidance for Reporting Clinical Prediction Models<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> TRIPOD-AI builds on the original TRIPOD checklist by extending transparent reporting requirements to all clinical prediction models that use regression or machine learning methods. It ensures authors disclose data sources, model development procedures, and validation results for both technical and clinical audiences. An <a title=\"ai governance audit\" href=\"https:\/\/pacific.ai\/staging\/3667\/what-is-a-responsible-ai-audit\/\">AI governance audit<\/a> can be instrumental in ensuring that clinical prediction models adhere to the transparency requirements set by frameworks like TRIPOD-AI and other healthcare-specific guidelines.<\/p>\n<p><a href=\"https:\/\/www.nature.com\/articles\/s41591-020-1037-7\" target=\"_blank\" rel=\"noopener\">SPIRIT-AI &#8211; Guidelines for AI-Related Clinical Trial Protocols<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> SPIRIT-AI defines minimum protocol items for randomized controlled trials involving AI interventions. 
It specifies requirements for model description, input data, training\/validation details, <a title=\"ai risk management\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-risk-management-audit\/\">risk management<\/a>, and data monitoring to align with existing SPIRIT guidelines.<\/p>\n<p><a href=\"https:\/\/iris.who.int\/bitstream\/handle\/10665\/349093\/9789240038462-eng.pdf\" target=\"_blank\" rel=\"noopener\">WHO &#8211; Generating Evidence for AI-Based Medical Devices<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> This WHO framework provides a rigorous pathway for generating evidence on AI\/ML-enabled medical devices, guiding stakeholders through training data curation, validation study design, and post-market evaluation to satisfy regulatory and clinical requirements.<\/p>\n<p><a href=\"https:\/\/www.medrxiv.org\/content\/10.1101\/2024.12.30.24319785v1\" target=\"_blank\" rel=\"noopener\">HAIRA &#8211; Advancing Healthcare AI Governance: A Comprehensive Maturity Model<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> HAIRA defines a staged maturity model for AI governance in healthcare organizations, assessing domains such as policy alignment, transparency, stakeholder engagement, and risk management. By offering a self-assessment tool, HAIRA helps institutions benchmark and improve their AI governance practices over time.<\/p>\n<p><a href=\"https:\/\/www.fda.gov\/media\/182871\/download\" target=\"_blank\" rel=\"noopener\">TPLC &#8211; Total Product Lifecycle Framework for Healthcare AI\/ML<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> TPLC adapts the FDA\u2019s Total Product Lifecycle concept to AI\/ML-enabled healthcare solutions. 
It emphasizes continuous monitoring, real-world performance tracking, and iterative updates across pre-market, deployment, and post-market phases, ensuring AI systems remain safe, effective, and equitable.<\/p>\n<p><a href=\"https:\/\/ai.nejm.org\/doi\/abs\/10.1056\/AIcs2300269\" target=\"_blank\" rel=\"noopener\">OPTICA &#8211; Organizational Perspective Checklist for AI Solutions Adoption<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> OPTICA is a practical checklist designed for health systems evaluating AI solutions. It covers organizational readiness (IT infrastructure, workflow integration), clinician training, <a title=\"2025 AI Governance Survey\" href=\"https:\/\/pacific.ai\/staging\/3667\/2025-ai-governance-survey-reveals-critical-gaps-between-ai-ambition-and-operational-readiness\/\">data<\/a> governance, and outcome measurement, helping healthcare organizations systematically assess non-technical factors critical to successful AI deployment.<\/p>\n<p><a href=\"https:\/\/academic.oup.com\/jamia\/article\/30\/9\/1503\/7174318\" target=\"_blank\" rel=\"noopener\">SALIENT &#8211; End-to-End Clinical AI Implementation Framework<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> SALIENT (Systematic Approaches to Learning, <a title=\"ai governance implementation\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-governance-implementation\/\">Implementation<\/a>, Evaluation, and Translation) offers guidance for every stage of clinical AI, from early feasibility studies to large-scale rollouts. 
It emphasizes iterative testing, stakeholder feedback loops, and ongoing governance to ensure AI tools deliver clinical value and remain aligned with patient safety.<\/p>\n<p><a href=\"https:\/\/jamanetwork.com\/journals\/jamanetworkopen\/fullarticle\/2812958\" target=\"_blank\" rel=\"noopener\">AHRQ &amp; AIMHD Guiding Principles to Address Algorithm Bias<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> Developed collaboratively by the Agency for Healthcare Research and Quality (AHRQ) and the Alliance for Integrated Monitoring of Healthcare Disparities (AIMHD), these principles offer concrete steps to identify, measure, and mitigate bias in AI-driven risk prediction and decision support tools, with particular attention to historically marginalized populations.<\/p>\n<p><a href=\"https:\/\/dihi.org\/model-facts-v2-label-for-hti-1-compliance\/\" target=\"_blank\" rel=\"noopener\">\u2018Model Facts\u2019 Label for HTI-1 Compliance by Duke Institute for Health Innovation<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> The \u201cModel Facts\u201d label is a standardized one-page document that summarizes key information about AI models\u2014including intended use, population characteristics, performance metrics, and known limitations\u2014to satisfy HHS HTI-1 algorithm transparency requirements and facilitate clinician and patient trust.<\/p>\n<h2>2. New US National Legislation<\/h2>\n<p><a href=\"https:\/\/www.congress.gov\/bill\/119th-congress\/senate-bill\/146\" target=\"_blank\" rel=\"noopener\">The Take it Down Act (S.146)<\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> This federal law requires technology platforms to promptly remove certain forms of nonconsensual intimate images and content categorized as \u201crevenge porn.\u201d It mandates takedown procedures, notice to affected individuals, and periodic reporting to Congress on compliance. 
Although not AI-specific, its removal obligations intersect with AI tools that generate, moderate, or distribute visual content.<\/p>\n<h2>3. New US Federal Regulation<\/h2>\n<p><a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2025\/02\/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf\" target=\"_blank\" rel=\"noopener\">Memorandum on Accelerating Federal Use of AI <\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> This document, issued by the White House, directs all Executive Branch agencies to (a) inventory their existing AI initiatives, (b) establish AI innovation hubs, and (c) adopt common federal AI governance standards\u2014emphasizing transparency, equity, and security. It also outlines funding mechanisms for AI R&amp;D within government.<\/p>\n<p><a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2025\/02\/M-25-22-Driving-Efficient-Acquisition-of-Artificial-Intelligence-in-Government.pdf\" target=\"_blank\" rel=\"noopener\">Memorandum on Driving Efficient Acquisition of AI in Government <\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> This memorandum provides agencies with procurement guidance to streamline purchasing of AI\/ML products and services. It encourages the use of shared services, modular licensing, and pre-negotiated contract vehicles to avoid vendor lock-in and ensure continuous security and compliance monitoring.<\/p>\n<h2>4. 
Newly Covered Acceptable Use Policies by Major Providers<\/h2>\n<p><a href=\"https:\/\/openai.com\/policies\/usage-policies\/\" target=\"_blank\" rel=\"noopener\">OpenAI Usage Policy<\/a><\/p>\n<p>Defines prohibited content (e.g., hate speech, illicit behavior, disallowed political campaigning), outlines user obligations around data retention and privacy, and details mechanisms for reporting misuse.<\/p>\n<p><a href=\"https:\/\/www.anthropic.com\/legal\/aup\" target=\"_blank\" rel=\"noopener\">Anthropic Usage Policy<\/a><\/p>\n<p>Specifies categories of disallowed content (e.g., personal data extraction, weaponization, extreme political persuasion), along with requirements for developers around usage monitoring and red teaming.<\/p>\n<p><a href=\"https:\/\/learn.microsoft.com\/en-us\/legal\/ai-code-of-conduct\" target=\"_blank\" rel=\"noopener\">Microsoft Enterprise AI Services Code of Conduct<\/a><\/p>\n<p>Outlines acceptable uses of Azure OpenAI Service, including prohibitions on illegal, infringing, or malicious applications; mandates adherence to Microsoft\u2019s Responsible AI Standard for fairness, reliability, safety, and privacy.<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/ai\/responsible-ai\/policy\/\" target=\"_blank\" rel=\"noopener\">AWS Responsible AI Policy<\/a><\/p>\n<p>Details AWS\u2019s terms for AI\/ML services (e.g., SageMaker), including data classification requirements, customer obligations to secure training data, and a list of disallowed practices (deepfakes, malware generation, surveillance use cases).<\/p>\n<p><a href=\"https:\/\/policies.google.com\/terms\/generative-ai\/use-policy\" target=\"_blank\" rel=\"noopener\">Google Generative AI Prohibited Use Policy<\/a><\/p>\n<p>Enumerates forbidden use cases for Google\u2019s Vertex AI and Generative AI Studio (e.g., impersonation, disallowed sexual content, targeted political microtargeting), and mandates that developers apply Google\u2019s AI Principles around fairness, privacy, and 
accountability.<\/p>\n<p><a href=\"https:\/\/ai.meta.com\/resources\/models-and-libraries\/seamless-use-policy\/\" target=\"_blank\" rel=\"noopener\">Meta Seamless Acceptable Use Policy<\/a><\/p>\n<p>Lays out Meta\u2019s rules for using its Seamless family of AI models, including prohibitions on violence, child sexual exploitation content, harassment, hate speech, and disallowed data collection practices.<\/p>\n<p><a href=\"https:\/\/docs.cohere.com\/v2\/docs\/cohere-labs-acceptable-use-policy\" target=\"_blank\" rel=\"noopener\">Cohere Labs Acceptable Use Policy<\/a><\/p>\n<p>Defines Cohere\u2019s restrictions on content generation (e.g., plagiarism, hate, disinformation, defamation), requirements for <a title=\"Generative AI Data Privacy\" href=\"https:\/\/pacific.ai\/staging\/3667\/generative-ai-data-privacy-issues-challenges\/\">data privacy<\/a>, and guidelines for content moderation in user applications.<\/p>\n<h2>5. Updates to US State &amp; Local Legislation \u2013 Deepfake Laws<\/h2>\n<p><a href=\"https:\/\/nebraskalegislature.gov\/bills\/view_bill.php?DocumentID=59569\" target=\"_blank\" rel=\"noopener\">Nebraska LB 383 \u2013 Prohibition of Generated Child Sexual Abuse Material <\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> Nebraska\u2019s new statute explicitly bans the creation, distribution, or possession of AI-generated child sexual abuse material (CSAM), aligning with federal legislation and other states\u2019 deepfake-related statutes. It imposes felony penalties and requires ISPs to remove flagged content within 48 hours.<\/p>\n<h2>6. Changes in US State &amp; Local Legislation \u2013 Privacy Laws<\/h2>\n<p>The June 2025 update harmonizes several privacy laws, but no net new statutes have been added beyond those included in March 2025. 
The key difference is that our privacy category\u2019s numbering and presentation have been updated; the same 16 state-level consumer privacy and AI-specific statutes (e.g., Utah AI Policy Act, Colorado SB 22-113) remain in scope.<\/p>\n<h2>7. Newly Covered International Standards<\/h2>\n<p><a href=\"https:\/\/www.oecd.org\/en\/publications\/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html\" target=\"_blank\" rel=\"noopener\">OECD Framework for the Classification of AI Systems <\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> The OECD published a standardized taxonomy for AI systems, classifying them by modality (e.g., perception, reasoning, interaction), domain, and risk level\u2014supporting cross-border regulatory alignment and data sharing for incident reporting.<\/p>\n<p><a href=\"https:\/\/www.oecd.org\/en\/publications\/towards-a-common-reporting-framework-for-ai-incidents_f326d4ac-en.html\" target=\"_blank\" rel=\"noopener\">OECD Common Reporting Framework for AI Incidents <\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> This framework establishes a minimum dataset and ontology for reporting AI-related incidents (e.g., bias events, security breaches, safety failures) to national authorities, helping policymakers track trends and coordinate response.<\/p>\n<p><a href=\"https:\/\/cset.georgetown.edu\/wp-content\/uploads\/CSET-AI-Incidents.pdf\" target=\"_blank\" rel=\"noopener\">CSET\u2019s AI Incidents Key Components for a Mandatory Reporting Regime <\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> The Center for Security and Emerging Technology (CSET) proposed a set of \u201ckey components\u201d (e.g., incident taxonomy, timelines, organizational context) that could underpin a legally mandated AI incident reporting regime\u2014informing U.S. 
congressional discussions.<\/p>\n<p><a href=\"https:\/\/www.unesco.org\/ethics-ai\/en\/eia\" target=\"_blank\" rel=\"noopener\">UNESCO\u2019s Ethical Impact Assessment <\/a><\/p>\n<p><strong>Added in June 2025:<\/strong> UNESCO released an Ethical Impact Assessment (EIA) toolkit that guides organizations through a structured evaluation of social, cultural, and environmental impacts of AI deployments. It complements existing ethical AI principles by providing actionable steps for risk identification and stakeholder engagement.<\/p>\n<h2>8. Summary of Operational Policy Additions<\/h2>\n<p>In addition to the expanded \u201cCovered Laws, Regulations, Frameworks &amp; Standards,\u201d the June 2025 release introduces two brand-new policies. These are standalone documents in the suite (with dedicated Table of Contents entries) to address emergent needs in incident management and acceptable use:<\/p>\n<h3>New: AI Incident Reporting Policy<\/h3>\n<ul>\n<li>Purpose: Defines internal and external AI incident reporting requirements.<\/li>\n<li>Scope: Covers all \u201cincidents\u201d and \u201cnear misses\u201d (e.g., bias events, safety failures, privacy breaches, misuse).<\/li>\n<li>Internal Reporting: Mandates that every team member report AI incidents to a centralized Incident Management System within 24 hours.<\/li>\n<li>External Reporting: Identifies when incidents must be escalated to regulators or affected stakeholders (e.g., HHS, FTC, state attorneys general).<\/li>\n<li>Incident Classification: Introduces a tiered classification scheme (e.g., \u201cLevel 1: negligible,\u201d \u201cLevel 2: moderate,\u201d \u201cLevel 3: severe or potential serious harm\u201d).<\/li>\n<li>Key Additions: Reporting near misses proactively, aligning with OECD\u2019s Common Reporting Framework, and incorporating CSET\u2019s incident key components.<\/li>\n<\/ul>\n<h3>New: AI Acceptable Use Policy<\/h3>\n<ul>\n<li>Purpose: Articulates conditions under which AI systems may be used or 
deployed.<\/li>\n<li>Scope: Applies to all employees, contractors, business partners, and customers who access or license Pacific AI\u2013produced AI systems.<\/li>\n<li>Unacceptable Uses:<\/li>\n<li><strong>1. Human Rights, Civil Liberties, and Safety:<\/strong><br \/>&#8211; Autonomous weapons, predictive policing, social scoring, invasive surveillance, stalking systems.<\/li>\n<li><strong>2. Misinformation, Influence, and Deception:<\/strong><br \/>&#8211; Electoral manipulation, deepfake generation for deceptive political ads, coordinated disinformation campaigns.<\/li>\n<li><strong>3. Data Privacy, Consent, and Security:<\/strong><br \/>&#8211; Unconsented biometric categorization, illicit data scraping, unauthorized profiling.<\/li>\n<li><strong>4. Discrimination and Unfair Outcomes:<\/strong><br \/>&#8211; Use of AI to deny services based on protected characteristics, noncompliant credit scoring, algorithmic refusal of critical healthcare.<\/li>\n<li><strong>5. Intellectual Property and Ethical Content Generation:<\/strong><br \/>&#8211; Automated content generation that infringes copyrights, plagiarizes, or promotes toxic or offensive speech.<\/li>\n<li><strong>6. Safety and Misuse Prevention:<\/strong><br \/>&#8211; Systems designed to facilitate violence, create illicit weapons blueprints, generate CSAM.<\/li>\n<li>Enforcement &amp; Review: Describes disciplinary measures, automated tooling for policy enforcement (e.g., content filters), and quarterly policy reviews to incorporate new provider policies.<\/li>\n<\/ul>\n<h2>9. 
Minor Editorial Refinements<\/h2>\n<p>Aside from the additions above, some sections in the June 2025 release were renumbered or restructured for clarity:<\/p>\n<ul>\n<li>\u201cFrameworks and Standards\u201d and \u201cUS State &amp; Local Legislation \u2013 Privacy Laws\u201d were renumbered to accommodate the new \u201cAcceptable Use Policies\u201d section.<\/li>\n<li>Policy documents (Risk Management, System Lifecycle, Safety, Privacy, Fairness, Transparency) largely remain the same in scope, with minor editorial updates (e.g., clarifications on \u201cAI Governance Officer\u201d duties, updated reference links). Those internal policy changes do not introduce new legal or regulatory sources.<\/li>\n<\/ul>\n<h2>Next Steps &amp; Adoption Guidance<\/h2>\n<p>To fully leverage the enhanced Q2 2025 Policy Suite, organizations should:<\/p>\n<p><strong>1. Review New Frameworks &amp; Laws:<\/strong><br \/>&#8211; Assign subject-matter leads (e.g., clinical research, legal compliance, procurement teams) to evaluate how the new healthcare frameworks (e.g., SPIRIT-AI, HAIRA, AHRQ\/AIMHD bias principles) and laws (e.g., Take it Down Act, Nebraska LB 383) affect existing processes or require updates.<\/p>\n<p><strong>2. Incorporate Acceptable Use Policies:<\/strong><br \/>&#8211; Update contracts, SLAs, or terms of service to reflect prohibited use cases mandated by leading AI providers. Ensure developers and end users are aware of new restrictions on content generation, data handling, and model deployment.<\/p>\n<p><strong>3. Establish Incident Reporting Workflows:<\/strong><br \/>&#8211; Build or enhance incident management systems to capture \u201cnear misses\u201d and incidents per the new AI Incident Reporting Policy. Define roles and responsibilities, ensuring timely escalation to compliance, legal, and\u2014if needed\u2014regulatory bodies.<\/p>\n<p><strong>4. 
Communicate &amp; Train:<\/strong><br \/>&#8211; Update internal training materials to include the latest additions (e.g., \u201cModel Facts\u201d labeling, equity-focused AHRQ\/AIMHD principles). Host workshops for AI governance teams to review the new maturity model (HAIRA) and lifecycle frameworks (TPLC, SALIENT).<\/p>\n<p><strong>5. Self-Attest &amp; Certify:<\/strong><br \/>&#8211; Once the updates are adopted, organizations may contact Pacific AI (<a href=\"mailto:info@pacific.ai\">info@pacific.ai<\/a>) with written confirmation of compliance to receive an updated \u201cAI Governance Badge\u201d reflecting Q2 2025 coverage.<\/p>\n<h2>FAQ<\/h2>\n<p><strong>What new healthcare frameworks does the Q2 2025 update include?<\/strong><\/p>\n<p>It adds several leading frameworks: TRIPOD\u2011AI, SPIRIT\u2011AI, WHO\u2019s evidence guidelines for AI-based devices, HAIRA, TPLC, OPTICA, SALIENT, AHRQ &amp; AIMHD bias principles, and a \u201cModel Facts\u201d label for HTI\u20111 transparency.<\/p>\n<p><strong>Which new U.S. 
legislation and regulations are covered in this release?<\/strong><\/p>\n<p>The suite now covers the Take it Down Act (S.146) on nonconsensual intimate images and two White House memos\u2014on accelerating federal AI use and efficient AI procurement\u2014added in the June 2025 release.<\/p>\n<p><strong>What provider acceptable use policies were newly included?<\/strong><\/p>\n<p>It adds seven provider policies: OpenAI, Anthropic, Microsoft Enterprise AI, AWS Responsible AI, Google Generative AI, Meta Seamless, and Cohere Labs acceptable-use guidelines.<\/p>\n<p><strong>What operational policies were introduced in this release?<\/strong><\/p>\n<p>Two new policies were launched: an AI Incident Reporting Policy (covering incident types, internal\/external reporting timelines) and a standalone AI Acceptable Use Policy covering prohibited uses and enforcement.<\/p>\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What new healthcare frameworks does the Q2 2025 update include?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"It adds several leading frameworks: TRIPOD-AI, SPIRIT-AI, WHO\u2019s evidence guidelines for AI-based devices, HAIRA, TPLC, OPTICA, SALIENT, AHRQ & AIMHD bias principles, and a \u201cModel Facts\u201d label for HTI-1 transparency.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Which new U.S. 
legislation and regulations are covered in this release?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"The suite now covers the Take it Down Act (S.146) on nonconsensual intimate images and two White House memos\u2014on accelerating federal AI use and efficient AI procurement\u2014added in the June 2025 release.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What provider acceptable use policies were newly included?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"It adds seven provider policies: OpenAI, Anthropic, Microsoft Enterprise AI, AWS Responsible AI, Google Generative AI, Meta Seamless, and Cohere Labs acceptable-use guidelines.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What operational policies were introduced in this release?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Two new policies were launched: an AI Incident Reporting Policy (covering incident types, internal\/external reporting timelines) and a standalone AI Acceptable Use Policy covering prohibited uses and enforcement.\"\n      }\n    }\n  ]\n}\n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>June 2025 In our March 2025 release (\u201c2025-A\u201d), we consolidated a broad set of laws, regulations, frameworks, and standards into a unified policy suite to guide organizations in developing and deploying AI systems responsibly. 
With our Q2 2025 update (\u201c2025-B\u201d), we have expanded coverage to include newly enacted legislation, additional healthcare-specific guidance, and several operational [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":981,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"nf_dc_page":"","content-type":"","inline_featured_image":false,"footnotes":""},"categories":[118],"tags":[],"class_list":["post-975","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-articles"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Pacific AI Governance Policy Suite: Q2 2025 Release Notes - Pacific AI<\/title>\n<meta name=\"description\" content=\"Pacific AI\u2019s Q2 2025 Governance Policy Suite introduces updates featuring new tools and refinements for responsible, transparent, and compliant AI deployment\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Pacific AI Governance Policy Suite: Q2 2025 Release Notes - Pacific AI\" \/>\n<meta property=\"og:description\" content=\"Pacific AI\u2019s Q2 2025 Governance Policy Suite introduces updates featuring new tools and refinements for responsible, transparent, and compliant AI deployment\" \/>\n<meta property=\"og:url\" content=\"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/\" \/>\n<meta property=\"og:site_name\" content=\"Pacific AI\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-06-03T16:37:18+00:00\" \/>\n<meta 
property=\"article:modified_time\" content=\"2026-03-12T14:14:19+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/06\/policy.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"550\" \/>\n\t<meta property=\"og:image:height\" content=\"440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"David Talby\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"David Talby\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/\"},\"author\":{\"name\":\"David Talby\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\"},\"headline\":\"Pacific AI Governance Policy Suite: Q2 2025 Release 
Notes\",\"datePublished\":\"2025-06-03T16:37:18+00:00\",\"dateModified\":\"2026-03-12T14:14:19+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/\"},\"wordCount\":2120,\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/policy.webp\",\"articleSection\":[\"Articles\"],\"inLanguage\":\"en\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/\",\"name\":\"Pacific AI Governance Policy Suite: Q2 2025 Release Notes - Pacific AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/policy.webp\",\"datePublished\":\"2025-06-03T16:37:18+00:00\",\"dateModified\":\"2026-03-12T14:14:19+00:00\",\"description\":\"Pacific AI\u2019s Q2 2025 Governance Policy Suite introduces updates featuring new tools and refinements for responsible, transparent, and compliant AI 
deployment\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/#primaryimage\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/policy.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/policy.webp\",\"width\":550,\"height\":440,\"caption\":\"Pacific AI Governance Policy Suite Q2 2025 release notes visual showing compliant AI policies, regulatory checks, and healthcare AI governance icons representing responsible AI compliance and oversight.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/pacific-ai-governance-policy-suite-q2-2025-release-notes\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/pacific.ai\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Pacific AI Governance Policy Suite: Q2 2025 Release Notes\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"name\":\"Pacific 
AI\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\",\"name\":\"Pacific AI\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"width\":182,\"height\":41,\"caption\":\"Pacific AI\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/Pacific-AI\\\/61566807347567\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/pacific-ai\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\",\"name\":\"David Talby\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"caption\":\"David Talby\"},\"description\":\"David Talby is a CTO at 
Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/davidtalby\\\/\"],\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/author\\\/david\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Pacific AI Governance Policy Suite: Q2 2025 Release Notes - Pacific AI","description":"Pacific AI\u2019s Q2 2025 Governance Policy Suite introduces updates featuring new tools and refinements for responsible, transparent, and compliant AI deployment","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"Pacific AI Governance Policy Suite: Q2 2025 Release Notes - Pacific AI","og_description":"Pacific AI\u2019s Q2 2025 Governance Policy Suite introduces updates featuring new tools and refinements for responsible, transparent, and compliant AI deployment","og_url":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/","og_site_name":"Pacific AI","article_publisher":"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","article_published_time":"2025-06-03T16:37:18+00:00","article_modified_time":"2026-03-12T14:14:19+00:00","og_image":[{"width":550,"height":440,"url":"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/06\/policy.webp","type":"image\/webp"}],"author":"David 
Talby","twitter_card":"summary_large_image","twitter_misc":{"Written by":"David Talby","Est. reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/#article","isPartOf":{"@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/"},"author":{"name":"David Talby","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186"},"headline":"Pacific AI Governance Policy Suite: Q2 2025 Release Notes","datePublished":"2025-06-03T16:37:18+00:00","dateModified":"2026-03-12T14:14:19+00:00","mainEntityOfPage":{"@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/"},"wordCount":2120,"publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"image":{"@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/policy.webp","articleSection":["Articles"],"inLanguage":"en"},{"@type":"WebPage","@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/","url":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/","name":"Pacific AI Governance Policy Suite: Q2 2025 Release Notes - Pacific AI","isPartOf":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#website"},"primaryImageOfPage":{"@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/#primaryimage"},"image":{"@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/policy.webp","datePublished":"2025-06-03T16:37:18+00:00","dateModified":"2026-03-12T14:14:19+00:00","description":"Pacific AI\u2019s Q2 2025 Governance Policy Suite introduces updates featuring 
new tools and refinements for responsible, transparent, and compliant AI deployment","breadcrumb":{"@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/"]}]},{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/#primaryimage","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/policy.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/policy.webp","width":550,"height":440,"caption":"Pacific AI Governance Policy Suite Q2 2025 release notes visual showing compliant AI policies, regulatory checks, and healthcare AI governance icons representing responsible AI compliance and oversight."},{"@type":"BreadcrumbList","@id":"https:\/\/pacific.ai\/pacific-ai-governance-policy-suite-q2-2025-release-notes\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/pacific.ai\/"},{"@type":"ListItem","position":2,"name":"Pacific AI Governance Policy Suite: Q2 2025 Release Notes"}]},{"@type":"WebSite","@id":"https:\/\/pacific.ai\/staging\/3667\/#website","url":"https:\/\/pacific.ai\/staging\/3667\/","name":"Pacific AI","description":"","publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/pacific.ai\/staging\/3667\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Organization","@id":"https:\/\/pacific.ai\/staging\/3667\/#organization","name":"Pacific 
AI","url":"https:\/\/pacific.ai\/staging\/3667\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","width":182,"height":41,"caption":"Pacific AI"},"image":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","https:\/\/www.linkedin.com\/company\/pacific-ai\/"]},{"@type":"Person","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186","name":"David Talby","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","caption":"David Talby"},"description":"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. 
David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.","sameAs":["https:\/\/www.linkedin.com\/in\/davidtalby\/"],"url":"https:\/\/pacific.ai\/staging\/3667\/author\/david\/"}]}},"_links":{"self":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/975","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/comments?post=975"}],"version-history":[{"count":17,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/975\/revisions"}],"predecessor-version":[{"id":2283,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/975\/revisions\/2283"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media\/981"}],"wp:attachment":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media?parent=975"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/categories?post=975"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/tags?post=975"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}