AI laws and regulations are accelerating worldwide, and the United States, still the principal builder and deployer of advanced AI systems, remains a driving force in shaping regulatory approaches. From deepfake controls and healthcare-specific safeguards to companion-bot rules, no dimension of AI oversight escapes our attention.
Unlike the U.S. and EU, many jurisdictions continue to favor technology-neutral approaches. Australia recently reaffirmed this direction in its National AI Plan, committing to robust legal, regulatory, and ethical frameworks without adopting an EU-style AI Act. Instead, it will strengthen consumer protection, online safety, copyright, healthcare, privacy, and employment laws using best-practice principles to address AI risks more flexibly.
As we track global developments, we see a steady rise in new acts, guidelines, and strategic approaches across regions. Yet amid this diversity, a single, well-structured governance document can help anchor your corporate strategy and give you confidence in navigating emerging challenges.
So, let’s summarize the latest updates in the AI Governance Policy Suite.
Key Updates in the Q4 2025 Release:
1. New ISO/IEC 42005:2025 AI system impact assessment framework
ISO/IEC 42005:2025 is a new international guidance standard for AI System Impact Assessments (AIA). It helps organizations systematically evaluate AI's potential effects on people and society throughout the AI lifecycle, supporting responsible, transparent, and trustworthy AI development and deployment. The standard aligns with regulations such as the EU AI Act and complements ISO/IEC 42001 for AI Management Systems.
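To make the assessment concrete in tooling, here is a minimal sketch of what an AIA record might look like in code. The field names and structure are our own assumptions for illustration; ISO/IEC 42005 describes the assessment process, not a concrete data schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of an AI system impact assessment (AIA) record.
# Field names are assumptions; ISO/IEC 42005:2025 prescribes no schema.
@dataclass
class ImpactAssessment:
    system_name: str
    assessment_date: date
    lifecycle_stage: str                   # e.g. "design", "deployment", "monitoring"
    scope: str                             # what the assessment covers
    affected_groups: list[str]             # people potentially impacted by the system
    identified_impacts: list[str]          # potential effects on individuals and society
    severity: str                          # impact scale, e.g. "low" / "medium" / "high"
    mitigations: list[str] = field(default_factory=list)
    approver: str = ""                     # who signed off on the assessment
    review_due: date | None = None         # trigger for the next reassessment

def reassessment_due(a: ImpactAssessment, today: date) -> bool:
    """A reassessment is due once the scheduled review date has passed."""
    return a.review_due is not None and today >= a.review_due
```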
Below is a control-by-control mapping of ISO 42005 requirements to the Pacific AI Governance Policy Suite. Each entry includes:
- A brief description of the ISO control
- The Pacific AI policy that addresses it
- The specific clause that fulfills the requirement
| ISO 42005 Control Description | Pacific AI Policy | Clause |
|---|---|---|
| Documenting the process | AI Risk Management Policy | §4, 5, 10 |
| Integration with organizational management | AI Risk Management Policy | §4 |
| Timing and triggers for assessment | AI Risk Management Policy | §4, 7 |
| Defining scope | AI Risk Management Policy | §4, 5 |
| Roles and responsibilities | AI Risk Management Policy; AI System Lifecycle Policy | §3, 4; §3 |
| Thresholds and impact scales | AI Risk Management Policy; AI System Lifecycle Policy | §5; §4, 5 |
| Performing the assessment | AI Risk Management Policy | §4, 5 |
| Analyzing results | AI Risk Management Policy; AI System Lifecycle Policy | §5; §4, 5 |
| Recording and reporting | AI Risk Management Policy | §4, 5, 10 |
| Approval and decision process | AI Risk Management Policy | §5 |
| Monitoring and review | AI Risk Management Policy | §6, 7, 9 |
2. Health Care AI Code of Conduct by the National Academy of Medicine
The Health Care AI Code of Conduct is a unified framework for the development and application of AI in health, health care, and biomedical science. The Code defines six high-level obligations for organizations using or building health care AI. These commitments are:
- Advance Humanity — ensure AI aligns with societal and cultural goals for health; promote independent evaluation.
- Ensure Equity — use standardized metrics to assess and report bias in data, outputs, or AI use.
- Engage Impacted Individuals — include all stakeholders (patients, communities, clinicians, developers) in governance, design, and use throughout the AI lifecycle.
- Improve Workforce Well-Being — ensure that the introduction of AI supports staff, invests in training, and maintains positive working conditions.
- Monitor Performance — apply standardized quality and safety metrics to assess AI’s effect on health outcomes.
- Innovate and Learn — support a national health-AI research agenda, encourage shared learning across stakeholders, and build capacity for ongoing improvement.
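For teams that track adherence in internal tooling, one way to encode the six commitments is as a machine-readable review checklist. This is a hypothetical sketch; the Code itself prescribes no data format, and the status-tracking logic is our own addition.

```python
# Hypothetical sketch: the six Code of Conduct commitments encoded as a
# checklist for internal governance reviews. The commitments come from
# the Code; the status-tracking logic is our own illustrative addition.
COMMITMENTS = [
    "Advance Humanity",
    "Ensure Equity",
    "Engage Impacted Individuals",
    "Improve Workforce Well-Being",
    "Monitor Performance",
    "Innovate and Learn",
]

def review_status(evidence: dict[str, list[str]]) -> dict[str, str]:
    """Mark a commitment 'documented' when supporting evidence (policies,
    metrics, audit reports) has been recorded for it, else 'open'."""
    return {c: "documented" if evidence.get(c) else "open" for c in COMMITMENTS}

# Example: only equity metrics have been recorded so far.
print(review_status({"Ensure Equity": ["standardized bias audit, Q4 2025"]}))
```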
3. Adding California Acts
California has once again proven itself at the forefront of responsible AI regulation. On September 29, 2025, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law. The legislation imposes strict transparency and safety requirements on large developers of frontier AI models. It requires major AI developers to publicly disclose their safety protocols and to report safety incidents, creates whistleblower protections, and makes cloud computing resources available to smaller developers and researchers. As noted, the law targets very large models and very large developers; nevertheless, its preamble acknowledges that additional legislation may be needed to regulate foundation models that are developed by smaller companies or that sit behind the frontier, as these may still pose significant catastrophic risk.
California has also become the first state to regulate AI companion chatbots. Companion AI regulation SB 243 targets AI companionship/chatbot services, imposing disclosure, safety, and accountability requirements, especially to protect minors and vulnerable users. If the operator knows the user is a minor, additional safeguards apply: disclosure that the user is interacting with AI; reminder notices every three hours during prolonged sessions; and a warning that the chatbot may be inappropriate or unsuitable for minors. Operators must also adopt "safety protocols" that prevent the generation of content promoting suicide, self-harm, or other harmful material, and that provide ways to refer users to crisis hotlines or help services when needed.
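In engineering terms, these obligations reduce to session-level logic. Below is a minimal sketch of the minor-specific safeguards: the three-hour interval comes from the statute, while the function names, message wording, and the 988 referral line are our own illustrative assumptions.

```python
from datetime import datetime, timedelta

# The three-hour reminder interval comes from SB 243; everything else here
# (names, message wording, the 988 referral) is an illustrative assumption.
REMINDER_INTERVAL = timedelta(hours=3)
CRISIS_REFERRAL = "If you are in crisis, help is available: call or text 988."

def due_notices(is_minor: bool, session_start: bool,
                last_reminder: datetime | None, now: datetime) -> list[str]:
    """Disclosures a companion chatbot owes a known minor at this point."""
    notices: list[str] = []
    if is_minor and session_start:
        notices.append("You are chatting with an AI, not a human.")
        notices.append("This chatbot may be inappropriate or unsuitable for minors.")
    elif is_minor and last_reminder and now - last_reminder >= REMINDER_INTERVAL:
        notices.append("Reminder: you are chatting with an AI, not a human.")
    return notices

def safety_response(flagged_self_harm: bool) -> str | None:
    """Suppress harmful generations and refer the user to crisis resources."""
    return CRISIS_REFERRAL if flagged_self_harm else None
```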
California AB 2885 establishes a unified legal definition of “artificial intelligence” across various California laws. Specifically, it defines AI as “an engineered or machine-based system that varies in its level of autonomy and that can infer from the input it receives how to generate outputs that can influence physical or virtual environments.” AB 2885 is foundational as it harmonizes what AI legally means in California, so that subsequent AI laws and policies refer to the same concept.
4. Colorado AI Act
The Colorado AI Act represents the first comprehensive U.S. law to regulate “high-risk” AI systems and aims to protect consumers from algorithmic discrimination.
The law seeks to prevent discriminatory or unfair outcomes when AI affects major life decisions (jobs, housing, health care, loans, etc.), protecting individuals from algorithmic bias and opaque automated decision-making. By requiring developers and deployers to disclose meaningful information on what the AI does, how it is trained, when it is used, and how decisions are made, the law counters “black-box” AI systems and gives people a chance to understand, challenge, or opt out of automated decisions.
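To make the disclosure duty concrete, here is a hypothetical sketch of the kind of consumer-facing notice a deployer might assemble for an adverse consequential decision; the field names are our assumptions, not statutory text.

```python
from dataclasses import dataclass

# Hypothetical sketch of a consumer disclosure for an adverse decision made
# with a high-risk AI system. The fields mirror the kinds of information the
# Colorado AI Act expects deployers to convey; the names are illustrative.
@dataclass
class AdverseDecisionNotice:
    decision: str                  # e.g. "loan application denied"
    ai_system_purpose: str         # what the AI system does and why it was used
    principal_factors: list[str]   # main reasons behind the decision
    data_sources: list[str]        # categories of data the system relied on
    correction_contact: str        # where to contest or correct inaccurate data
    human_review_available: bool   # whether a human appeal path exists

notice = AdverseDecisionNotice(
    decision="loan application denied",
    ai_system_purpose="credit risk scoring for consumer loans",
    principal_factors=["debt-to-income ratio", "short credit history"],
    data_sources=["credit bureau report", "application form"],
    correction_contact="appeals@lender.example",
    human_review_available=True,
)
```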
The Colorado AI Act offers a model for other states and possibly federal regulation.
5. Adding various acts on deepfakes across the US
We’ve expanded our coverage to include several new U.S. laws regulating deepfakes, such as Arizona HB 2394 and SB 1359, Arkansas Act 977, California AB 2655, the new Tennessee ELVIS Act, and Washington HB 1205. We also added two important healthcare-related deepfake regulations: California AB 489 and Illinois HB 1806. These additions strengthen monitoring of synthetic media risks across both general and healthcare-specific contexts.
6. Contractual Clauses Checklists
We updated the Suite with high-level contracting considerations to support legal practitioners when negotiating agreements with both vendors and customers.
These principles help ensure that AI-related responsibilities, safeguards, and rights are properly addressed.
Key elements addressed in the Suite:
- AI Governance Compliance Responsibilities
- Regulatory & Legal Commitments
- IP Transparency
- AI Risk Management
- Rules governing data use, sharing and deletion
- AI Incident Notification
- Rights to Data & Outputs
- Transparency & Oversight
- AI Acceptable Use Compliance
7. Next Steps & Adoption Guidance
To fully leverage the enhanced Q4 2025 Policy Suite, organizations should:
- Review New Frameworks & Laws:
Assign subject-matter leads (e.g., clinical research, legal compliance, procurement teams) to evaluate how the new U.S. federal, state, and local laws and regulations apply to your organization.
- Review Laws Across Major Jurisdictions:
Create cross-functional oversight for AI laws in target markets.
- Implement Technical and Organizational Measures:
Incorporating the AI Governance Policy Suite is not enough: adoption of the Policy Suite alone does not constitute compliance with any applicable law, regulation, or industry standard. Compliance requires a company to implement, maintain, and continuously monitor operational, technical, and organizational measures.
- Stay Compliant:
Incorporate all recently suggested improvements into your AI Governance Policy Suite.
- Communicate & Train:
Update internal training materials to include the latest additions and host workshops for AI governance teams.
- Self-Attest & Certify:
Once the updates are adopted, organizations may contact Pacific AI at [email protected]. We will guide you through obtaining a written confirmation of compliance and an updated “AI Governance Badge” reflecting Q4 2025 coverage.


