The State of AI Governance

This webinar presents key findings from the 2025 AI Governance Survey, conducted in April and May 2025 by Gradient Flow to assess the priorities, practices, and concerns of professionals and technology leaders in this space. Topics covered:

  • Stages of adoption by AI developers and deployers
  • Adoption of formal AI Governance policies and roles
  • Implementation of processes for AI literacy training and incident response
  • Regulatory frameworks that are studied or adopted
  • Implementation of best practices and what drives prioritization
  • Use of tools such as red teaming, bias mitigation, and model cards

FAQ

Who participated in the “State of AI Governance” webinar and what key topics were discussed?

Pacific AI and Gradient Flow co-hosted this June 18, 2025 webinar featuring Ben Lorica and David Talby. The session reviewed the 2025 AI Governance Survey, covering adoption stages, formal policies and roles, AI‑literacy training, incident response, red‑teaming, bias mitigation, and regulatory frameworks.

What percentage of organizations now have dedicated AI governance roles?

According to the survey, 59% of participating organizations—and 61% of technical-led teams—have established a formal AI governance role or office.

How prevalent is AI safety training across organizations?

Approximately 65% of organizations conduct annual AI‑safety or literacy training. This varies by size: 79% in mid-sized firms, 59% in large organizations, and 41% in smaller ones.

What governance practices are most commonly deployed by organizations?

Most surveyed organizations have begun implementing red‑teaming, bias mitigation, and model documentation processes—indicating increasing maturity across governance activities.

Why is formalizing AI governance seen as critical by these organizations?

Formal governance roles and processes enable responsible AI deployment by providing oversight, building trust, managing risks, and aligning practices with emerging regulations—making it foundational rather than optional.


About the speakers
David Talby
CEO, Pacific AI

David Talby is the CEO of Pacific AI and John Snow Labs, helping healthcare and life science companies put AI to good use. He has extensive experience building and running web-scale software platforms and teams — in startups, for Microsoft's Bing in the US and Europe, and scaling Amazon's financial systems in Seattle and the UK.

David holds a PhD in computer science and master’s degrees in both computer science and business administration.

Ben Lorica
Principal, Gradient Flow

Ben Lorica is the founder of Gradient Flow. He is a highly respected data scientist, having served in leading roles at O'Reilly Media (Chief Data Scientist; Program Chair of the Strata Data Conference, the O'Reilly Artificial Intelligence Conference, and TensorFlow World), at Databricks, and as an advisor to startups.

He serves as co-chair for several leading industry conferences: the AI Conference, the NLP Summit, the Data+AI Summit, Ray Summit, and K1st World. He is the host of the Data Exchange podcast and edits the Gradient Flow newsletter.
