This webinar presents key findings from the 2025 AI Governance Survey, conducted in April–May 2025 by Gradient Flow to assess the priorities, practices, and concerns of professionals and technology leaders working in AI governance. Topics covered:
- Stages of adoption by AI developers and deployers
- Adoption of formal AI Governance policies and roles
- Implementation of processes for AI literacy training and incident response
- Regulatory frameworks that are studied or adopted
- Implementation of best practices and what drives prioritization
- Use of tools such as red teaming, bias mitigation, and model cards
FAQ
Who participated in the “State of AI Governance” webinar and what key topics were discussed?
Pacific AI and Gradient Flow co-hosted this June 18, 2025 webinar featuring Ben Lorica and David Talby. The session reviewed the 2025 AI Governance Survey, covering adoption stages, formal policies and roles, AI‑literacy training, incident response, red‑teaming, bias mitigation, and regulatory frameworks.
What percentage of organizations now have dedicated AI governance roles?
According to the survey, 59% of participating organizations—and 61% of technical-led teams—have established a formal AI governance role or office.
How prevalent is AI safety training across organizations?
Approximately 65% of organizations conduct annual AI‑safety or literacy training. This varies by size: 79% in mid-sized firms, 59% in large organizations, and 41% in smaller ones.
What governance practices are most commonly deployed by organizations?
Most surveyed organizations have begun implementing red‑teaming, bias mitigation, and model documentation processes—indicating increasing maturity across governance activities.
Why is formalizing AI governance seen as critical by these organizations?
Formal governance roles and processes enable responsible AI deployment by providing oversight, building trust, managing risks, and aligning practices with emerging regulations—making it foundational rather than optional.