2025 AI Governance Survey

    As artificial intelligence systems become increasingly integral to business operations and societal functions, establishing effective AI governance is paramount for managing risks, ensuring ethical deployment, and fostering responsible innovation. The 2025 AI Governance Survey was conducted online for 105 days, spanning from February 14 to May 29, 2025. This initiative gathered responses from 351 participants, recruited via the Gradient Flow Newsletter, Pacific AI Newsletter, collaborations with industry partners, and online advertising, to provide a comprehensive overview of current AI governance practices and challenges within organizations.

    Key findings

    • Production Reality Gap. Only 30% of organizations have deployed generative AI systems to production, with just 13% managing multiple deployments. Large enterprises are five times more likely than small firms to have multiple systems running (19% vs 4%).
    • Speed-to-Market Pressure. Pressure to ship quickly is the top governance barrier for 45% of all respondents, rising to 56% among Technical Leaders who face the most direct delivery demands.
    • Monitoring Blind Spots. Fewer than half (48%) of organizations monitor their production AI systems for accuracy, drift, and misuse. This critical practice plummets to just 9% among small companies.
    • Policy-Practice Disconnect. While 75% of organizations have AI usage policies, only 54% maintain incident response playbooks and 59% have dedicated governance roles revealing significant operational readiness gaps.
    • Technical Leader Ambition. Technical Leaders drive more aggressive adoption, with 48% targeting 3-5 new use cases versus 25% for other roles. They also show higher adoption of hybrid build-and-deploy strategies (45% vs 35% overall).
    • Small Company Vulnerability. Small companies consistently lag in governance maturity: only 36% have governance officers (vs 62-64% for larger firms), and just 41% provide annual AI training (vs 59-79%).
    • Regulatory Awareness Deficit. Familiarity with frameworks like NIST AI RMF remains concentrated in large enterprises. Small companies report 14% familiarity with most major standards, exposing compliance risk.
    • Conservative Deployment Outlook. 42% of organizations plan just one or two new generative AI use cases in the next year, with 73% of small firms staying in this cautious range.
    • Hybrid Strategy Dominance. 35% of companies both develop and deploy AI solutions, confirming successful teams combine build and buy approaches rather than choosing exclusively.
    • Incident Response Gaps. Many organizations lack protocols for AI-specific failure modes such as prompt injection attacks or biased outputs, indicating immature response capabilities beyond traditional IT playbooks.

    Survey Demographics and Segments

    The survey captured insights from a broad spectrum of professionals. A significant 43% of participants hold leadership positions – designated Technical Leaders (including VPs, CxOs, Directors, and Engineering Managers) – and their responses will be given particular attention throughout this report. The vast majority of respondents (91%) are from organizations with operations in the United States. Companies represented are categorized by size:

    • Large: 26% of respondents (organizations with over 5,000 employees)
    • Medium: 41% (organizations with 501 to 5,000 employees)
    • Small: 32% (organizations with 500 or fewer employees)

    The most common industries among participants were Computers, Electronics, and Technology (17%), Healthcare (14%), and Education (11%).

    Stage of Adoption

    Technical leaders demonstrate greater progress toward implementation than their colleagues in other roles. Nearly a quarter (23%) of technical leaders report having their first AI solution in production, compared to just 11% among non-technical roles. This gap narrows at the evaluation and experimentation stages, where both groups show similar engagement levels. Company size correlates strongly with adoption maturity: large enterprises are nearly five times more likely than small companies to have multiple AI solutions in production (19% versus 4%).

    The data suggests organizations are approaching generative AI with deliberate caution rather than rushing to deploy. The concentration of activity in experimental and evaluation phases – accounting for 61% of all respondents – indicates companies are taking time to identify appropriate applications and build capabilities before committing to production systems. Technical leaders’ higher implementation rates likely reflect their deeper understanding of the technology’s capabilities and limitations, positioning them to move more confidently from pilot to production.

    Given the survey’s emphasis on AI governance and generative AI adoption, subsequent analysis excludes respondents from organizations that report no active interest in generative AI solutions, concentrating instead on 316 respondents whose firms are engaged in assessment, implementation, or deployment phases.

    AI Adoption Landscape: Who’s Building and Who’s Using

    A plurality of organizations (35%) engage in both AI development and deployment, while 31% identify solely as AI deployers and 21% as AI developers. Notably, 13% of all respondents were unsure of their organization’s primary AI role. Technical leaders exhibited a stronger inclination towards a dual role, with 45% identifying as both developers and deployers, and a markedly lower proportion (4%) expressing uncertainty.

    The tendency to engage in both development and deployment was consistent across different company sizes, with approximately 36% of large, medium, and small organizations selecting this option. Identification as purely a developer or deployer also showed minimal variation by company size, although medium-sized companies (10%) reported less uncertainty regarding their AI classification compared to large (16%) and small (17%) firms.

    These findings suggest a significant overlap between AI development and deployment activities within contemporary organizations, especially those guided by clear technical leadership. This dual engagement likely reflects a strategy where companies build proprietary AI capabilities – mainly through post-training – while also integrating external or pre-existing AI solutions to address diverse operational requirements, rather than confining themselves to a single function in the AI value chain.

    Organizations appear measured in their generative AI deployment plans for the coming year. Among all respondents, 42% intend to implement just one or two use cases over the next 12 months, while 35% plan three to five deployments. Technical leaders show greater ambition than their counterparts, with 48% targeting three to five use cases compared to just 25% among other roles, and 22% planning more than five implementations versus 15% for non-technical leaders. Company size correlates strongly with deployment ambitions: 73% of small companies plan only one or two use cases, while 27% of large enterprises expect to launch more than five.

    This cautious approach likely reflects both practical constraints and strategic prudence. Technical leaders’ more aggressive targets may stem from their deeper understanding of AI capabilities, though this could also signal overconfidence relative to organizational readiness. The conservative stance among smaller firms suggests resource limitations and risk aversion are tempering deployment plans, while larger enterprises appear better positioned to pursue multiple concurrent initiatives. Overall, the data indicates most organizations are taking a deliberate, experimental approach rather than rushing toward wholesale AI transformation.

    Building Blocks: Policies and Leadership Structure

    A substantial majority of organizations, 75% of all respondents, report having established AI policies that delineate permissible and impermissible uses of the technology. This proportion is virtually unchanged among technical leaders, 74% of whom confirm the existence of such guidelines within their companies, indicating a widespread baseline for AI governance. The presence of AI policies correlates with organizational scale: 81% of medium-sized companies and 77% of large companies have them, while this figure drops to 55% for small companies.

    This disparity suggests that while the need for formal AI guidelines is broadly recognized, smaller entities may be slower to implement them, potentially due to resource constraints or a perception of lower immediate risk. The overall high adoption rate, however, points to a growing organizational imperative to manage AI activities through defined rules.

    The survey found that 59% of participating organizations have established a role or office tasked with AI governance. Among technical leaders, this figure rises slightly to 61%, suggesting a modest increase in the prevalence of such roles in organizations where technical leadership is prominent. The presence of a formal AI governance structure is strongly correlated with company size. While 64% of medium-sized and 62% of large companies have such roles or offices, only 36% of small companies do. This indicates that as organizations scale, the likelihood of establishing dedicated AI governance personnel or departments increases significantly.

    These findings suggest that the formalization of AI governance through dedicated roles is becoming a more common practice, particularly within larger enterprises. While these figures indicate a growing adoption, the binary nature of the question does not reveal the depth of establishment or resourcing for these governance roles or offices. This trend likely reflects the greater complexity of AI initiatives, increased regulatory scrutiny, and the capacity to allocate resources to specialized oversight functions in bigger firms, whereas smaller organizations may adopt such structures more gradually.

    AI Training and Regulatory Awareness

    Nearly two-thirds (65%) of organizations report conducting annual training for employees on the safe development and use of AI systems. This figure is slightly lower among technical leaders, with 62% confirming such training programs. The provision of annual AI training varies by company size: 79% of medium-sized companies offer it, compared to 59% of large companies and 41% of small companies. These results suggest a growing awareness of the need for employee education in AI safety and responsible practices.

    However, the binary nature of the question means these figures indicate the presence of annual training, but do not detail its scope, quality, or the resources allocated. The differences observed across company sizes may reflect varying levels of resources, regulatory pressures, or the perceived immediacy of needing formal AI training programs.

    Familiarity with key AI-related regulations and industry standards within development teams appears to be limited. Among all respondents, the NIST AI Risk Management Framework (cited by 30%) and state-level Consumer Privacy Acts (29%) were the most frequently acknowledged areas of education. Technical leaders generally reported higher levels of awareness; for instance, 40% indicated education on the NIST AI Risk Management Framework and 31% on ISO standards such as ISO 42001 or ISO 23894. Conversely, education regarding specific deepfake legislation was the least prevalent, noted by only 17% of both all respondents and technical leaders. Regarding company size, smaller organizations generally reported lower levels of education on these topics, though they showed the highest awareness of deepfake laws (26%).

    Medium-sized and large companies typically indicated greater, often comparable, rates of education, particularly for established frameworks like NIST and major regulations such as the EU AI Act for larger enterprises. These figures suggest that while certain foundational frameworks are gaining some traction, comprehensive education on the evolving landscape of AI governance is still in its early stages. The greater reported awareness among technical leaders may reflect their direct responsibilities in implementing compliant systems, while variations by company size could indicate differences in resources allocated to training or the perceived immediacy of regulatory pressures.

    Over two-thirds (68%) of organizations report having a process for staying informed about evolving AI regulations and standards. Interestingly, a slightly smaller proportion of technical leaders (62%) confirmed such processes, compared to 73% of those in other roles. The existence of these formal processes varies with company size: 72% of large and 70% of medium-sized companies have them, while this figure drops to 51% for small companies.

    These findings suggest a widespread recognition of the need to keep pace with the dynamic regulatory landscape surrounding AI. However, as the question is binary, these responses indicate the presence of some sort of process but do not provide insight into its rigor, effectiveness, or the resources dedicated to it. The variations, particularly the lower affirmation from technical leaders and small companies, might reflect differences in how formally these tracking mechanisms are embedded, or the perceived urgency and capacity to maintain them.

    Current Practices and Response Capabilities

    Slightly more than half (54%) of organizations report having an AI Incident Response playbook, a figure that remains consistent for technical leaders (53%) and those in other roles (54%). Adoption of such playbooks varies with company size: 62% of medium-sized companies affirm they have one, compared to 51% of large companies and only 36% of small companies. These findings warrant careful interpretation, as the binary yes/no question reveals nothing about the depth or comprehensiveness of these playbooks.

    Anecdotal evidence suggests that while many organizations may claim to have AI incident response protocols, very few have developed the thorough, multi-faceted frameworks necessary to address the unique challenges of AI incidents – which can range from biased outputs and privacy violations to model manipulation and data leakage. Organizations seeking to develop robust AI incident response capabilities should note that OWASP has recently published valuable resources in this area, including guidance on AI-specific threats and mitigations that go well beyond traditional cybersecurity playbooks.

    The adoption of specific AI governance practices varies considerably among organizations. For all respondents, monitoring AI systems in production (48%) and establishing a risk evaluation process for AI projects (45%) are the most frequently implemented measures. Technical leaders report even higher engagement in these areas, with 55% overseeing AI system monitoring and 47% implementing risk evaluations. Conversely, broader organizational practices such as regular AI literacy training (22% for all respondents, but notably only 8% for technical leaders) and tools for AI incident reporting (16% for all, 13% for technical leaders) see much lower uptake. Generally, larger organizations report more widespread implementation of these governance measures, particularly in monitoring and risk assessment, while smaller firms often show lower adoption rates, though they indicate comparable or slightly higher use of tools for AI incident reporting and inventorying systems.

    These figures suggest that organizations are currently prioritizing operational and risk-mitigation aspects of AI governance directly related to system performance and project viability. The comparatively lower implementation of tools for incident reporting, system inventorying, and especially AI literacy training with technical leaders reporting notably low engagement in the latter for their teams may indicate that these foundational governance elements are still developing, are perceived as less immediately critical than direct technical oversight, or are considered the responsibility of other organizational units.

    Key Challenges in AI Governance

    The primary obstacle to advancing AI governance, cited by 45% of all respondents, is the prioritization of speed to market over governance concerns. This sentiment is even more pronounced among technical leaders, 56% of whom identify it as a key limiting factor. For all participants, a lack of budget or allocated resources (34%) and insufficient internal knowledge (33%) also rank as significant impediments. Technical leaders additionally highlighted a lack of executive sponsorship or prioritization (33%) more frequently than the overall respondent pool.

    Across different company sizes, the pressure to prioritize speed to market remains a dominant theme, particularly for small (54%) and large (49%) organizations. Small and medium-sized firms more frequently cited a lack of budget (40% and 39% respectively), while large companies reported a greater challenge with a lack of internal knowledge (42%) compared to their smaller counterparts.

    These findings suggest a common tension between the rapid deployment of AI technologies and the establishment of robust governance frameworks. The consistent emphasis on speed, coupled with resource constraints and, for some, a lack of top-level backing, underscores the practical difficulties organizations face in embedding comprehensive AI governance amidst competitive pressures and evolving internal capabilities.

    The Path Forward: Integrating Governance with AI Ambition

    The survey reveals an industry caught between ambition and operational reality. While organizations race to deploy generative AI systems, the infrastructure needed to manage these deployments safely lags behind. The data exposes a fundamental tension: teams recognize governance matters, yet when faced with market pressure, nearly half admit to prioritizing speed over safety. This trade-off becomes particularly acute for technical leaders, who report the highest pressure to deliver quickly while simultaneously planning the most aggressive deployment schedules. The divide between large and small organizations presents systemic risk.

    While enterprises build comprehensive governance frameworks, small companies operate with minimal oversight; only 29% monitor their production AI systems and just 36% have dedicated governance roles. This creates vulnerabilities across the ecosystem, as AI failures at any scale can trigger regulatory scrutiny and erode customer trust. The widespread lack of AI-specific incident response capabilities suggests many organizations remain unprepared for failure modes that traditional IT playbooks don’t address.

    For teams building AI applications, the path forward requires treating governance as integral to the development process rather than a compliance afterthought. The most successful organizations—those managing multiple production deployments have embedded monitoring, risk assessment, and incident response directly into their engineering workflows.

    They recognize that automated governance checks and frameworks appropriate to their scale can actually accelerate deployment by reducing the risk of production failures. Organizations that thrive will be those that reject the false choice between innovation and responsibility. By instrumenting models for observability before deployment, establishing clear ownership structures, and developing AI-specific monitoring capabilities, teams can move faster while managing risk more effectively. As AI systems become more powerful and customer-facing, demonstrating responsible deployment will become as important as model performance in winning customer trust and avoiding regulatory sanctions.

    Acknowledgements

    Thanks to Pacific AI for sponsoring this survey. This survey was conducted by Gradient Flow; see our Statement of Editorial Independence.