Automating AI Governance for Healthcare Applications of Generative AI

Organizations that develop or deploy Generative AI solutions in healthcare are subject to more than 70 national and state laws, regulatory rules, and industry standards. Once an organization establishes an AI Governance framework, its policies will include dozens of controls that must be implemented for each AI project. This session describes a subset of these controls that can be automated with current tools:

  • Automated execution of medical LLM benchmarks during system testing and production monitoring, including coverage of medical ethics, medical errors, fairness and equity, safety and reliability – using Pacific AI
  • Automated generation and execution of LLM test suites for custom solutions, including testing for robustness, bias, fairness, representation, and accuracy – using LangTest (see the sketch after this list)
  • Automated generation of model cards that comply with transparency laws and include explained benchmark results – based on the CHAI draft model card standard.
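
As a concrete illustration of the second bullet, here is a minimal sketch of generating and running a test suite with LangTest's Harness API. The model name, hub, and pass-rate thresholds are illustrative assumptions, not recommendations from this session; each organization would set its own thresholds per its governance policy.

```python
from langtest import Harness

# Build a test harness for a hypothetical NER model; the model name and
# hub below are illustrative placeholders only.
harness = Harness(
    task="ner",
    model={"model": "dslim/bert-base-NER", "hub": "huggingface"},
)

# Configure which test categories to generate; the pass-rate thresholds
# are assumptions, to be set by each organization's governance policy.
harness.configure({
    "tests": {
        "defaults": {"min_pass_rate": 0.75},
        "robustness": {
            "uppercase": {"min_pass_rate": 0.80},
            "add_typo": {"min_pass_rate": 0.80},
        },
    }
})

harness.generate()        # synthesize perturbed test cases
harness.run()             # evaluate the model on the generated suite
print(harness.report())   # pass/fail summary per test type
```

The generate/run/report flow is what makes this automatable: the same suite can be re-run in CI or on a schedule, and the report compared against policy thresholds.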

This approach supports broader generative AI governance efforts by promoting accountability, reproducibility, and compliance in the deployment of AI models.

FAQ

What does “automated AI governance” mean for healthcare generative AI systems?

It means deploying tools and policies that automatically track AI usage, evaluate outputs, detect risks like privacy breaches or hallucinations, and enforce human-in-the-loop review to maintain trust and compliance.


How can hospitals monitor generative AI performance in real time?

Real-time dashboards can track metrics like accuracy, sensitivity, and hallucination rate. Alerts are triggered when performance drops or outputs deviate from expectations, as in the sketch below.
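
As a rough illustration, here is a minimal threshold-alerting sketch. The metric names, threshold values, and breach rules are illustrative assumptions; a real deployment would wire this into the hospital's monitoring stack and alerting channels.

```python
# Minimal sketch of threshold-based alerting for a monitoring dashboard.
# All names and values below are illustrative assumptions.

THRESHOLDS = {
    "accuracy": 0.90,            # alert if accuracy falls below 90%
    "sensitivity": 0.85,         # alert if sensitivity falls below 85%
    "hallucination_rate": 0.02,  # alert if hallucinations exceed 2%
}

def check_metrics(metrics: dict[str, float]) -> list[str]:
    """Return alert messages for any out-of-range metrics."""
    alerts = []
    for name, value in metrics.items():
        limit = THRESHOLDS.get(name)
        if limit is None:
            continue
        # Hallucination rate alerts when it exceeds its limit;
        # the other metrics alert when they fall below theirs.
        breached = value > limit if name == "hallucination_rate" else value < limit
        if breached:
            alerts.append(f"ALERT: {name}={value:.3f} breached threshold {limit}")
    return alerts

if __name__ == "__main__":
    sample = {"accuracy": 0.87, "sensitivity": 0.91, "hallucination_rate": 0.035}
    for alert in check_metrics(sample):
        print(alert)
```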


What role does human oversight play in automated governance?

While automation streamlines review via triage, human experts still validate high-risk cases, refine policies, address false positives, and ensure accountability. The sketch below illustrates the routing logic.
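
A minimal sketch of such triage routing, assuming a hypothetical risk_score field and a flags list attached to each model output; the cutoff value is an arbitrary placeholder:

```python
# Minimal triage-routing sketch. The risk_score field, the 0.2 cutoff,
# and the "flags" list are hypothetical assumptions for illustration.

def triage(output: dict) -> str:
    """Route one model output to auto-approval or human review."""
    risk = output.get("risk_score", 1.0)   # missing score = treat as high risk
    if risk < 0.2 and not output.get("flags"):
        return "auto-approve"              # low risk, no policy flags raised
    return "human-review"                  # experts validate everything else

# Example: a flagged output escalates even when its risk score is low.
print(triage({"risk_score": 0.10, "flags": ["possible_phi_leak"]}))  # human-review
print(triage({"risk_score": 0.05, "flags": []}))                     # auto-approve
```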


What are typical challenges in automating governance for clinical generative AI?

Common challenges include integrating with legacy EHR systems, defining thresholds for alerts, handling diverse data formats, and securing computing infrastructure.


How can organizations start implementing automated governance for generative AI?

Begin with a small pilot in a defined clinical workflow, use modular monitoring tools, train staff on governance protocols, and iterate using feedback loops and performance data.


About the speakers
David Talby
CEO, Pacific AI

David Talby is the CEO of Pacific AI and John Snow Labs, helping healthcare & life science companies put AI to good use. He has extensive experience building and running web-scale software platforms and teams – in startups, at Microsoft’s Bing in the US and Europe, and scaling Amazon’s financial systems in Seattle and the UK.

David holds a PhD in computer science and master’s degrees in both computer science and business administration.

Ben Webster
Vice President of AI Solutions at NLP Logix

Ben joined NLP Logix 11 years ago as the company’s first employee. Since then, he has led machine learning projects across various industries. Notably, he has played a key role in automating audio and language processes for the largest healthcare patient survey provider in the U.S. and delivering AI-driven solutions for multiple organizations within the human resources sector.

Ben’s groundbreaking work in language modeling earned him a patent, further cementing his position as a leader in the AI field. Ben is an active community advocate, particularly in educational initiatives. He volunteers for programs like Data Science for Social Good and NLP Logix’s Analytics Boot Camp, and collaborates with Edward Waters University to help shape the growth of its technology department. In 2023, he was named an Ultimate Tech Leader by the Jacksonville Business Journal.

AI Governance Simplified: Unifying 70+ Laws, Regulations, and Standards into a Policy Suite

Organizations that are either AI developers or AI deployers face growing legal liability risk from multiple sources: national laws like Title VII of the Civil Rights Act and Titles...