{"id":1005,"date":"2025-06-09T16:16:40","date_gmt":"2025-06-09T16:16:40","guid":{"rendered":"https:\/\/pacific.ai\/staging\/3667\/?p=1005"},"modified":"2026-02-19T09:09:59","modified_gmt":"2026-02-19T09:09:59","slug":"healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2","status":"publish","type":"post","link":"https:\/\/pacific.ai\/staging\/3667\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/","title":{"rendered":"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div><p><span style=\"font-weight: 400;\">In the first installment of this series, we explored a set of prominent frameworks that have shaped how we evaluate and govern AI in healthcare. Since then, a number of new resources\u2014spanning global health organizations, academic collaborations, and policy groups\u2014have contributed critical perspectives to the conversation. In this second part, we build on our initial review by highlighting a new set of frameworks and research that help advance <\/span>healthcare AI safety<span style=\"font-weight: 400;\"> and responsible AI governance across the healthcare lifecycle.<\/span><\/p>\n<h2>1. WHO Framework for AI-Based Medical Devices<\/h2>\n<p>The World Health Organization\u2019s <a href=\"https:\/\/www.who.int\/publications\/i\/item\/9789240073702\" target=\"_blank\" rel=\"noopener\">\u201cGenerating Evidence for Artificial Intelligence Based Medical Devices\u201d<\/a> provides a detailed framework for evaluating AI-based tools across training, validation, and clinical use. It emphasizes contextual relevance, clinical performance, and regulatory preparedness. This resource is especially valuable for developers working at the intersection of AI and medical device regulation.<\/p>\n<h2>2. 
CRAFT-MD: Conversational Reasoning Assessment Framework for Testing in Medicine<\/h2>\n<p>CRAFT-MD focuses on evaluating AI systems\u2014particularly large language models\u2014in medical reasoning tasks. It introduces a novel assessment protocol that centers on clinical reasoning quality, offering a more structured and clinically meaningful alternative to traditional accuracy metrics. This is a critical step forward for <a href=\"https:\/\/pacific.ai\/staging\/3667\/what-is-governance-for-generative-ai\/\">generative AI governance<\/a> in healthcare.<\/p>\n<h2>3. HAIRA: A Maturity Model for AI Governance<\/h2>\n<p>The HAIRA framework proposes a governance maturity model tailored to healthcare AI systems. Developed through a systematic review, HAIRA offers a staged approach to governance, allowing organizations to benchmark progress across policy, transparency, stakeholder engagement, and <a title=\"ai risk management\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-risk-management-audit\/\">risk management<\/a>.<\/p>\n<h2>4. TPLC: Total Product Lifecycle<\/h2>\n<p><span style=\"font-weight: 400;\">Adopted from FDA guidance, the Total Product Lifecycle framework recognizes that AI systems in healthcare are not static. TPLC provides a structure for managing performance, safety, and oversight across pre-market, deployment, and post-market phases. It aligns well with the needs of continuously learning AI systems and supports adaptive regulatory mechanisms, making it an important model for strengthening <\/span>healthcare AI safety<span style=\"font-weight: 400;\">.<\/span><\/p>\n<h2>5. AIGG: Aotearoa New Zealand\u2019s Approach to AI Governance<\/h2>\n<p>AIGG serves as a national case study in implementing <a title=\"What is AI governance for healthcare\" href=\"https:\/\/pacific.ai\/staging\/3667\/healthcare-ai-governance-features-and-benefits-that-drive-safe-ai-use\/\">AI governance in healthcare<\/a> systems. 
It focuses on principles like M\u0101ori data sovereignty, equity, and transparency, demonstrating how cultural context can be embedded into governance frameworks in meaningful and actionable ways.<\/p>\n<h2>6. SPIRIT-AI: Clinical Trial Design for AI Interventions<\/h2>\n<p><span style=\"font-weight: 400;\">The SPIRIT-AI extension provides detailed guidelines for clinical trial protocols involving AI-based interventions. As clinical validation becomes a requirement for many healthcare AI tools, SPIRIT-AI offers essential direction for designing trials that meet regulatory and scientific standards, strengthening overall <\/span>healthcare AI safety<span style=\"font-weight: 400;\"> in real-world deployment.<\/span><\/p>\n<h2>7. OPTICA: Evaluation in Health Organizations<\/h2>\n<p>OPTICA offers a real-world evaluation framework focused on how AI solutions are integrated, adopted, and assessed within healthcare organizations. It considers not only technical performance but also organizational readiness, workflow impact, and clinician trust.<\/p>\n<h2>8. SALIENT: End-to-End Clinical AI Implementation<\/h2>\n<p>SALIENT stands for &#8220;Systematic Approaches to Learning, Implementation, Evaluation and Translation.&#8221; This framework fills a critical gap by offering an end-to-end <a title=\"ai governance implementation\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-governance-implementation\/\">implementation<\/a> guide for clinical AI, from early development through to scaling and post-deployment monitoring.<\/p>\n<h2>9. JAMA Principles for Addressing Algorithmic Bias<\/h2>\n<p>In a 2022 article, JAMA authors proposed a set of guiding principles to address how algorithmic bias may reinforce racial and ethnic disparities in care. 
These include incorporating equity <a title=\"AI audit\" href=\"https:\/\/pacific.ai\/staging\/3667\/what-is-a-responsible-ai-audit\/\">audits<\/a>, data diversity assessments, and community engagement into the AI development lifecycle.<\/p>\n<h2>10. Health Equity and Racial Justice Integration (Project MUSE)<\/h2>\n<p><span style=\"font-weight: 400;\">This framework, published in Project MUSE, proposes embedding health equity and racial justice principles throughout the AI lifecycle\u2014from data selection to algorithmic impact assessment. It offers a compelling argument for redefining success metrics beyond technical accuracy, emphasizing the role of equity as a core pillar of <\/span>healthcare AI safety<span style=\"font-weight: 400;\">.<\/span><\/p>\n<h2>11. NPJ Digital Medicine: Prospective AI Evaluation<\/h2>\n<p>Coombs et al. (2022) demonstrate a machine learning framework that supports real-time clinical decision-making in oncology. Their approach shows how AI evaluation can be structured prospectively, enabling tighter alignment between model predictions and patient outcomes in practice.<\/p>\n<h2>12. Guidelines for Prediction Models (de Hond et al.)<\/h2>\n<p><span style=\"font-weight: 400;\">A comprehensive scoping review by de Hond et al. outlines quality criteria for AI-based prediction models in healthcare. These guidelines cover everything from dataset construction to performance reporting and validation standards, helping raise the bar for transparency and reproducibility &#8211; both essential to strengthening <\/span>healthcare AI safety<span style=\"font-weight: 400;\">.<\/span><\/p>\n<h2>13. EDI in AI Lifecycle (Nyariro et al.)<\/h2>\n<p>This scoping review protocol from BMJ Open provides a blueprint for how to incorporate Equity, Diversity, and Inclusion (EDI) principles throughout the AI lifecycle. From stakeholder representation to algorithmic fairness, it offers a structured path for making healthcare AI more inclusive.<\/p>\n<h2>14. 
Do No Harm Roadmap (Wiens et al.)<\/h2>\n<p><span style=\"font-weight: 400;\">Finally, Wiens et al.\u2019s widely cited paper from <\/span><i><span style=\"font-weight: 400;\">Nature Medicine<\/span><\/i><span style=\"font-weight: 400;\"> introduces a \u201cDo No Harm\u201d roadmap for responsible ML in healthcare. The roadmap provides pragmatic guidance for bias mitigation, model robustness, and post-deployment monitoring\u2014anchoring responsible AI development in clinical realities and reinforcing the importance of <\/span>healthcare AI safety<span style=\"font-weight: 400;\">.<\/span><\/p>\n<h2>Where Do We Go From Here?<\/h2>\n<p>As the governance landscape continues to evolve, these new frameworks signal a shift toward more holistic, lifecycle-aware, and equity-driven approaches. Whether you\u2019re a developer, clinician, policymaker, or researcher, there\u2019s growing consensus that healthcare AI must be safe, effective, and just\u2014not just in principle, but in practice. The June 2025 edition of the Pacific AI Policy Suite covers the unified recommendations from all of the above frameworks, providing you with an actionable set of <a title=\"Governor: Your AI Control Tower\" href=\"https:\/\/pacific.ai\/staging\/3667\/governor\/\">controls<\/a> and best practices you can apply today. 
We will continue to <a title=\"2025 AI Governance Survey\" href=\"https:\/\/pacific.ai\/staging\/3667\/2025-ai-governance-survey-reveals-critical-gaps-between-ai-ambition-and-operational-readiness\/\">survey<\/a> the developments and publications in this space, so that the policies and tools you adopt continue to reflect the cumulative published knowledge of the AI governance community.<\/p>\n<p><a title=\"healthcare ai governance\" href=\"https:\/\/pacific.ai\/staging\/3667\/healthcare-ai-governance-features-and-benefits-that-drive-safe-ai-use\/\">Healthcare AI Governance<\/a>: A Review of Evaluation Frameworks &#8211; <a href=\"https:\/\/pacific.ai\/staging\/3667\/healthcare-ai-governance-a-review-of-evaluation-frameworks\/\"><strong>Part 1<\/strong><\/a><\/p>\n<h2>FAQ<\/h2>\n<p><strong>What is HAIRA and how does it help assess AI governance maturity in healthcare?<\/strong><\/p>\n<p>HAIRA is a five-level maturity model that guides organizations through progressive governance improvements by evaluating policy, risk protocols, monitoring, and stakeholder engagement.<\/p>\n<p><strong>How does the WHO framework contribute to healthcare AI governance?<\/strong><\/p>\n<p>WHO&#8217;s guidance (2021) sets out core ethical principles\u2014transparency, human rights, and accountability\u2014and recommends how to embed governance and ethics into the design, deployment, and oversight of AI health systems.<\/p>\n<p><strong>What is CRAFT\u2011MD and why is it important for conversational AI in medicine?<\/strong><\/p>\n<p>CRAFT\u2011MD (Conversational Reasoning Assessment Framework for Testing in Medicine) evaluates LLMs\u2019 clinical reasoning in dialogue settings, highlighting that AI often fails at dynamic back-and-forth conversation despite strong exam-based performance.<\/p>\n<p><strong>Why is Total Product Lifecycle (TPLC) used for AI-based medical devices?<\/strong><\/p>\n<p>TPLC applies a continuous 
oversight model\u2014covering pre-market development, validation, deployment, and post-market monitoring\u2014ensuring ongoing safety, performance tracking, and regulatory compliance.<\/p>\n<p><strong>How do frameworks like SPIRIT\u2011AI or CONSORT\u2011AI support clinical AI implementation?<\/strong><\/p>\n<p>SPIRIT\u2011AI and CONSORT\u2011AI provide structured guidelines for AI clinical trial reporting (design, evaluation, transparency), while CLAIM and STARD\u2011AI enhance diagnostic and imaging AI standards.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the first installment of this series, we explored a set of prominent frameworks that have shaped how we evaluate and govern AI in healthcare. Since then, a number of new resources\u2014spanning global health organizations, academic collaborations, and policy groups\u2014have contributed critical perspectives to the conversation. 
In this second part, we build on our initial [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":474,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"nf_dc_page":"","content-type":"","inline_featured_image":false,"footnotes":""},"categories":[118],"tags":[],"class_list":["post-1005","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-articles"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2 - Pacific AI<\/title>\n<meta name=\"description\" content=\"Review key evaluation frameworks in healthcare AI governance. Part 2 explores practical tools and methods for assessing risk, safety, and regulatory alignment.\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2 - Pacific AI\" \/>\n<meta property=\"og:description\" content=\"Review key evaluation frameworks in healthcare AI governance. 
Part 2 explores practical tools and methods for assessing risk, safety, and regulatory alignment.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/\" \/>\n<meta property=\"og:site_name\" content=\"Pacific AI\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-06-09T16:16:40+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-19T09:09:59+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/03\/Healthcare_AI_Governance.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"550\" \/>\n\t<meta property=\"og:image:height\" content=\"440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Maria Baranchikova\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Maria Baranchikova\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/\"},\"author\":{\"name\":\"Maria Baranchikova\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/7999dc7631dff290633e07875c7046b3\"},\"headline\":\"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2\",\"datePublished\":\"2025-06-09T16:16:40+00:00\",\"dateModified\":\"2026-02-19T09:09:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/\"},\"wordCount\":1138,\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/Healthcare_AI_Governance.webp\",\"articleSection\":[\"Articles\"],\"inLanguage\":\"en\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/\",\"name\":\"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2 - Pacific 
AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/Healthcare_AI_Governance.webp\",\"datePublished\":\"2025-06-09T16:16:40+00:00\",\"dateModified\":\"2026-02-19T09:09:59+00:00\",\"description\":\"Review key evaluation frameworks in healthcare AI governance. Part 2 explores practical tools and methods for assessing risk, safety, and regulatory alignment.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/#primaryimage\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/Healthcare_AI_Governance.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/Healthcare_AI_Governance.webp\",\"width\":550,\"height\":440,\"caption\":\"Healthcare AI safety frameworks illustrated by a secure lock and key, surrounded by labels such as FDA, WHO, CHAI, CLAIM, CRAFT-MD, and STARD-AI, representing regulatory and evaluation standards for trustworthy clinical 
AI.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/pacific.ai\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"name\":\"Pacific AI\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\",\"name\":\"Pacific AI\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"width\":182,\"height\":41,\"caption\":\"Pacific 
AI\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/Pacific-AI\\\/61566807347567\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/pacific-ai\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/7999dc7631dff290633e07875c7046b3\",\"name\":\"Maria Baranchikova\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/Mariya-96x96.webp\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/Mariya-96x96.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/Mariya-96x96.webp\",\"caption\":\"Maria Baranchikova\"},\"description\":\"Maria is a Lead Legal Counsel at John Snow Labs and Pacific AI. She is an experienced IT Attorney specializing in Legal AI and AI Governance. Maria has advanced degrees in International Private Law and International Property Law, as well as certifications in Digital Transformation and LegalTech.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/baranchikova\\\/\"],\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/author\\\/maria\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2 - Pacific AI","description":"Review key evaluation frameworks in healthcare AI governance. 
Part 2 explores practical tools and methods for assessing risk, safety, and regulatory alignment.","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2 - Pacific AI","og_description":"Review key evaluation frameworks in healthcare AI governance. Part 2 explores practical tools and methods for assessing risk, safety, and regulatory alignment.","og_url":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/","og_site_name":"Pacific AI","article_publisher":"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","article_published_time":"2025-06-09T16:16:40+00:00","article_modified_time":"2026-02-19T09:09:59+00:00","og_image":[{"width":550,"height":440,"url":"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/03\/Healthcare_AI_Governance.webp","type":"image\/webp"}],"author":"Maria Baranchikova","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Maria Baranchikova","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/#article","isPartOf":{"@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/"},"author":{"name":"Maria Baranchikova","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/7999dc7631dff290633e07875c7046b3"},"headline":"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2","datePublished":"2025-06-09T16:16:40+00:00","dateModified":"2026-02-19T09:09:59+00:00","mainEntityOfPage":{"@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/"},"wordCount":1138,"publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"image":{"@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/Healthcare_AI_Governance.webp","articleSection":["Articles"],"inLanguage":"en"},{"@type":"WebPage","@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/","url":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/","name":"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2 - Pacific AI","isPartOf":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#website"},"primaryImageOfPage":{"@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/#primaryimage"},"image":{"@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/Healthcare_AI_Governance.webp","datePublished":"2025-06-09T16:16:40+00:00","dateModified":"2026-02-19T09:09:59+00:00","description":"Review key evaluation frameworks in 
healthcare AI governance. Part 2 explores practical tools and methods for assessing risk, safety, and regulatory alignment.","breadcrumb":{"@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/"]}]},{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/#primaryimage","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/Healthcare_AI_Governance.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/Healthcare_AI_Governance.webp","width":550,"height":440,"caption":"Healthcare AI safety frameworks illustrated by a secure lock and key, surrounded by labels such as FDA, WHO, CHAI, CLAIM, CRAFT-MD, and STARD-AI, representing regulatory and evaluation standards for trustworthy clinical AI."},{"@type":"BreadcrumbList","@id":"https:\/\/pacific.ai\/healthcare-ai-governance-a-review-of-evaluation-frameworks-part-2\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/pacific.ai\/"},{"@type":"ListItem","position":2,"name":"Healthcare AI Safety: A Review of Evaluation Frameworks \u2013 Part 2"}]},{"@type":"WebSite","@id":"https:\/\/pacific.ai\/staging\/3667\/#website","url":"https:\/\/pacific.ai\/staging\/3667\/","name":"Pacific 
AI","description":"","publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/pacific.ai\/staging\/3667\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Organization","@id":"https:\/\/pacific.ai\/staging\/3667\/#organization","name":"Pacific AI","url":"https:\/\/pacific.ai\/staging\/3667\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","width":182,"height":41,"caption":"Pacific AI"},"image":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","https:\/\/www.linkedin.com\/company\/pacific-ai\/"]},{"@type":"Person","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/7999dc7631dff290633e07875c7046b3","name":"Maria Baranchikova","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/Mariya-96x96.webp","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/Mariya-96x96.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/Mariya-96x96.webp","caption":"Maria Baranchikova"},"description":"Maria is a Lead Legal Counsel at John Snow Labs and Pacific AI. She is an experienced IT Attorney specializing in Legal AI and AI Governance. 
Maria has advanced degrees in International Private Law and International Property Law, as well as certifications in Digital Transformation and LegalTech.","sameAs":["https:\/\/www.linkedin.com\/in\/baranchikova\/"],"url":"https:\/\/pacific.ai\/staging\/3667\/author\/maria\/"}]}},"_links":{"self":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1005","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/comments?post=1005"}],"version-history":[{"count":16,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1005\/revisions"}],"predecessor-version":[{"id":2036,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1005\/revisions\/2036"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media\/474"}],"wp:attachment":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media?parent=1005"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/categories?post=1005"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/tags?post=1005"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}