{"id":1304,"date":"2025-07-25T09:44:28","date_gmt":"2025-07-25T09:44:28","guid":{"rendered":"https:\/\/pacific.ai\/staging\/3667\/?p=1304"},"modified":"2026-02-19T08:41:20","modified_gmt":"2026-02-19T08:41:20","slug":"healthcare-specific-red-teaming","status":"publish","type":"post","link":"https:\/\/pacific.ai\/staging\/3667\/healthcare-specific-red-teaming\/","title":{"rendered":"Healthcare-Specific Red Teaming"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Healthcare-Specific Red Teaming of Medical Generative AI Apps\" width=\"580\" height=\"326\" src=\"https:\/\/www.youtube.com\/embed\/NKxk7qOdAfE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>Large language models (LLMs) hold immense promise for advancing clinical workflows, yet their deployment in healthcare raises critical safety, ethical, and bias-related concerns that exceed the scope of standard red\u2011teaming practices. In this talk, we first review the fundamentals of general\u2011purpose LLM red teaming\u2014targeting misinformation, offensive speech, security exploits, private\u2011data leakage, discrimination, prompt injection, and jailbreaking vulnerabilities. Building on these foundations, we then describe two healthcare\u2011specific extensions developed by Pacific AI:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Medical Ethics Red Teaming<\/strong> <br>We introduce novel test cases derived from core AMA medical\u2011ethics principles to probe LLM behaviors around physician misconduct, patient autonomy and consent, conflicts of interest, and stigmatizing language. 
Examples include attempts to coerce consent for unnecessary procedures, fabricate arguments for upcoding, and manipulate clinical documentation for financial gain.<\/li>\n\n\n\n<li><strong>Cognitive\u2011Bias Red Teaming<\/strong> <br>We demonstrate targeted benchmarks designed to elicit and measure clinically dangerous biases such as anchoring, confirmation, framing, primacy\/recency effects, and ideological alignment that can distort diagnostic reasoning and treatment recommendations. Through scenario\u2011based assessments (e.g., risk\u2011communication framing, order\u2011set anchoring), we quantify model susceptibility to contextual and statistical framing errors in healthcare contexts.<\/li>\n<\/ol>\n\n\n\n<p>This webinar is designed for healthcare technology leaders, clinical AI researchers, and compliance officers seeking practical guidance on evaluating and governing AI tools; attendees will learn actionable red\u2011teaming strategies and receive ready\u2011to\u2011implement test cases to bolster model safety, ethics compliance, and bias mitigation in clinical settings.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">FAQ<\/h2>\n\n\n\n<p><strong>What makes red teaming in healthcare AI different from other sectors? <\/strong><\/p>\n\n\n\n<p>Healthcare AI demands protection against risks like data privacy breaches, harmful clinical advice, misinterpretation of medical content, and hallucinations. Unlike generic AI systems, testing must account for high stakes, patient safety, and domain-specific failures.<\/p>\n\n\n\n<p><br><strong>Who should be involved in healthcare AI red teaming? <\/strong><\/p>\n\n\n\n<p>Effective red teams combine clinicians and AI engineers. Clinician expertise is crucial to spot unsafe or misleading outputs in clinical contexts, which may be missed by purely technical review.<\/p>\n\n\n\n<p><br><strong>What vulnerabilities are commonly uncovered in healthcare LLMs during red teaming? 
<\/strong><\/p>\n\n\n\n<p>Dynamic healthcare red-teaming has exposed high failure rates: despite models achieving over 80% MedQA accuracy, up to 94% fail robustness tests, 86% leak private information, 81% display bias, and 66% hallucinate in adversarial scenarios.<\/p>\n\n\n\n<p><br><strong>What frameworks support structured red teaming for clinical AI? <\/strong><\/p>\n\n\n\n<p>The proposed PIEE framework offers a structured, multi-phase process for clinical AI red teaming\u2014designed to be accessible to both clinicians and informaticians, enabling collaboration without requiring deep AI expertise.<\/p>\n\n\n\n<p><br><strong>Why is dynamic, automated red teaming critical for healthcare AI? <\/strong><\/p>\n\n\n\n<p>Static benchmarks quickly become outdated and may miss real-world vulnerabilities. Dynamic, automated red-teaming\u2014using evolving adversarial agents\u2014continuously stress-tests systems for risks including privacy leaks, unfair bias, and hallucinations, capturing emergent threats in real time.<\/p>\n\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What makes red teaming in healthcare AI different from other sectors?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Healthcare AI demands protection against risks like data privacy breaches, harmful clinical advice, misinterpretation of medical content, and hallucinations. Unlike generic AI systems, testing must account for high stakes, patient safety, and domain-specific failures.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Who should be involved in healthcare AI red teaming?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Effective red teams combine clinicians and AI engineers. 
Clinician expertise is crucial to spot unsafe or misleading outputs in clinical contexts, which may be missed by purely technical review.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What vulnerabilities are commonly uncovered in healthcare LLMs during red teaming?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Dynamic healthcare red-teaming has exposed high failure rates: despite models achieving over 80% MedQA accuracy, up to 94% fail robustness tests, 86% leak private information, 81% display bias, and 66% hallucinate in adversarial scenarios.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What frameworks support structured red teaming for clinical AI?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"The proposed PIEE framework offers a structured, multi-phase process for clinical AI red teaming\u2014designed to be accessible to both clinicians and informaticians, enabling collaboration without requiring deep AI expertise.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Why is dynamic, automated red teaming critical for healthcare AI?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Static benchmarks quickly become outdated and may miss real-world vulnerabilities. Dynamic, automated red-teaming\u2014using evolving adversarial agents\u2014continuously stress-tests systems for risks including privacy leaks, unfair bias, and hallucinations, capturing emergent threats in real time.\"\n      }\n    }\n  ]\n}\n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) hold immense promise for advancing clinical workflows, yet their deployment in healthcare raises critical safety, ethical, and bias-related concerns that exceed the scope of standard red\u2011teaming practices. 
In this talk, we first review the fundamentals of general\u2011purpose LLM red teaming\u2014targeting misinformation, offensive speech, security exploits, private\u2011data leakage, discrimination, prompt injection, and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1305,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"single-webinars.php","format":"standard","meta":{"_acf_changed":false,"nf_dc_page":"","content-type":"","inline_featured_image":false,"footnotes":""},"categories":[10,116],"tags":[],"class_list":["post-1304","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-video","category-webinars"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Healthcare-Specific Red Teaming - Pacific AI<\/title>\n<meta name=\"description\" content=\"Healthcare LLM red teaming webinar on ethics-based tests and bias detection methods to ensure safe and compliant AI in clinical workflows\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Healthcare-Specific Red Teaming - Pacific AI\" \/>\n<meta property=\"og:description\" content=\"Healthcare LLM red teaming webinar on ethics-based tests and bias detection methods to ensure safe and compliant AI in clinical workflows\" \/>\n<meta property=\"og:url\" content=\"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/\" \/>\n<meta property=\"og:site_name\" content=\"Pacific AI\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-25T09:44:28+00:00\" \/>\n<meta property=\"article:modified_time\" 
content=\"2026-02-19T08:41:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/07\/Healthcare-SpecificRedTeaming.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"550\" \/>\n\t<meta property=\"og:image:height\" content=\"440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"David Talby\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"David Talby\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/\"},\"author\":{\"name\":\"David Talby\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\"},\"headline\":\"Healthcare-Specific Red 
Teaming\",\"datePublished\":\"2025-07-25T09:44:28+00:00\",\"dateModified\":\"2026-02-19T08:41:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/\"},\"wordCount\":457,\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/Healthcare-SpecificRedTeaming.webp\",\"articleSection\":[\"Video\",\"Webinars\"],\"inLanguage\":\"en\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/\",\"name\":\"Healthcare-Specific Red Teaming - Pacific AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/Healthcare-SpecificRedTeaming.webp\",\"datePublished\":\"2025-07-25T09:44:28+00:00\",\"dateModified\":\"2026-02-19T08:41:20+00:00\",\"description\":\"Healthcare LLM red teaming webinar on ethics-based tests and bias detection methods to ensure safe and compliant AI in clinical 
workflows\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/#primaryimage\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/Healthcare-SpecificRedTeaming.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/Healthcare-SpecificRedTeaming.webp\",\"width\":550,\"height\":440,\"caption\":\"Healthcare-specific red teaming for medical generative AI applications, featuring Pacific AI CEO David Talby and highlighting AI safety testing, clinical risk evaluation, and responsible healthcare AI governance.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/healthcare-specific-red-teaming\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/pacific.ai\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Healthcare-Specific Red Teaming\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"name\":\"Pacific AI\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\",\"name\":\"Pacific 
AI\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"width\":182,\"height\":41,\"caption\":\"Pacific AI\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/Pacific-AI\\\/61566807347567\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/pacific-ai\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\",\"name\":\"David Talby\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"caption\":\"David Talby\"},\"description\":\"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. 
David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/davidtalby\\\/\"],\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/author\\\/david\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Healthcare-Specific Red Teaming - Pacific AI","description":"Healthcare LLM red teaming webinar on ethics-based tests and bias detection methods to ensure safe and compliant AI in clinical workflows","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"Healthcare-Specific Red Teaming - Pacific AI","og_description":"Healthcare LLM red teaming webinar on ethics-based tests and bias detection methods to ensure safe and compliant AI in clinical workflows","og_url":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/","og_site_name":"Pacific AI","article_publisher":"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","article_published_time":"2025-07-25T09:44:28+00:00","article_modified_time":"2026-02-19T08:41:20+00:00","og_image":[{"width":550,"height":440,"url":"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/07\/Healthcare-SpecificRedTeaming.webp","type":"image\/webp"}],"author":"David Talby","twitter_card":"summary_large_image","twitter_misc":{"Written by":"David Talby","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/#article","isPartOf":{"@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/"},"author":{"name":"David Talby","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186"},"headline":"Healthcare-Specific Red Teaming","datePublished":"2025-07-25T09:44:28+00:00","dateModified":"2026-02-19T08:41:20+00:00","mainEntityOfPage":{"@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/"},"wordCount":457,"publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"image":{"@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/07\/Healthcare-SpecificRedTeaming.webp","articleSection":["Video","Webinars"],"inLanguage":"en"},{"@type":"WebPage","@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/","url":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/","name":"Healthcare-Specific Red Teaming - Pacific AI","isPartOf":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#website"},"primaryImageOfPage":{"@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/#primaryimage"},"image":{"@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/07\/Healthcare-SpecificRedTeaming.webp","datePublished":"2025-07-25T09:44:28+00:00","dateModified":"2026-02-19T08:41:20+00:00","description":"Healthcare LLM red teaming webinar on ethics-based tests and bias detection methods to ensure safe and compliant AI in clinical 
workflows","breadcrumb":{"@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/pacific.ai\/healthcare-specific-red-teaming\/"]}]},{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/#primaryimage","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/07\/Healthcare-SpecificRedTeaming.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/07\/Healthcare-SpecificRedTeaming.webp","width":550,"height":440,"caption":"Healthcare-specific red teaming for medical generative AI applications, featuring Pacific AI CEO David Talby and highlighting AI safety testing, clinical risk evaluation, and responsible healthcare AI governance."},{"@type":"BreadcrumbList","@id":"https:\/\/pacific.ai\/healthcare-specific-red-teaming\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/pacific.ai\/"},{"@type":"ListItem","position":2,"name":"Healthcare-Specific Red Teaming"}]},{"@type":"WebSite","@id":"https:\/\/pacific.ai\/staging\/3667\/#website","url":"https:\/\/pacific.ai\/staging\/3667\/","name":"Pacific AI","description":"","publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/pacific.ai\/staging\/3667\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Organization","@id":"https:\/\/pacific.ai\/staging\/3667\/#organization","name":"Pacific 
AI","url":"https:\/\/pacific.ai\/staging\/3667\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","width":182,"height":41,"caption":"Pacific AI"},"image":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","https:\/\/www.linkedin.com\/company\/pacific-ai\/"]},{"@type":"Person","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186","name":"David Talby","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","caption":"David Talby"},"description":"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. 
David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.","sameAs":["https:\/\/www.linkedin.com\/in\/davidtalby\/"],"url":"https:\/\/pacific.ai\/staging\/3667\/author\/david\/"}]}},"_links":{"self":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1304","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/comments?post=1304"}],"version-history":[{"count":7,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1304\/revisions"}],"predecessor-version":[{"id":2074,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/1304\/revisions\/2074"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media\/1305"}],"wp:attachment":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media?parent=1304"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/categories?post=1304"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/tags?post=1304"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}