{"id":281,"date":"2024-11-15T16:45:38","date_gmt":"2024-11-15T16:45:38","guid":{"rendered":"https:\/\/pacific.ai\/staging\/3667\/?p=281"},"modified":"2026-02-19T11:28:51","modified_gmt":"2026-02-19T11:28:51","slug":"automated-testing-of-bias-fairness-and-robustness","status":"publish","type":"post","link":"https:\/\/pacific.ai\/staging\/3667\/automated-testing-of-bias-fairness-and-robustness\/","title":{"rendered":"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions\" width=\"580\" height=\"326\" src=\"https:\/\/www.youtube.com\/embed\/e2KrIgI17KE?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>Current US legislation prohibits AI applications in recruiting, healthcare, and advertising from discrimination and bias.<\/p>\n\n\n\n<p>This requires organizations who deploy such systems to test and prove that their solutions are robust and unbiased \u2013 in the same way that they\u2019re required to comply with security and privacy regulations. 
This session introduces Pacific AI, a no-code tool built on top of the LangTest library, which applies Generative AI to:<\/p>\n\n\n\n<ul>\n  <li>Automatically generate tests for accuracy, robustness, bias, and fairness for text classification and entity recognition tasks<\/li>\n  <li>Automatically run test suites, create detailed model report cards, and compare different models against the same test suite<\/li>\n  <li>Publish, share, and reuse AI test suites across teams and projects<\/li>\n  <li>Automatically generate synthetic training data to augment model training and minimize common model bias and reliability issues<\/li>\n<\/ul>\n\n\n\n<p>This session then presents how John Snow Labs uses Pacific AI to test and improve its own healthcare-specific language models.<\/p>\n\n\n<h2>FAQ<\/h2>\n<p><strong>What can automated governance tools test in generative AI systems?<\/strong><\/p>\n<p>They can evaluate accuracy, robustness (e.g., typo tolerance), bias, and fairness for tasks like text classification and entity recognition using predefined or custom test suites.<\/p>\n<p><strong>How do tools generate test cases for bias and fairness automatically?<\/strong><\/p>\n<p>Generative AI generates synthetic variants (e.g., names, demographic profiles, adversarial prompts), enabling coverage of sensitive attributes like ethnicity or age for extensive bias testing.<\/p>\n<p><strong>Can you compare model versions using automated test suites?<\/strong><\/p>\n<p>Yes\u2014these tools produce detailed report cards and support side-by-side model comparison on standardized test suites, tracking performance changes over time.<\/p>\n<p><strong>How is accuracy and robustness evaluated in non-technical terms?<\/strong><\/p>\n<p>Tests simulate noisy inputs (e.g., typos, paraphrasing) and assess if model outputs remain correct or consistent, providing pass\/fail assessments for clarity.<\/p>\n<p><strong>What benefits does automated testing bring to domain 
experts?<\/strong><\/p>\n<p>Domain specialists can create, run, and share tests\u2014without coding\u2014ensuring models in sensitive fields (like healthcare, recruiting) are compliant with fairness, bias mitigation, and legal standards.<\/p>\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What can automated governance tools test in generative AI systems?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"They can evaluate accuracy, robustness (e.g., typo tolerance), bias, and fairness for tasks like text classification and entity recognition using predefined or custom test suites.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How do tools generate test cases for bias and fairness automatically?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Generative AI generates synthetic variants (e.g., names, demographic profiles, adversarial prompts), enabling coverage of sensitive attributes like ethnicity or age for extensive bias testing.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Can you compare model versions using automated test suites?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Yes\u2014these tools produce detailed report cards and support side-by-side model comparison on standardized test suites, tracking performance changes over time.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How is accuracy and robustness evaluated in non-technical terms?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Tests simulate noisy inputs (e.g., typos, paraphrasing) and assess if model outputs remain correct or consistent, providing pass\/fail assessments for clarity.\"\n      }\n    },\n    {\n      
\"@type\": \"Question\",\n      \"name\": \"What benefits does automated testing bring to domain experts?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Domain specialists can create, run, and share tests\u2014without coding\u2014ensuring models in sensitive fields (like healthcare, recruiting) are compliant with fairness, bias mitigation, and legal standards.\"\n      }\n    }\n  ]\n}\n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>Current US legislation prohibits AI applications in recruiting, healthcare, and advertising from discrimination and bias. This requires organizations who deploy such systems to test and prove that their solutions are robust and unbiased \u2013 in the same way that they\u2019re required to comply with security and privacy regulations. This session introduces Pacific AI, a no-code [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":760,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"nf_dc_page":"","content-type":"","inline_featured_image":false,"footnotes":""},"categories":[10,116],"tags":[],"class_list":["post-281","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-video","category-webinars"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions - Pacific AI<\/title>\n<meta name=\"description\" content=\"Free seminar recording - see how Pacific AI uses Generative AI to eliminate bias and ensure compliance in healthcare, recruiting, and advertising NLP\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta 
property=\"og:title\" content=\"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions - Pacific AI\" \/>\n<meta property=\"og:description\" content=\"Free seminar recording - see how Pacific AI uses Generative AI to eliminate bias and ensure compliance in healthcare, recruiting, and advertising NLP\" \/>\n<meta property=\"og:url\" content=\"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/\" \/>\n<meta property=\"og:site_name\" content=\"Pacific AI\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-11-15T16:45:38+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-19T11:28:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/pacific.ai\/wp-content\/uploads\/2024\/11\/web_2.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"550\" \/>\n\t<meta property=\"og:image:height\" content=\"440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"David Talby\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"David Talby\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/\"},\"author\":{\"name\":\"David Talby\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\"},\"headline\":\"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions\",\"datePublished\":\"2024-11-15T16:45:38+00:00\",\"dateModified\":\"2026-02-19T11:28:51+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/\"},\"wordCount\":341,\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_2.webp\",\"articleSection\":[\"Video\",\"Webinars\"],\"inLanguage\":\"en\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/\",\"name\":\"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions - Pacific 
AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_2.webp\",\"datePublished\":\"2024-11-15T16:45:38+00:00\",\"dateModified\":\"2026-02-19T11:28:51+00:00\",\"description\":\"Free seminar recording - see how Pacific AI uses Generative AI to eliminate bias and ensure compliance in healthcare, recruiting, and advertising NLP\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/#primaryimage\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_2.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_2.webp\",\"width\":550,\"height\":440,\"caption\":\"Automated testing of bias, fairness, and robustness in generative AI solutions, highlighting responsible AI evaluation with expert insights on model reliability, risk detection, and governance-ready validation.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/automated-testing-of-bias-fairness-and-robustness\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/pacific.ai\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Automated Testing of Bias, Fairness, and 
Robustness of Generative AI Solutions\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"name\":\"Pacific AI\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\",\"name\":\"Pacific AI\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"width\":182,\"height\":41,\"caption\":\"Pacific AI\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/Pacific-AI\\\/61566807347567\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/pacific-ai\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\",\"name\":\"David 
Talby\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"caption\":\"David Talby\"},\"description\":\"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/davidtalby\\\/\"],\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/author\\\/david\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions - Pacific AI","description":"Free seminar recording - see how Pacific AI uses Generative AI to eliminate bias and ensure compliance in healthcare, recruiting, and advertising NLP","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions - Pacific AI","og_description":"Free seminar recording - see how Pacific AI uses Generative AI to eliminate bias and ensure compliance in healthcare, recruiting, and advertising NLP","og_url":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/","og_site_name":"Pacific AI","article_publisher":"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","article_published_time":"2024-11-15T16:45:38+00:00","article_modified_time":"2026-02-19T11:28:51+00:00","og_image":[{"width":550,"height":440,"url":"https:\/\/pacific.ai\/wp-content\/uploads\/2024\/11\/web_2.webp","type":"image\/webp"}],"author":"David Talby","twitter_card":"summary_large_image","twitter_misc":{"Written by":"David Talby","Est. 
reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/#article","isPartOf":{"@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/"},"author":{"name":"David Talby","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186"},"headline":"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions","datePublished":"2024-11-15T16:45:38+00:00","dateModified":"2026-02-19T11:28:51+00:00","mainEntityOfPage":{"@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/"},"wordCount":341,"publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"image":{"@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_2.webp","articleSection":["Video","Webinars"],"inLanguage":"en"},{"@type":"WebPage","@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/","url":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/","name":"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions - Pacific AI","isPartOf":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#website"},"primaryImageOfPage":{"@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/#primaryimage"},"image":{"@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_2.webp","datePublished":"2024-11-15T16:45:38+00:00","dateModified":"2026-02-19T11:28:51+00:00","description":"Free seminar recording - see how Pacific AI uses Generative AI to eliminate bias and ensure compliance in healthcare, recruiting, and advertising 
NLP","breadcrumb":{"@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/"]}]},{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/#primaryimage","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_2.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_2.webp","width":550,"height":440,"caption":"Automated testing of bias, fairness, and robustness in generative AI solutions, highlighting responsible AI evaluation with expert insights on model reliability, risk detection, and governance-ready validation."},{"@type":"BreadcrumbList","@id":"https:\/\/pacific.ai\/automated-testing-of-bias-fairness-and-robustness\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/pacific.ai\/"},{"@type":"ListItem","position":2,"name":"Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions"}]},{"@type":"WebSite","@id":"https:\/\/pacific.ai\/staging\/3667\/#website","url":"https:\/\/pacific.ai\/staging\/3667\/","name":"Pacific AI","description":"","publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/pacific.ai\/staging\/3667\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Organization","@id":"https:\/\/pacific.ai\/staging\/3667\/#organization","name":"Pacific 
AI","url":"https:\/\/pacific.ai\/staging\/3667\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","width":182,"height":41,"caption":"Pacific AI"},"image":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","https:\/\/www.linkedin.com\/company\/pacific-ai\/"]},{"@type":"Person","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186","name":"David Talby","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","caption":"David Talby"},"description":"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. 
David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.","sameAs":["https:\/\/www.linkedin.com\/in\/davidtalby\/"],"url":"https:\/\/pacific.ai\/staging\/3667\/author\/david\/"}]}},"_links":{"self":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/281","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/comments?post=281"}],"version-history":[{"count":10,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/281\/revisions"}],"predecessor-version":[{"id":2057,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/281\/revisions\/2057"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media\/760"}],"wp:attachment":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media?parent=281"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/categories?post=281"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/tags?post=281"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}