{"id":293,"date":"2024-11-18T09:52:27","date_gmt":"2024-11-18T09:52:27","guid":{"rendered":"https:\/\/pacific.ai\/staging\/3667\/?p=293"},"modified":"2026-02-19T11:23:36","modified_gmt":"2026-02-19T11:23:36","slug":"identifying-and-mitigating-bias-in-ai-models-for-recruiting","status":"publish","type":"post","link":"https:\/\/pacific.ai\/staging\/3667\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/","title":{"rendered":"Identifying and Mitigating Bias in AI Models for Recruiting"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Identifying and Mitigating Bias in AI Models for Recruiting\" width=\"580\" height=\"326\" src=\"https:\/\/www.youtube.com\/embed\/QUjTItDvYCY?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>In today\u2019s landscape of AI-driven recruitment, candidate-job matching models play a pivotal role in enhancing the hiring process\u2019s efficiency and effectiveness. This necessitates rigorous evaluation to ensure fairness and equity.<\/p>\n\n\n\n<p>This talk will delve into using LangTest, a sophisticated testing framework, to rigorously assess and mitigate bias within such models.<\/p>\n\n\n\n<p>Featuring two expert speakers, the session will first explore the technical intricacies of the model, its architecture, underlying algorithms, and integration with LangTest to identify and address bias. 
Transitioning to business implications, we\u2019ll emphasize the importance of unbiased models and the value of leveraging AI to foster diverse and inclusive workplaces.<\/p>\n\n\n\n<p>We\u2019ll highlight the risks of unaddressed bias, such as legal ramifications and reputational damage, alongside the strategic benefits of committing to consistent and fair talent evaluation practices. Attendees will gain a comprehensive understanding of both the technical and business aspects of ensuring unbiased AI in recruitment.<\/p>\n\n\n<h2>FAQ<\/h2>\n<p><strong>How is bias detected in AI-based recruiting tools?<\/strong><\/p>\n<p>Bias is detected through fairness testing on candidate data slices and controlled \u201cvignette\u201d comparisons\u2014for example, swapping demographic information to see if hiring outcomes change unfairly.<\/p>\n<p><strong>What types of bias commonly emerge in AI hiring systems?<\/strong><\/p>\n<p>Common biases include allocative bias (unequal access to interviews), representational bias (stereotyping), and performance bias (worse match accuracy for certain groups).<\/p>\n<p><strong>What practices help reduce bias in AI recruitment platforms?<\/strong><\/p>\n<p>Effective measures include blind resume screening, diverse candidate training data, regular bias audits, human oversight, and continuous monitoring of hiring outcomes.<\/p>\n<p><strong>How effective are anonymization techniques in reducing hiring bias?<\/strong><\/p>\n<p>Studies show anonymizing identifiers like names, gender, and ethnicity can reduce bias\u2014Llama 3.1 showed the lowest bias when anonymization was applied.<\/p>\n<p><strong>Who should be responsible for evaluating bias in AI recruiting tools?<\/strong><\/p>\n<p>Bias evaluation should be performed by both AI developers and HR teams using structured benchmarks (like LangTest or Aequitas) and feedback loops to ensure fairness across demographics.<\/p>\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": 
\"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How is bias detected in AI-based recruiting tools?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Bias is detected through fairness testing on candidate data slices and clinical \u201cvignette\u201d comparisons\u2014for example, swapping demographic information to see if hiring outcomes change unfairly.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What types of bias commonly emerge in AI hiring systems?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Common biases include allocative bias (unequal access to interviews), representational bias (stereotyping), and performance bias (worse match accuracy for certain groups).\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What practices help reduce bias in AI recruitment platforms?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Effective measures include: blind resume screening, diverse candidate training data, regular bias audits, human oversight, and continuous monitoring of hiring outcomes.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How effective are anonymization techniques in reducing hiring bias?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Studies show anonymizing identifiers like names, gender, and ethnicity can reduce bias\u2014Llama 3.1 showed lowest bias when anonymization was applied.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Who should be responsible for evaluating bias in AI recruiting tools?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Bias evaluation should be performed by both AI developers and HR teams using structured benchmarks (like LangTest or Aequitas) 
and feedback loops to ensure fairness across demographics.\"\n      }\n    }\n  ]\n}\n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>In today\u2019s landscape of AI-driven recruitment, candidate-job matching models play a pivotal role in enhancing the hiring process\u2019s efficiency and effectiveness. This necessitates rigorous evaluation to ensure fairness and equity. This talk will delve into using LangTest, a sophisticated testing framework, to assess and mitigate bias within such models. Featuring two expert speakers, the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":757,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"nf_dc_page":"","content-type":"","inline_featured_image":false,"footnotes":""},"categories":[119,10],"tags":[],"class_list":["post-293","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-case-studies","category-video"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Identifying and Mitigating Bias in AI Models for Recruiting - Pacific AI<\/title>\n<meta name=\"description\" content=\"Discover in this video how LangTest helps detect and reduce bias in AI recruitment models, ensuring fair hiring and promoting diversity in the workplace\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Identifying and Mitigating Bias in AI Models for Recruiting - Pacific AI\" \/>\n<meta property=\"og:description\" content=\"Discover in this video how LangTest helps detect and reduce bias in AI recruitment models, ensuring fair hiring and promoting diversity in the workplace\" \/>\n<meta 
property=\"og:url\" content=\"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/\" \/>\n<meta property=\"og:site_name\" content=\"Pacific AI\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-11-18T09:52:27+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-19T11:23:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/pacific.ai\/wp-content\/uploads\/2024\/11\/web_4.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"550\" \/>\n\t<meta property=\"og:image:height\" content=\"440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"David Talby\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"David Talby\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/\"},\"author\":{\"name\":\"David Talby\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\"},\"headline\":\"Identifying and Mitigating Bias in AI Models for Recruiting\",\"datePublished\":\"2024-11-18T09:52:27+00:00\",\"dateModified\":\"2026-02-19T11:23:36+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/\"},\"wordCount\":331,\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_4.webp\",\"articleSection\":[\"Case studies\",\"Video\"],\"inLanguage\":\"en\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/\",\"name\":\"Identifying and Mitigating Bias in AI Models for Recruiting - Pacific 
AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_4.webp\",\"datePublished\":\"2024-11-18T09:52:27+00:00\",\"dateModified\":\"2026-02-19T11:23:36+00:00\",\"description\":\"Discover the video how LangTest helps detect and reduce bias in AI recruitment models, ensuring fair hiring and promoting diversity in the workplace\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/#primaryimage\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_4.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2024\\\/11\\\/web_4.webp\",\"width\":550,\"height\":440,\"caption\":\"Identifying and mitigating bias in AI recruiting models, featuring data science and HR technology experts discussing fairness, transparency, and responsible AI practices in hiring and talent selection 
systems.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/pacific.ai\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Identifying and Mitigating Bias in AI Models for Recruiting\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"name\":\"Pacific AI\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\",\"name\":\"Pacific AI\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"width\":182,\"height\":41,\"caption\":\"Pacific 
AI\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/Pacific-AI\\\/61566807347567\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/pacific-ai\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\",\"name\":\"David Talby\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"caption\":\"David Talby\"},\"description\":\"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/davidtalby\\\/\"],\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/author\\\/david\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Identifying and Mitigating Bias in AI Models for Recruiting - Pacific AI","description":"Discover the video how LangTest helps detect and reduce bias in AI recruitment models, ensuring fair hiring and promoting diversity in the workplace","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"Identifying and Mitigating Bias in AI Models for Recruiting - Pacific AI","og_description":"Discover the video how LangTest helps detect and reduce bias in AI recruitment models, ensuring fair hiring and promoting diversity in the workplace","og_url":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/","og_site_name":"Pacific AI","article_publisher":"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","article_published_time":"2024-11-18T09:52:27+00:00","article_modified_time":"2026-02-19T11:23:36+00:00","og_image":[{"width":550,"height":440,"url":"https:\/\/pacific.ai\/wp-content\/uploads\/2024\/11\/web_4.webp","type":"image\/webp"}],"author":"David Talby","twitter_card":"summary_large_image","twitter_misc":{"Written by":"David Talby","Est. 
reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/#article","isPartOf":{"@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/"},"author":{"name":"David Talby","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186"},"headline":"Identifying and Mitigating Bias in AI Models for Recruiting","datePublished":"2024-11-18T09:52:27+00:00","dateModified":"2026-02-19T11:23:36+00:00","mainEntityOfPage":{"@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/"},"wordCount":331,"publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"image":{"@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_4.webp","articleSection":["Case studies","Video"],"inLanguage":"en"},{"@type":"WebPage","@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/","url":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/","name":"Identifying and Mitigating Bias in AI Models for Recruiting - Pacific AI","isPartOf":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#website"},"primaryImageOfPage":{"@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/#primaryimage"},"image":{"@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_4.webp","datePublished":"2024-11-18T09:52:27+00:00","dateModified":"2026-02-19T11:23:36+00:00","description":"Discover the video how LangTest helps detect and reduce bias in AI recruitment models, ensuring fair hiring and promoting diversity in the 
workplace","breadcrumb":{"@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/"]}]},{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/#primaryimage","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_4.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2024\/11\/web_4.webp","width":550,"height":440,"caption":"Identifying and mitigating bias in AI recruiting models, featuring data science and HR technology experts discussing fairness, transparency, and responsible AI practices in hiring and talent selection systems."},{"@type":"BreadcrumbList","@id":"https:\/\/pacific.ai\/identifying-and-mitigating-bias-in-ai-models-for-recruiting\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/pacific.ai\/"},{"@type":"ListItem","position":2,"name":"Identifying and Mitigating Bias in AI Models for Recruiting"}]},{"@type":"WebSite","@id":"https:\/\/pacific.ai\/staging\/3667\/#website","url":"https:\/\/pacific.ai\/staging\/3667\/","name":"Pacific AI","description":"","publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/pacific.ai\/staging\/3667\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Organization","@id":"https:\/\/pacific.ai\/staging\/3667\/#organization","name":"Pacific 
AI","url":"https:\/\/pacific.ai\/staging\/3667\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","width":182,"height":41,"caption":"Pacific AI"},"image":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","https:\/\/www.linkedin.com\/company\/pacific-ai\/"]},{"@type":"Person","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186","name":"David Talby","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","caption":"David Talby"},"description":"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. 
David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.","sameAs":["https:\/\/www.linkedin.com\/in\/davidtalby\/"],"url":"https:\/\/pacific.ai\/staging\/3667\/author\/david\/"}]}},"_links":{"self":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/293","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/comments?post=293"}],"version-history":[{"count":10,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/293\/revisions"}],"predecessor-version":[{"id":2056,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/293\/revisions\/2056"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media\/757"}],"wp:attachment":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media?parent=293"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/categories?post=293"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/tags?post=293"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}