{"id":959,"date":"2025-06-02T11:15:58","date_gmt":"2025-06-02T11:15:58","guid":{"rendered":"https:\/\/pacific.ai\/staging\/3667\/?p=959"},"modified":"2026-02-19T09:31:26","modified_gmt":"2026-02-19T09:31:26","slug":"how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557","status":"publish","type":"post","link":"https:\/\/pacific.ai\/staging\/3667\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/","title":{"rendered":"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557"},"content":{"rendered":"<div id=\"bsf_rt_marker\"><\/div><p>Artificial intelligence is increasingly used in healthcare systems, from clinical decision support tools to patient engagement platforms and insurance claims processing. While AI can improve efficiency and quality, it also brings serious risks: automated systems can unintentionally discriminate against patients based on race, ethnicity, sex, language, age, or disability. That\u2019s why any AI system used in healthcare must be aligned with <strong>Section 1557 of the Affordable Care Act (ACA)<\/strong> \u2014 a critical U.S. law that prohibits discrimination in health programs and activities.<\/p>\n<p>In this blog post, we introduce Section 1557 and explain how the <a title=\"AI policies\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-policies\/\"><strong>Pacific AI Governance Policy Suite<\/strong><\/a> provides the policies and controls organizations need to demonstrate compliance. We include detailed examples of how Section 1557 applies to AI, and offer a table mapping each regulatory requirement to the relevant Pacific AI policy and clause.<\/p>\n<h2>What Is Section 1557 of the ACA?<\/h2>\n<p>Section 1557 is the non-discrimination provision of the Affordable Care Act. 
It applies to any health program or activity that receives federal financial assistance\u2014such as Medicare, Medicaid, or funding from the Department of Health and Human Services (HHS).<br \/>\nIt prohibits discrimination on the basis of:<\/p>\n<ul>\n<li>Race<\/li>\n<li>Color<\/li>\n<li>National origin (including language access)<\/li>\n<li>Sex (including sexual orientation and gender identity)<\/li>\n<li>Age<\/li>\n<li>Disability<\/li>\n<\/ul>\n<p>The regulation was strengthened in 2024 to explicitly include digital systems and automated decision-making tools. That means AI used in clinical, operational, or administrative healthcare settings must not create or worsen disparities in access, quality, or outcomes.<\/p>\n<p>Examples of where AI can run afoul of Section 1557 include systems that unintentionally exclude or misjudge certain patients due to biased training data or poor interface design:<\/p>\n<ul>\n<li>A triage chatbot trained on biased data that under-prioritizes Black or Latino patients<\/li>\n<li>An appointment scheduling system that doesn\u2019t work with screen readers<\/li>\n<li>An insurance eligibility algorithm that penalizes patients with non-English language preferences<\/li>\n<\/ul>\n<p>There have already been several high-profile examples where healthcare companies or their technology vendors faced serious consequences for violating Section 1557 or similar anti-discrimination laws. For instance, in 2020, UnitedHealthcare faced scrutiny after an investigation found that an algorithm it used to allocate care coordination resources disproportionately favored white patients over Black patients, even when both had the same level of need.<\/p>\n<p>The biased algorithm led to unequal access to care, prompting lawsuits and renewed federal oversight. 
In another case, a large hospital system implemented a patient portal that was not compatible with screen readers, effectively excluding blind patients from accessing their health records\u2014a violation of disability access rules under Section 504 and Section 1557. These incidents illustrate that non-compliance isn\u2019t just a theoretical risk: it can lead to lawsuits, regulatory penalties, and reputational damage.<\/p>\n<p>The <strong>Pacific AI Governance Policy Suite<\/strong> is a free, open-source set of AI policies designed to help organizations align with U.S. laws and ethical frameworks. Updated quarterly, the suite includes specific controls that support:<\/p>\n<ul>\n<li>Fairness<\/li>\n<li>Accessibility<\/li>\n<li>Risk mitigation<\/li>\n<li>Transparency<\/li>\n<li>Documentation<\/li>\n<\/ul>\n<p>Here\u2019s how each ACA 1557 requirement maps to the Pacific AI suite:<\/p>\n<table class=\"table1\">\n<thead>\n<tr>\n<th>ACA 1557 Requirement<\/th>\n<th>Pacific AI Policy<\/th>\n<th>Clause<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Prevent racial and ethnic discrimination in outcomes<\/td>\n<td>AI Fairness Policy<\/td>\n<td>\u00a76.2, \u00a76.3<\/td>\n<\/tr>\n<tr>\n<td>Support for language access in AI interfaces and outputs<\/td>\n<td>AI Transparency Policy<\/td>\n<td>\u00a76.3<\/td>\n<\/tr>\n<tr>\n<td>Non-discrimination based on sex, gender identity, or sexual orientation<\/td>\n<td>AI Fairness Policy<\/td>\n<td>\u00a74.1, \u00a76.2<\/td>\n<\/tr>\n<tr>\n<td>Accessibility for people with disabilities<\/td>\n<td>AI Safety Policy<\/td>\n<td>\u00a75; AI Transparency Policy &#8211; \u00a74<\/td>\n<\/tr>\n<tr>\n<td>Avoiding age-related bias in models and data<\/td>\n<td>AI Fairness Policy<\/td>\n<td>\u00a75.3<\/td>\n<\/tr>\n<tr>\n<td>Inclusive design and usability testing<\/td>\n<td>AI System Lifecycle Policy<\/td>\n<td>\u00a74, \u00a77.3<\/td>\n<\/tr>\n<tr>\n<td>Regular audits for bias and fairness<\/td>\n<td>AI Fairness Policy<\/td>\n<td>\u00a76; AI Risk 
Management Policy &#8211; \u00a75<\/td>\n<\/tr>\n<tr>\n<td>Documented human oversight and appeal pathways<\/td>\n<td>AI Safety Policy<\/td>\n<td>\u00a74.1, \u00a77; AI Transparency Policy &#8211; \u00a76.1<\/td>\n<\/tr>\n<tr>\n<td>Risk classification for high-impact use cases<\/td>\n<td><a title=\"ai risk management\" href=\"https:\/\/pacific.ai\/staging\/3667\/ai-risk-management-audit\/\">AI Risk Management<\/a> Policy<\/td>\n<td>\u00a74, \u00a76<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Detailed Example: Fairness in Clinical AI Tools<\/h2>\n<p>Imagine a hospital uses an AI tool to predict which patients are at risk for complications after surgery. If the model was trained on data that under-represents patients from certain racial or socioeconomic backgrounds, it may give less accurate predictions for those patients. This could lead to unequal access to post-surgical care or preventative interventions\u2014an outcome that directly violates Section 1557.<\/p>\n<p>The Pacific AI suite helps mitigate this by requiring:<\/p>\n<ul>\n<li>Fairness <a title=\"Guardian: 360\u00b0 Testing &amp; Monitoring for Generative AI Systems\" href=\"https:\/\/pacific.ai\/staging\/3667\/guardian\/\">testing<\/a> disaggregated by race, gender, and language (AI Fairness Policy \u00a76.2)<\/li>\n<li>Documentation of training data and audit results (AI System Lifecycle Policy \u00a77.1)<\/li>\n<li>Human review before deployment in clinical settings (AI Safety Policy \u00a74.1)<\/li>\n<\/ul>\n<p>These requirements create a repeatable framework for equitable design and use of AI in healthcare.<\/p>\n<h2>Conclusion<\/h2>\n<p>AI systems used in healthcare must be designed not only for accuracy and efficiency\u2014but also for equity. Section 1557 of the ACA sets a clear legal expectation that no patient should be excluded, harmed, or disadvantaged by AI based on race, language, gender identity, age, or disability. 
As AI technologies become more deeply embedded in care delivery, payment systems, and patient communication tools, the risks of discrimination will only increase\u2014especially when these systems are opaque or trained on biased data.<\/p>\n<p>Organizations cannot treat compliance with ACA 1557 as a one-time review or checklist exercise. Instead, they must take a systematic, policy-driven approach that embeds fairness, accessibility, and transparency into every stage of AI system development and deployment. This is where the Pacific AI Governance Policy Suite provides tremendous value. It translates legal obligations into operational procedures, role-based responsibilities, and documented <a title=\"What is AI auditing\" href=\"https:\/\/pacific.ai\/staging\/3667\/what-is-a-responsible-ai-audit\/\">audit trails<\/a> that support both proactive prevention and responsive mitigation.<\/p>\n<p>Adopting the Pacific AI suite not only helps organizations align with ACA 1557 but also improves internal accountability and public trust. It enables healthcare providers, payers, and health tech companies to demonstrate their commitment to equitable AI\u2014not just in words, but in policy and practice. By doing so, they create safer, more inclusive healthcare systems that serve the full diversity of their patient populations.<\/p>\n<p><strong>Download the full Pacific AI suite at <a href=\"https:\/\/pacific.ai\/staging\/3667\">https:\/\/pacific.ai\/staging\/3667<\/a><\/strong><\/p>\n<p><strong>Need help mapping your AI systems to ACA 1557? 
Contact <a href=\"mailto:info@pacific.ai\">info@pacific.ai<\/a><\/strong><\/p>\n<h2>FAQ<\/h2>\n<p><strong>What specific obligations do covered entities have under Section 1557 when using AI?<\/strong><\/p>\n<p>Health systems must identify AI-based decision support tools and make \u201creasonable efforts\u201d to assess and mitigate bias, especially if algorithms include proxies for protected characteristics like race or disability.<\/p>\n<p><strong>How does the Pacific AI Governance Policy Suite support compliance with Section 1557?<\/strong><\/p>\n<p>The suite provides an AI Fairness and Risk Management policy, audit workflows for AI bias detection, and bias mitigation procedures\u2014all mapped to obligations under Section 1557 for proactive identification and prevention of discriminatory impacts.<\/p>\n<p><strong>Why is it important to address proxy discrimination in Section 1557 compliance?<\/strong><\/p>\n<p>Section 1557 prohibits discrimination based on race and other protected characteristics, including when algorithms use proxy variables (like ZIP code) that indirectly perpetuate bias. Covered entities must analyze AI tools to prevent these hidden forms of discrimination.<\/p>\n<p><strong>What are the consequences of non-compliance with Section 1557 for AI systems?<\/strong><\/p>\n<p>Violations can lead to administrative enforcement actions by HHS OCR, including remediation plans and potential withdrawal of federal funding. 
Entities also must have grievance policies and designated compliance officers as part of enforcement protocols.<\/p>\n\n\n<script type=\"application\/ld+json\">\n{\n  \"@context\": \"https:\/\/schema.org\",\n  \"@type\": \"FAQPage\",\n  \"mainEntity\": [\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What specific obligations do covered entities have under Section 1557 when using AI?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Health systems must identify AI-based decision support tools and make \u201creasonable efforts\u201d to assess and mitigate bias, especially if algorithms include proxies for protected characteristics like race or disability.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"How does the Pacific AI Governance Policy Suite support compliance with Section 1557?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"The suite provides an AI Fairness and Risk Management policy, audit workflows for AI bias detection, and bias mitigation procedures\u2014all mapped to obligations under Section 1557 for proactive identification and prevention of discriminatory impacts.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"Why is it important to address proxy discrimination in Section 1557 compliance?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Section 1557 prohibits discrimination based on race and other protected characteristics, including when algorithms use proxy variables (like ZIP code) that indirectly perpetuate bias. 
Covered entities must analyze AI tools to prevent these hidden forms of discrimination.\"\n      }\n    },\n    {\n      \"@type\": \"Question\",\n      \"name\": \"What are the consequences of non-compliance with Section 1557 for AI systems?\",\n      \"acceptedAnswer\": {\n        \"@type\": \"Answer\",\n        \"text\": \"Violations can lead to administrative enforcement actions by HHS OCR, including remediation plans and potential withdrawal of federal funding. Entities also must have grievance policies and designated compliance officers as part of enforcement protocols.\"\n      }\n    }\n  ]\n}\n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence is increasingly used in healthcare systems, from clinical decision support tools to patient engagement platforms and insurance claims processing. While AI can improve efficiency and quality, it also brings serious risks: automated systems can unintentionally discriminate against patients based on race, ethnicity, sex, language, age, or disability. 
That\u2019s why any AI system used [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":961,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"nf_dc_page":"","content-type":"","inline_featured_image":false,"footnotes":""},"categories":[118],"tags":[],"class_list":["post-959","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-articles"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557 - Pacific AI<\/title>\n<meta name=\"description\" content=\"Pacific AI\u2019s Governance Policy Suite helps organizations comply with ACA Section 1557 by promoting nondiscrimination and equity in AI system design and use.\" \/>\n<meta name=\"robots\" content=\"noindex, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557 - Pacific AI\" \/>\n<meta property=\"og:description\" content=\"Pacific AI\u2019s Governance Policy Suite helps organizations comply with ACA Section 1557 by promoting nondiscrimination and equity in AI system design and use.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/\" \/>\n<meta property=\"og:site_name\" content=\"Pacific AI\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-06-02T11:15:58+00:00\" \/>\n<meta property=\"article:modified_time\" 
content=\"2026-02-19T09:31:26+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/06\/ACA_Section1557.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"550\" \/>\n\t<meta property=\"og:image:height\" content=\"440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"David Talby\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"David Talby\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/\"},\"author\":{\"name\":\"David Talby\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\"},\"headline\":\"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 
1557\",\"datePublished\":\"2025-06-02T11:15:58+00:00\",\"dateModified\":\"2026-02-19T09:31:26+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/\"},\"wordCount\":1144,\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/ACA_Section1557.webp\",\"articleSection\":[\"Articles\"],\"inLanguage\":\"en\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/\",\"name\":\"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557 - Pacific AI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/ACA_Section1557.webp\",\"datePublished\":\"2025-06-02T11:15:58+00:00\",\"dateModified\":\"2026-02-19T09:31:26+00:00\",\"description\":\"Pacific AI\u2019s Governance Policy Suite helps organizations comply with ACA Section 1557 by promoting nondiscrimination and equity in AI system design and 
use.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/#breadcrumb\"},\"inLanguage\":\"en\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/#primaryimage\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/ACA_Section1557.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/ACA_Section1557.webp\",\"width\":550,\"height\":440,\"caption\":\"ACA Section 1557 compliance illustration showing Pacific AI Governance Policy Suite supporting nondiscrimination, healthcare equity, and responsible AI governance under U.S. 
healthcare regulations.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/pacific.ai\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#website\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"name\":\"Pacific AI\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#organization\",\"name\":\"Pacific AI\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/06\\\/site_logo.svg\",\"width\":182,\"height\":41,\"caption\":\"Pacific 
AI\"},\"image\":{\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/people\\\/Pacific-AI\\\/61566807347567\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/pacific-ai\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/#\\\/schema\\\/person\\\/8a2b4d5d75c8752d83ae6bb1d44e0186\",\"name\":\"David Talby\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"contentUrl\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/wp-content\\\/uploads\\\/2025\\\/03\\\/David_portret-96x96.webp\",\"caption\":\"David Talby\"},\"description\":\"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/davidtalby\\\/\"],\"url\":\"https:\\\/\\\/pacific.ai\\\/staging\\\/3667\\\/author\\\/david\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557 - Pacific AI","description":"Pacific AI\u2019s Governance Policy Suite helps organizations comply with ACA Section 1557 by promoting nondiscrimination and equity in AI system design and use.","robots":{"index":"noindex","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557 - Pacific AI","og_description":"Pacific AI\u2019s Governance Policy Suite helps organizations comply with ACA Section 1557 by promoting nondiscrimination and equity in AI system design and use.","og_url":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/","og_site_name":"Pacific AI","article_publisher":"https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","article_published_time":"2025-06-02T11:15:58+00:00","article_modified_time":"2026-02-19T09:31:26+00:00","og_image":[{"width":550,"height":440,"url":"https:\/\/pacific.ai\/wp-content\/uploads\/2025\/06\/ACA_Section1557.webp","type":"image\/webp"}],"author":"David Talby","twitter_card":"summary_large_image","twitter_misc":{"Written by":"David Talby","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/#article","isPartOf":{"@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/"},"author":{"name":"David Talby","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186"},"headline":"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557","datePublished":"2025-06-02T11:15:58+00:00","dateModified":"2026-02-19T09:31:26+00:00","mainEntityOfPage":{"@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/"},"wordCount":1144,"publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"image":{"@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/ACA_Section1557.webp","articleSection":["Articles"],"inLanguage":"en"},{"@type":"WebPage","@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/","url":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/","name":"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557 - Pacific 
AI","isPartOf":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#website"},"primaryImageOfPage":{"@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/#primaryimage"},"image":{"@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/#primaryimage"},"thumbnailUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/ACA_Section1557.webp","datePublished":"2025-06-02T11:15:58+00:00","dateModified":"2026-02-19T09:31:26+00:00","description":"Pacific AI\u2019s Governance Policy Suite helps organizations comply with ACA Section 1557 by promoting nondiscrimination and equity in AI system design and use.","breadcrumb":{"@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/#breadcrumb"},"inLanguage":"en","potentialAction":[{"@type":"ReadAction","target":["https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/"]}]},{"@type":"ImageObject","inLanguage":"en","@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/#primaryimage","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/ACA_Section1557.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/ACA_Section1557.webp","width":550,"height":440,"caption":"ACA Section 1557 compliance illustration showing Pacific AI Governance Policy Suite supporting nondiscrimination, healthcare equity, and responsible AI governance under U.S. 
healthcare regulations."},{"@type":"BreadcrumbList","@id":"https:\/\/pacific.ai\/how-the-pacific-ai-governance-policy-suite-supports-compliance-with-aca-section-1557\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/pacific.ai\/"},{"@type":"ListItem","position":2,"name":"How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557"}]},{"@type":"WebSite","@id":"https:\/\/pacific.ai\/staging\/3667\/#website","url":"https:\/\/pacific.ai\/staging\/3667\/","name":"Pacific AI","description":"","publisher":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/pacific.ai\/staging\/3667\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en"},{"@type":"Organization","@id":"https:\/\/pacific.ai\/staging\/3667\/#organization","name":"Pacific AI","url":"https:\/\/pacific.ai\/staging\/3667\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/06\/site_logo.svg","width":182,"height":41,"caption":"Pacific AI"},"image":{"@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/people\/Pacific-AI\/61566807347567\/","https:\/\/www.linkedin.com\/company\/pacific-ai\/"]},{"@type":"Person","@id":"https:\/\/pacific.ai\/staging\/3667\/#\/schema\/person\/8a2b4d5d75c8752d83ae6bb1d44e0186","name":"David 
Talby","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","url":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","contentUrl":"https:\/\/pacific.ai\/staging\/3667\/wp-content\/uploads\/2025\/03\/David_portret-96x96.webp","caption":"David Talby"},"description":"David Talby is a CTO at Pacific AI, helping healthcare &amp; life science companies put AI to good use. David is the creator of Spark NLP \u2013 the world\u2019s most widely used natural language processing library in the enterprise. He has extensive experience building and running web-scale software platforms and teams \u2013 in startups, for Microsoft\u2019s Bing in the US and Europe, and to scale Amazon\u2019s financial systems in Seattle and the UK. David holds a PhD in computer science and master\u2019s degrees in both computer science and business administration.","sameAs":["https:\/\/www.linkedin.com\/in\/davidtalby\/"],"url":"https:\/\/pacific.ai\/staging\/3667\/author\/david\/"}]}},"_links":{"self":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/959","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/comments?post=959"}],"version-history":[{"count":12,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/959\/revisions"}],"predecessor-version":[{"id":2041,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/posts\/959\/revisions\/2041"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media\/961"}],"wp:attachment"
:[{"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/media?parent=959"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/categories?post=959"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/pacific.ai\/staging\/3667\/wp-json\/wp\/v2\/tags?post=959"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}