How the Pacific AI Governance Policy Suite Supports Compliance with ACA Section 1557

Artificial intelligence is increasingly used in healthcare systems, from clinical decision support tools to patient engagement platforms and insurance claims processing. While AI can improve efficiency and quality, it also brings serious risks: automated systems can unintentionally discriminate against patients based on race, ethnicity, sex, language, age, or disability. That’s why any AI system used in healthcare must be aligned with Section 1557 of the Affordable Care Act (ACA) — a critical U.S. law that prohibits discrimination in health programs and activities.

In this blog post, we introduce Section 1557 and explain how the Pacific AI Governance Policy Suite provides the policies and controls organizations need to demonstrate compliance. We include detailed examples of how Section 1557 applies to AI, and offer a table mapping each regulatory requirement to the relevant Pacific AI policy and clause.

What Is Section 1557 of the ACA?

Section 1557 is the non-discrimination provision of the Affordable Care Act. It applies to any health program or activity that receives federal financial assistance—such as Medicare, Medicaid, or funding from the Department of Health and Human Services (HHS).
It prohibits discrimination on the basis of:

  • Race
  • Color
  • National origin (including language access)
  • Sex (including sexual orientation and gender identity)
  • Age
  • Disability

The implementing regulation was strengthened in 2024 to explicitly cover telehealth and patient care decision support tools, including AI and other automated decision-making systems. That means AI used in clinical, operational, or administrative healthcare settings must not create or worsen disparities in access, quality, or outcomes.

Examples of where AI can run afoul of Section 1557 include systems that unintentionally exclude or misjudge certain patients due to biased training data or poor interface design (a sketch of how the last kind of issue might be caught follows the list):

  • A triage chatbot trained on biased data that under-prioritizes Black or Latino patients
  • An appointment scheduling system that doesn’t work with screen readers
  • An insurance eligibility algorithm that penalizes patients with non-English language preferences
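
To make this concrete, here is a minimal sketch in Python of a disaggregated approval-rate audit for the third scenario. The column names, the synthetic data, and the 0.8 threshold (the "four-fifths" heuristic borrowed from employment-discrimination practice) are illustrative assumptions, not requirements drawn from Section 1557 or the Pacific AI suite.

```python
# Minimal audit sketch: compare eligibility approval rates across language
# preference groups. Columns, data, and the 0.8 threshold are illustrative
# assumptions, not anything prescribed by Section 1557 itself.
import pandas as pd

def approval_rate_audit(df: pd.DataFrame,
                        group_col: str = "language_pref",
                        outcome_col: str = "approved") -> pd.DataFrame:
    """Compare per-group approval rates against the best-served group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("approval_rate")
    report = rates.to_frame()
    # Ratio below 0.8 vs. the highest-rate group is a common disparity heuristic.
    report["ratio_vs_highest"] = report["approval_rate"] / rates.max()
    report["flag"] = report["ratio_vs_highest"] < 0.8
    return report

# Synthetic decision log, illustrative only:
decisions = pd.DataFrame({
    "language_pref": ["en", "en", "es", "es", "vi", "vi"],
    "approved":      [1,    1,    1,    0,    1,    0],
})
print(approval_rate_audit(decisions))
```

An audit like this does not prove discrimination on its own, but it surfaces the disparities that trigger deeper review, and it is exactly the kind of routine check the incidents below show is missing in practice.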

There have already been several high-profile examples where healthcare companies or their technology vendors faced serious consequences for violating Section 1557 or similar anti-discrimination laws. For instance, in 2019, UnitedHealth Group's Optum unit faced scrutiny after researchers found that an algorithm widely used to allocate care coordination resources disproportionately favored white patients over Black patients, even when both had the same level of need.

The biased algorithm led to unequal access to care, prompting regulatory investigations and renewed federal scrutiny. In another case, a large hospital system implemented a patient portal that was not compatible with screen readers, effectively excluding blind patients from accessing their health records—a violation of disability access rules under Section 504 and Section 1557. These incidents illustrate that non-compliance isn't just a theoretical risk: it can lead to lawsuits, regulatory penalties, and reputational damage.

What Is the Pacific AI Governance Policy Suite?

The Pacific AI Governance Policy Suite is a free, open-source set of AI policies designed to help organizations align with U.S. laws and ethical frameworks. Updated quarterly, the suite includes specific controls that support:

  • Fairness
  • Accessibility
  • Risk mitigation
  • Transparency
  • Documentation

Here’s how each ACA 1557 requirement maps to the Pacific AI suite:

| ACA 1557 Requirement | Pacific AI Policy | Clause |
|---|---|---|
| Prevent racial and ethnic discrimination in outcomes | AI Fairness Policy | §6.2, §6.3 |
| Support for language access in AI interfaces and outputs | AI Transparency Policy | §6.3 |
| Non-discrimination based on sex, gender identity, or sexual orientation | AI Fairness Policy | §4.1, §6.2 |
| Accessibility for people with disabilities | AI Safety Policy; AI Transparency Policy | §5; §4 |
| Avoiding age-related bias in models and data | AI Fairness Policy | §5.3 |
| Inclusive design and usability testing | AI System Lifecycle Policy | §4, §7.3 |
| Regular audits for bias and fairness | AI Fairness Policy; AI Risk Management Policy | §6; §5 |
| Documented human oversight and appeal pathways | AI Safety Policy; AI Transparency Policy | §4.1, §7; §6.1 |
| Risk classification for high-impact use cases | AI Risk Management Policy | §4, §6 |
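
To illustrate the last row of the table, here is a minimal sketch of how risk classification for high-impact use cases might be operationalized. The tier names and criteria below are assumptions made for illustration; the AI Risk Management Policy (§4, §6) defines the authoritative scheme.

```python
# Illustrative risk-tier classifier for AI use cases. The tiers and criteria
# are hypothetical; consult the AI Risk Management Policy for the actual rules.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    affects_clinical_care: bool      # influences diagnosis or treatment
    fully_automated: bool            # acts without routine human review
    uses_protected_attributes: bool  # race, sex, language, age, disability

def risk_tier(uc: AIUseCase) -> str:
    if uc.affects_clinical_care and uc.fully_automated:
        return "high"      # e.g. autonomous triage: fullest set of controls
    if uc.affects_clinical_care or uc.uses_protected_attributes:
        return "elevated"  # bias audits and documented human oversight
    return "standard"      # baseline documentation and monitoring

# A post-surgical risk model: clinical impact, but a human stays in the loop.
print(risk_tier(AIUseCase("post-surgical risk model", True, False, True)))
# -> elevated
```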

Detailed Example: Fairness in Clinical AI Tools

Imagine a hospital uses an AI tool to predict which patients are at risk for complications after surgery. If the model was trained on data that under-represents patients from certain racial or socioeconomic backgrounds, it may give less accurate predictions for those patients. This could lead to unequal access to post-surgical care or preventative interventions—an outcome that directly violates Section 1557.

The Pacific AI suite helps mitigate this by requiring:

  • Fairness testing disaggregated by race, gender, and language (AI Fairness Policy §6.2)
  • Documentation of training data and audit results (AI System Lifecycle Policy §7.1)
  • Human review before deployment in clinical settings (AI Safety Policy §4.1)

These requirements create a repeatable framework for equitable design and use of AI in healthcare.
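
As a concrete illustration of the first requirement, here is a minimal sketch of disaggregated fairness testing, assuming scored predictions are already collected in a pandas DataFrame. The column names, the 0.5 decision threshold, and the focus on recall (a missed complication is the costly error here) are illustrative choices, not the procedure prescribed by the Fairness Policy.

```python
# Sketch: per-subgroup recall and AUC for a complication-risk model, in the
# spirit of AI Fairness Policy §6.2. Columns and threshold are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def disaggregated_report(df: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Report recall/AUC per subgroup; expects y_true and y_score columns."""
    overall_recall = recall_score(df["y_true"], df["y_score"] >= 0.5)
    rows = []
    for col in group_cols:  # e.g. ["race", "gender", "language_pref"]
        for value, sub in df.groupby(col):
            rows.append({
                "group": f"{col}={value}",
                "n": len(sub),
                "recall": recall_score(sub["y_true"], sub["y_score"] >= 0.5),
                # AUC is undefined if a subgroup has only one outcome class.
                "auc": roc_auc_score(sub["y_true"], sub["y_score"])
                       if sub["y_true"].nunique() == 2 else float("nan"),
            })
    report = pd.DataFrame(rows)
    # Positive gap = the model misses complications more often for this group.
    report["recall_gap"] = overall_recall - report["recall"]
    return report.sort_values("recall_gap", ascending=False)
```

A persistent positive recall gap for any subgroup is exactly the failure mode in the scenario above, and it is the kind of finding that should trigger the documentation and human-review requirements before deployment.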

Conclusion

AI systems used in healthcare must be designed not only for accuracy and efficiency, but also for equity. Section 1557 of the ACA sets a clear legal expectation that no patient should be excluded, harmed, or disadvantaged by AI based on race, language, gender identity, age, or disability. As AI technologies become more deeply embedded in care delivery, payment systems, and patient communication tools, the risks of discrimination will only increase, especially when these systems are opaque or trained on biased data.

Organizations cannot treat compliance with ACA 1557 as a one-time review or checklist exercise. Instead, they must take a systematic, policy-driven approach that embeds fairness, accessibility, and transparency into every stage of AI system development and deployment. This is where the Pacific AI Governance Policy Suite provides tremendous value. It translates legal obligations into operational procedures, role-based responsibilities, and documented audit trails that support both proactive prevention and responsive mitigation.

Adopting the Pacific AI suite not only helps organizations align with ACA 1557 but also improves internal accountability and public trust. It enables healthcare providers, payers, and health tech companies to demonstrate their commitment to equitable AI, not just in words but in policy and practice. By doing so, they create safer, more inclusive healthcare systems that serve the full diversity of their patient populations.

Download the full Pacific AI suite at https://pacific.ai

Need help mapping your AI systems to ACA 1557? Contact [email protected]