Introduction
In today’s rapidly evolving AI landscape, organizations face a pressing business challenge: ensuring that their AI products not only operate legally but also comply with the many acceptable use policies imposed by underlying service providers. Each leading AI platform, whether an open-source library, a commercial API, or a cloud-based ML service, maintains its own set of “acceptable use” restrictions. At the same time, federal and state laws, such as privacy statutes (e.g., HIPAA, CCPA) and emerging “deepfake” bans, continue to evolve.
Staying compliant demands continuous monitoring of new legislation and of provider updates that prohibit specific content categories (e.g., disallowed health advice, illicit behavior, biased decisions). Manually aggregating all these requirements into a single, actionable policy suite invites omissions, leaving an AI deployment exposed to regulatory enforcement or contract violations. To mitigate these risks, model cards (concise, structured “nutrition labels” for AI) have emerged as a standardized mechanism for packaging transparency information.
In this post, we:
1. Explain the ONC’s HTI-1 “Algorithm Transparency” rule in detail.
2. Compare two proposed approaches, the CHAI Applied Model Card and DIHI’s “Model Facts” v2 label, highlighting their structure, content requirements, and alignment with HTI-1’s 31 source attributes.
3. Provide a summary and recommended next steps for healthcare organizations, with a special call-out to the Pacific AI Policy Suite, which includes an AI Transparency Policy designed to help adopters conform to relevant legislation, regulations, and industry standards.
HTI-1 Rule Overview (Algorithm Transparency)
The HTI-1 final rule (Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing), issued by ONC on December 13, 2023, implements key provisions of the 21st Century Cures Act to modernize the Health IT Certification Program. A centerpiece of HTI-1 is its Algorithm Transparency requirement, which takes effect on December 31, 2024.
Under HTI-1, any “Predictive Decision Support Intervention” (Predictive DSI) embedded in Certified Health IT must provide end users with a set of “source attributes”—technical performance and development details that allow clinicians and health system administrators to assess the fairness, appropriateness, validity, effectiveness, and safety (FAVES) of an AI model before relying on it in patient care. Specifically, ONC requires EHR vendors distributing Predictive DSIs to disclose 31 distinct attributes, including but not limited to:
- Model Identity: Model name, version, developer name, release date.
- Intended Use & Use Cases: Specific clinical scenarios, patient populations, and care settings for which the model was developed.
- Training & Validation Data: Source and characteristics of datasets used during model development (e.g., number of patients, institutions, demographics, data collection period).
- Performance Metrics: Primary evaluation metrics (e.g., AUROC, sensitivity, specificity), stratified by key subgroups (e.g., race, age, sex) to highlight potential performance variability.
- Bias & Limitations: Known biases, out-of-scope use cases, failure modes, and disclaimers about generalization to new populations.
- Risk Mitigation & Monitoring: Post-deployment monitoring plans, retraining procedures, model update cadence, and mechanisms for user feedback.
- Regulatory & Compliance Status: Certification edition, evidence of compliance with applicable FDA guidance (if a device), and alignment with privacy laws (e.g., HIPAA).
For a complete list of all 31 source attributes, see ONC’s HTI-1 Final Rule Overview Q&A (PDF). ONC characterizes this requirement as a first-of-its-kind nationwide mandate for algorithm transparency in healthcare.
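As a concrete illustration, the source-attribute disclosures described above can be captured in a structured record that travels with each Predictive DSI. The sketch below uses a hypothetical subset of the 31 attributes; the field names are illustrative and do not reproduce ONC’s official wording.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical subset of HTI-1 source attributes. Field names are
# illustrative; they are not ONC's official attribute wording.
@dataclass
class SourceAttributes:
    model_name: str
    model_version: str
    developer: str
    release_date: str                  # ISO 8601, e.g. "2025-01-15"
    intended_use: str                  # clinical scenario and care setting
    target_population: str
    training_data_description: str     # sources, date range, demographics
    performance_metrics: dict          # e.g. {"AUROC": 0.87, "sensitivity": 0.81}
    subgroup_metrics: dict             # metrics stratified by race, age, sex
    known_limitations: List[str] = field(default_factory=list)
    monitoring_plan: str = ""          # post-deployment monitoring and update cadence
```

A record like this can then feed whichever model-card format an organization adopts, keeping the disclosures version-controlled alongside the model itself.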
Key takeaways from the HTI-1 rule:
- Applicability: Only predictive DSIs embedded within certified Health IT modules fall under the 31-attribute disclosure rule.
- User-Accessible Model Card: Vendors must make the 31 source attributes publicly available through a model card or “Model Facts”-style label, accessible via the EHR user interface or a public website.
- Update & Revision: Whenever a model’s version is updated, the source attributes must be revised and reposted.
- Enforcement: ONC will perform post-market surveillance to ensure listed attributes remain accurate. Failure to comply can jeopardize Health IT certification.
By mandating these disclosures, ONC intends to promote fairness, patient safety, health equity, and clinician trust in predictive AI/ML tools.
CHAI Applied Model Card Approach
The Coalition for Health AI (CHAI), a public-private consortium of health systems, academic medical centers, and technology developers, has developed an Applied Model Card template aligned closely with HTI-1’s 31 source attributes. CHAI’s Model Card functions as a “nutrition label” for healthcare AI, enabling procurement teams, clinicians, and IT leaders to evaluate model transparency at a glance.
Key components of the CHAI Applied Model Card include:
- Developer & Model Identification: Model Name & Version, Developer/Organization, Release Date & Last Update.
- Intended Use & Target Population: Clinical Indication, Target Population, Setting & Workflow Integration.
- Performance & Validation: Primary Metrics, Validation Cohorts, Subgroup Analyses.
- Data & Methodology: Training Data Sources, Feature Engineering, Model Architecture.
- Bias & Limitations: Known Biases, Out-of-Scope Use Cases.
- Deployment & Monitoring: Versioning & Update Cadence, Post-Deployment Monitoring, User Feedback Loops.
- Regulatory & Compliance: Certification Status, Privacy & Security, Ethical Oversight.
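For teams that maintain these fields in a structured form, a small rendering step can turn them into the “nutrition label” view described above. The following is a minimal sketch, assuming a simple dictionary layout of CHAI-style sections; the model name, metric values, and rendering function are hypothetical and not part of any official CHAI tooling.

```python
# Minimal sketch: render CHAI-style model card fields as Markdown.
# The section names mirror the bullets above; the function and the
# example values are illustrative only.
def render_model_card(card: dict) -> str:
    lines = [f"# Model Card: {card['model_name']} v{card['model_version']}"]
    for section, content in card["sections"].items():
        lines.append(f"\n## {section}")
        for label, value in content.items():
            lines.append(f"- **{label}:** {value}")
    return "\n".join(lines)

example = render_model_card({
    "model_name": "SepsisRisk",          # hypothetical model
    "model_version": "2.1.0",
    "sections": {
        "Intended Use & Target Population": {
            "Clinical Indication": "Early sepsis risk screening",
            "Care Setting": "Adult inpatient wards",
        },
        "Performance & Validation": {
            "Primary Metric": "AUROC 0.86 (placeholder value)",
        },
    },
})
print(example)
```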
DIHI “Model Facts” v2 Label for HTI-1 Compliance
The Duke Institute for Health Innovation (DIHI) pioneered the Model Facts Label in early 2020 and updated it in January 2025 to align with HTI-1’s requirements. DIHI’s “Model Facts” v2 label is provided under a Creative Commons Attribution 4.0 license, allowing any organization to adapt it for their AI products.
Key elements of the DIHI “Model Facts” v2 label include:
- Model Overview & Developers: Model Name, Version & Release Date, Developing Institution, Model Type & Method.
- Intended Use & Population: Clinical Purpose, Population & Setting, Out-of-Scope Use Cases.
- Data & Training: Number of Patients & Encounters, Data Scope, Data Governance.
- Performance & Validation: Primary Metrics, Validation Streams, Subgroup Performance.
- Bias & Limitations: Known Biases, Missing Data Impact, Risk Mitigation.
- Deployment & Risk Management: Version Control & Updates, Monitoring & Calibration, Feedback Mechanism.
- Regulatory & Certification: HTI-1 Source Attributes Checklist, Privacy Compliance, Ethical Oversight.
- References & Further Reading: Links to peer-reviewed publications, ONC HTI-1 Final Rule, External validation studies.
Comparison: CHAI vs. DIHI Approaches
Below is a structured comparison of the CHAI Applied Model Card and DIHI’s Model Facts v2 label, highlighting how each addresses HTI-1’s key transparency requirements.
Feature | CHAI Applied Model Card | DIHI Model Facts v2 |
---|---|---|
Format & Distribution | Web-based template (GitHub) with instructions for EHR UI integration. JSON/Markdown–friendly for direct embedding into product documentation. Recommended embedding via hyperlink or native EHR “Help” tab, with downloadable PDF option. | Single-page PDF template (Creative Commons). Designed for print or digital inclusion as a “label” in sales decks, websites, and EHR UI. Prescriptive format that enumerates all 31 HTI-1 attributes in a side column. |
Model Identity & Versioning | Lists Model Name, Version, Developer, Release Date. Recommends semantic versioning and linking to GitHub changelog. | Header block with Model Name, Version, Release Date, Contact Email. Explicit row for Version History & Change Log. |
Intended Use & Target Population | Detailed subsection specifying Clinical Indication, Target Population, Care Setting, Intended Workflow Integration. Calls out Out-of-Scope Uses. | Rows for Clinical Purpose, Target Population & Eligibility, Caveats & Contraindications. |
Training & Validation Data | Describes Training Data Sources (number of patients, institutions, date ranges) and Validation Cohorts. Encourages separate appendix for full data dictionaries. | Training Data & Sources block listing Number of Patients, Geographic Coverage, Timeframe, Data Types. Validation Strategy block with Internal Holdout and External Validation details. |
Performance & Metrics | Provides Primary AUROC, Sensitivity, Specificity plus Stratified Metrics (Age, Gender, Race). Encourages 95% CI and calibration curves in appendix. | Performance Metrics grid with Metric Name, Value, 95% CI, Subgroup Performance. Row for Primary Threshold & Operating Point. |
Bias & Limitations | Explicit section listing Known Performance Disparities, Data Gaps, Potential Sources of Bias. Calls out underrepresented groups. | Bias & Limitations block listing Subgroup Underperformance, Missing Data Impact, Known Confounders. Risk Mitigation row mapping to HTI-1 attributes. |
Deployment & Monitoring | Outlines Version Control, Retraining Schedule, Real-time Monitoring Dashboards, Feedback Loops. Recommends automated drift detection scripts (a minimal drift-check sketch follows this table). | Rows for Versioning & Updates with Next Retraining Date, Post-Deployment Monitoring (e.g., monthly calibration checks). Feedback Mechanism row. |
Regulatory & Certification Status | Lists HTI-1 Certification Status, FDA Submission, Data Use Agreements, Privacy Compliance, Ethical Oversight. Recommends hyperlink to ONC Certification ID. | Certification & Compliance block with HTI-1 status, FDA 510(k) number, Privacy & Security info. Embeds 31 Source Attributes Checklist. |
References & External Links | Appendix of references to peer-reviewed publications, GitHub Repo, External Validation Studies, ONC HTI-1 Rule Text. Suggests live dashboard links. | References section listing publications, ONC HTI-1 Final Rule PDF, DIHI Model Facts v1 publication, External Validation Publications. Hyperlinks included. |
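Both templates call for ongoing post-deployment monitoring (the Deployment & Monitoring row above). The sketch below shows one way a team might flag discrimination drift by comparing recent production AUROC against the baseline value reported on the model card; the 0.05 tolerance and the choice of AUROC are assumptions, not requirements of either template or of HTI-1.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Sketch of a drift check: compare AUROC on recent production data against
# the baseline reported on the model card and flag meaningful degradation.
# The 0.05 tolerance is an assumed internal policy, not an HTI-1 rule.
def auroc_drift_alert(baseline_auroc: float,
                      y_true: np.ndarray,
                      y_score: np.ndarray,
                      tolerance: float = 0.05) -> bool:
    current = roc_auc_score(y_true, y_score)
    drifted = (baseline_auroc - current) > tolerance
    if drifted:
        print(f"ALERT: AUROC dropped from {baseline_auroc:.3f} to {current:.3f}; "
              "review the model and update its model card.")
    return drifted
```

A check like this can run on a monthly schedule (matching the calibration cadence DIHI suggests) and feed the monitoring dashboards CHAI recommends.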
Summary & Next Steps for Organizations
Healthcare organizations and Health IT developers that aim to comply with HTI-1’s algorithm transparency requirements have two robust, publicly available model-card templates at their disposal: CHAI’s Applied Model Card and DIHI’s Model Facts v2 label. Both templates are designed to capture all 31 source attributes mandated by HTI-1, covering model identity, intended use, performance, bias mitigations, deployment plans, and regulatory status.
Key Next Actions:
- Inventory Existing Predictive DSIs: Identify all AI/ML tools embedded in your EHR or clinical workflows that meet the “predictive DSI” definition under HTI-1. Catalog each model’s current documentation.
- Select & Customize a Model Card Template: For organizations with developer capacity and a DevOps pipeline, adopt CHAI’s Applied Model Card. Customize JSON schemas and embed links to validation dashboards. For smaller teams, use DIHI’s Model Facts v2 label and complete the 31-attribute checklist row by row.
- Populate & Validate Source Attributes: Form cross-functional teams to gather training/validation data details, subgroup analyses, and performance metrics. Conduct subgroup performance evaluations and document results clearly.
- Integrate Model Card into Deployment Pipeline: Embed model cards in the EHR via hyperlinks or inline viewers. Ensure that model updates trigger automatic updates to the model card (a minimal CI check sketch appears at the end of this section).
- Implement Post-Deployment Monitoring: Build dashboards to track calibration drift, false positive/negative rates, and clinician feedback. Update model cards quarterly or as required by HTI-1.
- Leverage the Pacific AI Policy Suite: Pacific AI’s AI Transparency Policy helps adopters maintain a living model card repository that meets HTI-1’s requirements, FDA guidance, HIPAA/CCPA, and other industry standards. Subscribing to Pacific AI’s policy updates helps organizations keep pace with evolving transparency mandates.
By integrating Pacific AI’s AI Transparency Policy, organizations can ensure that their transparency efforts remain aligned with evolving federal regulations, state laws, and industry best practices.
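One way to keep a model card in lockstep with deployments, as suggested in the pipeline-integration step above, is a small CI-style check that fails a release when the deployed model version no longer matches the published card or when required attributes are missing. The JSON field names and the required-attribute list below are assumptions for illustration, not an official HTI-1 schema.

```python
import json

# Hypothetical CI gate: block a release if the deployed model version does
# not match the published model card, or if required source attributes are
# missing. Field names and the required-attribute list are illustrative.
REQUIRED_ATTRIBUTES = [
    "model_name", "model_version", "developer", "release_date",
    "intended_use", "training_data_description",
    "performance_metrics", "known_limitations", "monitoring_plan",
]

def check_model_card(card_path: str, deployed_version: str) -> list:
    with open(card_path) as f:
        card = json.load(f)
    problems = []
    if card.get("model_version") != deployed_version:
        problems.append(
            f"Card version {card.get('model_version')} != deployed {deployed_version}")
    missing = [attr for attr in REQUIRED_ATTRIBUTES if not card.get(attr)]
    if missing:
        problems.append(f"Missing attributes: {', '.join(missing)}")
    return problems

if __name__ == "__main__":
    issues = check_model_card("model_card.json", deployed_version="2.1.0")
    if issues:
        raise SystemExit("Model card check failed:\n" + "\n".join(issues))
```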
Conclusion
AI model transparency in healthcare is no longer optional; it is a regulatory requirement under ONC’s HTI-1 rule and an ethical imperative for patient safety and equity. By leveraging CHAI’s Applied Model Card or DIHI’s Model Facts v2 label and adopting Pacific AI’s AI Transparency Policy, organizations can ensure their AI products are transparent, fair, and trustworthy.
Next Steps:
- Choose and customize a model card template.
- Form a cross-functional team to collect source-attribute data.
- Embed model cards into clinical workflows.
- Subscribe to Pacific AI’s Policy Suite for ongoing transparency-policy updates.
- Conduct regular audits to verify HTI-1 compliance.